Science.gov

Sample records for distributed optimization approach

  1. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
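
    As an illustration of the flavor of algorithm described above, the sketch below has each agent maintain a probability over its own binary variable, estimate expected cost by sampling the other agents, and apply a Boltzmann update whose temperature is annealed. The 3-SAT instance generator, parameter values, and annealing schedule are assumptions for demonstration, not the paper's actual method.

```python
import math
import random

def make_3sat(n_vars, n_clauses, rng):
    # each clause: three (variable, negated) literals
    return [[(rng.randrange(n_vars), rng.random() < 0.5) for _ in range(3)]
            for _ in range(n_clauses)]

def n_unsat(clauses, x):
    # a clause is unsatisfied when every one of its literals is false
    return sum(all(x[v] == neg for v, neg in cl) for cl in clauses)

def solve(n_vars=30, n_clauses=90, iters=60, samples=20, seed=0):
    rng = random.Random(seed)
    clauses = make_3sat(n_vars, n_clauses, rng)
    p = [0.5] * n_vars          # each agent's distribution over its variable
    T = 2.0                     # annealing temperature
    for _ in range(iters):
        for i in range(n_vars):
            # Monte Carlo estimate of the expected cost for x_i = 0 and 1,
            # sampling the other agents from their current distributions
            cost = [0.0, 0.0]
            for _ in range(samples):
                x = [rng.random() < p[j] for j in range(n_vars)]
                for b in (0, 1):
                    x[i] = bool(b)
                    cost[b] += n_unsat(clauses, x)
            c0, c1 = cost[0] / samples, cost[1] / samples
            m = min(c0, c1)     # stabilize the exponentials
            w0 = math.exp(-(c0 - m) / T)
            w1 = math.exp(-(c1 - m) / T)
            p[i] = w1 / (w0 + w1)   # Boltzmann (maxent-Lagrangian) update
        T *= 0.95               # anneal toward the optimal pure strategy
    best = [pi > 0.5 for pi in p]
    return n_unsat(clauses, best)

print("unsatisfied clauses:", solve())
```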

  2. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
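
    As a rough illustration of the third step, the sketch below runs a discrete differential evolution over pipe-size indices, with the initial population seeded near a given set of continuous diameters standing in for the NLP output. The cost function, size table, and seed diameters are invented placeholders, not a hydraulic model.

```python
import random

SIZES = [100, 150, 200, 250, 300, 350, 400]   # available diameters (mm)

def cost(diams):
    # toy placeholder for pipe cost; a real run would also penalize
    # hydraulic-constraint violations from a network solver
    return sum(d ** 1.5 for d in diams)

def nearest_idx(d):
    return min(range(len(SIZES)), key=lambda i: abs(SIZES[i] - d))

def de_seeded(cont_diam, pop=20, gens=100, F=0.8, CR=0.9, seed=1):
    rng = random.Random(seed)
    n = len(cont_diam)
    # seed every member in the neighbourhood of the continuous solution
    P = [[min(len(SIZES) - 1, max(0, nearest_idx(d) + rng.randint(-1, 1)))
          for d in cont_diam] for _ in range(pop)]
    score = lambda member: cost([SIZES[g] for g in member])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([P[j] for j in range(pop) if j != i], 3)
            trial = [int(round(a[k] + F * (b[k] - c[k])))
                     if rng.random() < CR else P[i][k] for k in range(n)]
            trial = [min(len(SIZES) - 1, max(0, t)) for t in trial]
            if score(trial) < score(P[i]):    # greedy DE selection
                P[i] = trial
    return [SIZES[g] for g in min(P, key=score)]

print(de_seeded([120.0, 180.0, 260.0, 310.0]))
```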

  3. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  4. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  5. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free, centralized optimization algorithm.

  6. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
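
    A minimal harmony search sketch is shown below, with the objective standing in for leakage as a function of valve settings; the HMCR/PAR/bandwidth values and the toy objective are illustrative assumptions, and the hydraulic-constraint penalties mentioned above would be added inside the objective.

```python
import random

def harmony_search(obj, lo, hi, dim, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000, seed=0):
    rng = random.Random(seed)
    # harmony memory: hms candidate vectors of valve settings
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:          # draw from memory ...
                x = rng.choice(hm)[j]
                if rng.random() < par:       # ... and maybe pitch-adjust
                    x += rng.uniform(-bw, bw)
            else:                            # or improvise a fresh value
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: obj(hm[i]))
        if obj(new) < obj(hm[worst]):        # replace the worst harmony
            hm[worst] = new
    return min(hm, key=obj)

# toy stand-in for "leakage given valve settings"; penalty terms for
# violated hydraulic constraints would be added to this objective
leak = lambda v: sum((x - 2.5) ** 2 for x in v)
print(harmony_search(leak, 0.0, 5.0, dim=4))
```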

  7. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.

  8. Pinning distributed synchronization of stochastic dynamical networks: a mixed optimization approach.

    PubMed

    Tang, Yang; Gao, Huijun; Lu, Jianquan; Kurths, Jürgen

    2014-10-01

    This paper is concerned with the problem of pinning synchronization of nonlinear dynamical networks with multiple stochastic disturbances. Two kinds of pinning schemes are considered: 1) pinned nodes are fixed along the time evolution and 2) pinned nodes are switched from time to time according to a set of Bernoulli stochastic variables. Using Lyapunov function methods and stochastic analysis techniques, several easily verifiable criteria are derived for the problem of pinning distributed synchronization. For the case of fixed pinned nodes, a novel mixed optimization method is developed to select the pinned nodes and find feasible solutions, which is composed of a traditional convex optimization method and a constraint optimization evolutionary algorithm. For the case of switching pinning scheme, upper bounds of the convergence rate and the mean control gain are obtained theoretically. Simulation examples are provided to show the advantages of our proposed optimization method over previous ones and verify the effectiveness of the obtained results. PMID:25291734

  9. A new systems approach to optimizing investments in gas production and distribution

    SciTech Connect

    Dougherty, E.L.

    1983-03-01

    This paper presents a new analytical approach for determining the optimal sequence of investments to make in each year of an extended planning horizon in each of a group of reservoirs producing gas and gas liquids through an interconnected trunkline network and a gas processing plant. The optimality criterion is to maximize net present value while satisfying fixed offtake requirements for dry gas, but with no limits on gas liquids production. The planning problem is broken into n + 2 separate but interrelated subproblems: gas reservoir development and production, gas flow in a trunkline gathering system, and plant separation activities to remove undesirable gas (CO2) or to recover valuable liquid components. The optimal solution for each subproblem depends upon the optimal solutions for all of the other subproblems, so that the overall optimal solution is obtained iteratively. The iteration technique used is based upon a combination of heuristics and the decomposition algorithm of mathematical programming. Each subproblem is solved once during each overall iteration. In addition to presenting some mathematical details of the solution approach, this paper describes a computer system which has been developed to obtain solutions.

  10. Learning Based Approach for Optimal Clustering of Distributed Program's Call Flow Graph

    NASA Astrophysics Data System (ADS)

    Abofathi, Yousef; Zarei, Bager; Parsa, Saeed

    Optimal clustering of the call flow graph for reaching maximum concurrency in the execution of distributable components is an NP-complete problem. Learning automata (LAs) are search tools which are used for solving many NP-complete problems. In this paper a learning-based algorithm is proposed for optimal clustering of the call flow graph and appropriate distribution of programs at the network level. The algorithm uses the learning feature of LAs to search the state space. It has been shown that the speed of reaching a solution increases remarkably when LAs are used in the search process, and that they also prevent the algorithm from being trapped in local minima. Experimental results show the superiority of the proposed algorithm over others.

  11. Anthropogenic carbon estimates in the Weddell Sea using an optimized CFC based transit time distribution approach

    NASA Astrophysics Data System (ADS)

    Huhn, Oliver; Hauck, Judith; Hoppema, Mario; Rhein, Monika; Roether, Wolfgang

    2010-05-01

    We use a 20 year time series of chlorofluorocarbon (CFC) observations along the Prime Meridian to determine the temporal evolution of anthropogenic carbon (Cant) in the two deep boundary currents which enter the Weddell Basin in the south and leave it in the north. The Cant is inferred from transit time distributions (TTDs), with parameters (mean transit time and dispersion) adjusted to the observed mean CFC histories in these recently ventilated deep boundary currents. We optimize that "classic" TTD approach by accounting for water exchange of the boundary currents with an old, but not CFC- and Cant-free, interior reservoir. This reservoir, in turn, is replenished by the boundary currents, which we parameterize as first-order mixing. Furthermore, we account for the time-dependence of the CFC and Cant source water saturation. A conceptual model of an ideally saturated mixed layer and exchange with adjacent water is adjusted to observed CFC saturations in the source regions. The time-dependence for the CFC saturation appears to be much weaker than for Cant. We find a mean transit time of 14 years and an advection/dispersion ratio of 5 for the deep southern boundary current. For the northern boundary current we find a mean transit time of 8 years and a much higher advection/dispersion ratio of 140. The fractions directly supplied by the boundary currents are in both cases on the order of 10%, while 90% are admixed from the interior reservoirs, which are replenished with a renewal time of about 14 years. We determine Cant ~11 μmol/kg (reference year 2006) in the deep water entering the Weddell Sea in the south (~2.1 Sv), and 12 μmol/kg for the deep water leaving the Weddell Sea in the north (~2.7 Sv). These Cant estimates are, however, upper limits, considering that the Cant source water saturation is likely to be lower than that for the CFCs. Comparison with Cant intrusion estimates based on extended multiple linear regression (using potential temperature, salinity, oxygen, and
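
    For readers unfamiliar with the "classic" TTD idea this record extends, the sketch below computes an interior concentration as the convolution of a surface history with an inverse-Gaussian TTD. The surface history and width parameter are invented for illustration; only the 14-year mean transit time is taken from the abstract, and the paper's reservoir-exchange extension is not reproduced.

```python
import numpy as np

def inverse_gaussian_ttd(tau, gamma, delta):
    # gamma: mean transit time (yr); delta: dispersion width (yr)
    tau = np.asarray(tau, dtype=float)
    return np.sqrt(gamma**3 / (4 * np.pi * delta**2 * tau**3)) * \
        np.exp(-gamma * (tau - gamma) ** 2 / (4 * delta**2 * tau))

years = np.arange(1950, 2007)                     # history up to reference year
surface = np.maximum(0.0, (years - 1950) * 0.25)  # invented rising history

tau = np.arange(1, 200)                           # transit times, 1..199 yr
g = inverse_gaussian_ttd(tau, gamma=14.0, delta=7.0)  # gamma = 14 yr; delta ad hoc
g /= g.sum()                                      # normalize the discrete TTD

# interior value in 2006: history averaged over the transit time distribution
hist = lambda y: np.interp(y, years, surface, left=0.0)
print("interior concentration, 2006:", round(float(np.sum(g * hist(2006 - tau))), 2))
```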

  12. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
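
    A minimal sketch of this kind of hybrid is shown below: a crude genetic algorithm proposes starting points and scipy's truncated-Newton solver (method="TNC") refines the best one. The objective is a toy stand-in for the groundwater model's performance function, not the actual model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def misfit(p):
    # toy stand-in for the model performance function (e.g. sum of squared
    # head residuals); the sine bumps create local minima for the GA to escape
    return np.sum((p - np.array([2.0, -1.0, 0.5])) ** 2) + 0.1 * np.sin(5 * p).sum()

# --- GA stage: mutation-only evolution of a small population ---
pop = rng.uniform(-5, 5, size=(20, 3))
for _ in range(30):
    fit = np.apply_along_axis(misfit, 1, pop)
    parents = pop[np.argsort(fit)[:10]]            # keep the fittest half
    children = parents + rng.normal(scale=0.5, size=parents.shape)
    pop = np.vstack([parents, children])

# --- truncated-Newton refinement from the best GA individual ---
best = pop[np.argmin(np.apply_along_axis(misfit, 1, pop))]
res = minimize(misfit, best, method="TNC")
print("estimated parameters:", res.x)
```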

  13. Dealing with Noisy Absences to Optimize Species Distribution Models: An Iterative Ensemble Modelling Approach

    PubMed Central

    Lauzeral, Christine; Grenouillet, Gaël; Brosse, Sébastien

    2012-01-01

    Species distribution models (SDMs) are widespread in ecology and conservation biology, but their accuracy can be lowered by non-environmental (noisy) absences that are common in species occurrence data. Here we propose an iterative ensemble modelling (IEM) method to deal with noisy absences and hence improve the predictive reliability of ensemble modelling of species distributions. In the IEM approach, outputs of a classical ensemble model (EM) were used to update the raw occurrence data. The revised data was then used as input for a new EM run. This process was iterated until the predictions stabilized. The outputs of the iterative method were compared to those of the classical EM using virtual species. The IEM process tended to converge rapidly. It increased the consensus between predictions provided by the different methods as well as between those provided by different learning data sets. Comparing IEM and EM showed that for high levels of non-environmental absences, iterations significantly increased prediction reliability measured by the Kappa and TSS indices, as well as the percentage of well-predicted sites. Compared to EM, IEM also reduced biases in estimates of species prevalence. Compared to the classical EM method, IEM improves the reliability of species predictions. It particularly deals with noisy absences that are replaced in the data matrices by simulated presences during the iterative modelling process. IEM thus constitutes a promising way to increase the accuracy of EM predictions of difficult-to-detect species, as well as of species that are not in equilibrium with their environment. PMID:23166691
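
    The IEM loop itself is simple to sketch. Below, an ensemble of two classifiers is fitted, absences that the consensus confidently predicts as presences are relabelled, and the process repeats until predictions stabilize. The virtual-species data, the 0.7 threshold, and the member models are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # environmental predictors
true_p = X[:, 0] + X[:, 1] > 0                   # virtual species presence
noise = (rng.random(500) < 0.3) & true_p         # presences recorded as absent
y = true_p & ~noise                              # observed, noisy occurrences

for it in range(10):
    members = [RandomForestClassifier(random_state=0).fit(X, y),
               LogisticRegression(max_iter=1000).fit(X, y)]
    consensus = np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
    y_new = y | (consensus > 0.7)                # flip likely-false absences
    if np.array_equal(y_new, y):                 # predictions stabilized
        break
    y = y_new

print("iterations:", it + 1, "| modelled prevalence:", y.mean())
```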

  14. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  15. Distributed Energy Resource Optimization Using a Software as Service (SaaS) Approach at the University of California, Davis Campus

    SciTech Connect

    Stadler, Michael; Marnay, Chris; Donadee, Jon; Lai, Judy; Megel, Olivier; Bhattacharya, Prajesh; Siddiqui, Afzal

    2011-02-06

    Together with OSIsoft LLC as its private sector partner and matching sponsor, the Lawrence Berkeley National Laboratory (Berkeley Lab) won an FY09 Technology Commercialization Fund (TCF) grant from the U.S. Department of Energy. The goal of the project is to commercialize Berkeley Lab's optimizing program, the Distributed Energy Resources Customer Adoption Model (DER-CAM) using a software as a service (SaaS) model with OSIsoft as its first non-scientific user. OSIsoft could in turn provide optimization capability to its software clients. In this way, energy efficiency and/or carbon minimizing strategies could be made readily available to commercial and industrial facilities. Specialized versions of DER-CAM dedicated to solving OSIsoft's customer problems have been set up on a server at Berkeley Lab. The objective of DER-CAM is to minimize the cost of technology adoption and operation or carbon emissions, or combinations thereof. DER-CAM determines which technologies should be installed and operated based on specific site load, price information, and performance data for available equipment options. An established user of OSIsoft's PI software suite, the University of California, Davis (UCD), was selected as a demonstration site for this project. UCD's participation in the project is driven by its motivation to reduce its carbon emissions. The campus currently buys electricity economically through the Western Area Power Administration (WAPA). The campus does not therefore face compelling cost incentives to improve the efficiency of its operations, but is nonetheless motivated to lower the carbon footprint of its buildings. Berkeley Lab attempted to demonstrate a scenario wherein UCD is forced to purchase electricity on a standard time-of-use tariff from Pacific Gas and Electric (PG&E), which is a concern to Facilities staff. Additionally, DER-CAM has been set up to consider the variability of carbon emissions throughout the day and seasons. Two distinct analyses of

  16. Retrieval of ice crystals' mass from ice water content and particle distribution measurements: a numerical optimization approach

    NASA Astrophysics Data System (ADS)

    Coutris, Pierre; Leroy, Delphine; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter

    2016-04-01

    A new method to retrieve cloud water content from in-situ measured 2D particle images from optical array probes (OAPs) is presented. With the overall objective to build a statistical model of crystals' mass as a function of their size, environmental temperature and crystal microphysical history, this study presents the methodology to retrieve the mass of crystals sorted by size from 2D images using a numerical optimization approach. The methodology is validated using two datasets of in-situ measurements gathered during two airborne field campaigns held in Darwin, Australia (2014), and Cayenne, French Guiana (2015), in the frame of the High Altitude Ice Crystals (HAIC) / High Ice Water Content (HIWC) projects. During these campaigns, a Falcon F-20 research aircraft equipped with state-of-the-art microphysical instrumentation sampled numerous mesoscale convective systems (MCSs) in order to study dynamical and microphysical properties and processes of high ice water content areas. Experimentally, an isokinetic evaporator probe, referred to as IKP-2, provides a reference measurement of the total water content (TWC), which equals ice water content (IWC) when (supercooled) liquid water is absent. Two optical array probes, namely 2D-S and PIP, produce 2D images of individual crystals ranging from 50 μm to 12840 μm from which particle size distributions (PSDs) are derived. Mathematically, the problem is formulated as an inverse problem in which the crystals' mass is assumed constant over a size class and is computed for each size class from IWC and PSD data: PSD·m = IWC. This problem is solved using a numerical optimization technique in which an objective function is minimized. The objective function is defined as J(m) = ‖PSD·m − IWC‖² + λ·R(m), where the regularization parameter λ and the regularization function R(m) are tuned based on data characteristics. The method is implemented in two steps. First, the method is developed on synthetic crystal populations in
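
    The inversion above can be sketched directly: stacking many (PSD, IWC) samples gives a linear system A·m = b, solved with a Tikhonov smoothness penalty via the normal equations. The synthetic data and the choice of λ below are illustrative, not the paper's tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bins = 200, 15
true_m = 1e-6 * np.linspace(1, 4, n_bins) ** 2.2    # mass per size bin (g)

A = rng.lognormal(mean=2.0, sigma=1.0, size=(n_samples, n_bins))  # PSD rows
b = A @ true_m * (1 + 0.05 * rng.normal(size=n_samples))          # noisy IWC

# second-difference operator: penalizes rough mass-vs-size curves
L = np.diff(np.eye(n_bins), n=2, axis=0)
lam = 1e-2 * np.trace(A.T @ A) / n_bins             # ad hoc regularization weight

# minimize ||A m - b||^2 + lam ||L m||^2 in closed form
m_hat = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
print("max relative error:", float(np.max(np.abs(m_hat - true_m) / true_m)))
```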

  17. Multicriteria optimization of the spatial dose distribution

    SciTech Connect

    Schlaefer, Alexander; Viulet, Tiberiu; Muacevic, Alexander; Fürweger, Christoph

    2013-12-15

    Purpose: Treatment planning for radiation therapy involves trade-offs with respect to different clinical goals. Typically, the dose distribution is evaluated based on a few statistics and dose–volume histograms. Particularly for stereotactic treatments, the spatial dose distribution represents further criteria, e.g., when considering the gradient between subregions of volumes of interest. The authors have studied how to consider the spatial dose distribution using a multicriteria optimization approach. Methods: The authors have extended a stepwise multicriteria optimization approach to include criteria with respect to the local dose distribution. Based on a three-dimensional visualization of the dose, the authors use a software tool allowing interaction with the dose distribution to map objectives with respect to its shape to a constrained optimization problem. Similarly, conflicting criteria are highlighted and the planner decides if and where to relax the shape of the dose distribution. Results: To demonstrate the potential of spatial multicriteria optimization, the tool was applied to a prostate and a meningioma case. For the prostate case, local sparing of the rectal wall and shaping of a boost volume are achieved through local relaxations while maintaining the remaining dose distribution. For the meningioma, target coverage is improved by compromising low dose conformality toward noncritical structures. A comparison of dose–volume histograms illustrates the importance of spatial information for achieving the trade-offs. Conclusions: The results show that it is possible to consider the location of conflicting criteria during treatment planning. In particular, it is possible to conserve already achieved goals with respect to the dose distribution, to visualize potential trade-offs, and to relax constraints locally. Hence, the proposed approach facilitates a systematic exploration of the optimal shape of the dose distribution.

  18. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  19. Hybrid centralized pre-computing/local distributed optimization of shared disjoint-backup path approach to GMPLS optical mesh network intelligent restoration

    NASA Astrophysics Data System (ADS)

    Gong, Qian; Xu, Rong; Lin, Jintong

    2004-04-01

    Wavelength Division Multiplexed (WDM) networks that route optical connections using intelligent optical cross-connects (OXCs) are firmly established as the core constituent of next generation networks. Rapid failure recovery is fundamental to building reliable transport networks. Mesh restoration promises cost effective failure recovery compared with legacy ring networks, and is now seeing large-scale deployment. Many carriers are migrating away from SONET ring restoration for their core transport networks and replacing it with mesh restoration through "intelligent" O-E-O cross-connects (XCs). The mesh restoration is typically provided via two fiber-disjoint paths: a service path and a restoration path. This scheme can restore any single link failure or node failure. With shared mesh restoration, although every service route is assigned a restoration route, no dedicated capacity needs to be reserved for the restoration route, resulting in capacity savings. The restoration approach we propose is Centralized Pre-computing, Local Distributed Optimization, and Shared Disjoint-backup Path. This approach combines the merits of centralized and distributed solutions. It avoids the scalability issues of centralized solutions by using a distributed control plane for disjoint service path computation and restoration path provisioning. Moreover, if the service routes of two demands are disjoint, no single failure will affect both demands simultaneously. This means that the restoration routes of these two demands can share link capacities, because these two routes will not be activated at the same time. This restoration capacity sharing approach thus achieves low restoration capacity and fast restoration speed, while requiring few control plane changes.

  20. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this actual representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each level of return than the mean-variance approach.
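
    A minimal sketch of one median-variance formulation: minimize portfolio variance subject to a target return computed from per-asset medians (used here as a simple linear proxy, since the median of a weighted sum is not the weighted sum of medians). The return data are synthetic, not the Bursa Malaysia stocks.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# heavy-tailed synthetic daily returns for five assets
R = rng.standard_t(df=3, size=(250, 5)) * 0.01 + 0.0005

med = np.median(R, axis=0)            # per-asset median return
cov = np.cov(R, rowvar=False)
target = np.percentile(med, 60)       # required portfolio return level

res = minimize(
    lambda w: w @ cov @ w,            # portfolio variance
    x0=np.full(5, 0.2),
    bounds=[(0, 1)] * 5,              # long-only weights
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                 {"type": "ineq", "fun": lambda w: w @ med - target}],
)
print("weights:", np.round(res.x, 3), "| variance:", float(res.fun))
```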

  1. Optimal dynamic control of resources in a distributed system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Krishna, C. M.; Lee, Yann-Hang

    1989-01-01

    The authors quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function and derive optimal control strategies using Markov decision theory. The control variables treated are quite general; they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of the approach is provided.

  2. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
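
    The optimal codes in question are the Golomb codes; for a geometric source with parameter p, the classic Gallager–Van Voorhis condition picks the smallest m with p^m + p^(m+1) <= 1. The sketch below implements that rule and a Golomb encoder; it is a textbook construction, not code from the paper.

```python
def golomb_m(p):
    # m is optimal when p^m + p^(m+1) <= 1 < p^(m-1) + p^m;
    # take the smallest m satisfying the left inequality
    m = 1
    while p ** m + p ** (m + 1) > 1:
        m += 1
    return m

def golomb_encode(n, m):
    q, r = divmod(n, m)
    out = "1" * q + "0"                      # unary-coded quotient
    b = m.bit_length() - 1
    if (1 << b) == m:                        # m a power of two: plain binary
        return out + format(r, f"0{b}b") if b else out
    cut = (1 << (b + 1)) - m                 # truncated binary remainder
    if r < cut:
        return out + format(r, f"0{b}b")
    return out + format(r + cut, f"0{b + 1}b")

p = 0.8                                      # geometric parameter: P(X = k) ~ p^k
m = golomb_m(p)                              # -> 3 for p = 0.8
for n in range(6):
    print(n, golomb_encode(n, m))
```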

  3. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  4. Quantum optimal control of photoelectron spectra and angular distributions

    NASA Astrophysics Data System (ADS)

    Goetz, R. Esteban; Karamatskou, Antonia; Santra, Robin; Koch, Christiane P.

    2016-01-01

    Photoelectron spectra and photoelectron angular distributions obtained in photoionization reveal important information on, e.g., charge transfer or hole coherence in the parent ion. Here we show that optimal control of the underlying quantum dynamics can be used to enhance desired features in the photoelectron spectra and angular distributions. To this end, we combine Krotov's method for optimal control theory with the time-dependent configuration interaction singles formalism and a splitting approach to calculate photoelectron spectra and angular distributions. The optimization target can account for specific desired properties in the photoelectron angular distribution alone, in the photoelectron spectrum, or in both. We demonstrate the method for hydrogen and then apply it to argon under strong XUV radiation, maximizing the difference of emission into the upper and lower hemispheres, in order to realize directed electron emission in the XUV regime.

  5. Multiobjective optimization approach: thermal food processing.

    PubMed

    Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R

    2009-01-01

    The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field. PMID:20492109

  6. Energy optimization of water distribution systems

    SciTech Connect

    1994-09-01

    Energy costs associated with pumping treated water into the distribution system and boosting water pressures where necessary are among the largest expenditures in the operating budget of a municipality. Due to the size and complexity of Detroit's water transmission system, an energy optimization project has been developed to better manage the flow of water in the distribution system in an attempt to reduce these costs.

  7. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderate-sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees a performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational effort.
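
    The sketch below shows the generic flavor of such a method on a toy consensus problem: ADMM-style local minimizations and dual updates, with the penalty parameter steadily increased as the record describes. The OPF formulation itself is not reproduced; all data are invented.

```python
import numpy as np

# each "node" holds a private quadratic f_i(x) = 0.5 * (x - a_i)^2 and the
# nodes must agree on a common x (consensus); the true optimum is mean(a)
a = np.array([1.0, 4.0, -2.0, 3.0])

x = np.zeros_like(a)     # local copies
z = 0.0                  # consensus variable
u = np.zeros_like(a)     # scaled dual variables
rho = 1.0                # penalty parameter

for _ in range(100):
    x = (a + rho * (z - u)) / (1 + rho)   # closed-form local minimizations
    z = np.mean(x + u)                    # consensus (averaging) step
    u = u + x - z                         # dual update
    rho *= 1.05                           # steadily increased penalty
    u /= 1.05                             # rescale scaled duals to match rho

print("consensus:", round(float(z), 4), "| true optimum:", a.mean())
```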

  8. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  9. Distributed Optimization and Games: A Tutorial Overview

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Johansson, Mikael

    This chapter provides a tutorial overview of distributed optimization and game theory for decision-making in networked systems. We discuss properties of first-order methods for smooth and non-smooth convex optimization, and review mathematical decomposition techniques. A model of networked decision-making is introduced in which a communication structure is enforced that determines which nodes are allowed to coordinate with each other, and several recent techniques for solving such problems are reviewed. We then continue to study the impact of noncooperative games, in which no communication and coordination are enforced. Special attention is given to existence and uniqueness of Nash equilibria, as well as the efficiency loss in not coordinating nodes. Finally, we discuss methods for studying the dynamics of distributed optimization algorithms in continuous time.

  10. Optimal Distributed Excitation of Surface Wave Plasmas

    NASA Astrophysics Data System (ADS)

    Bowers, K. J.; Birdsall, C. K.

    2000-10-01

    Surface wave sustained plasmas are an emerging technology for next generation sources for material processing. There is promise of producing high density, uniform sheath plasmas at low neutral pressures over large target surface areas. Such plasmas are being produced by distributed arrays of slot antennas by numerous groups. However, work remains to obtain the optimal surface wave frequency and wave vector for sustaining a plasma. In this work, the optimal phase shift between slot antennas in a surface wave plasma is being sought using 2d3v PIC-MCC simulation. A long plasma loaded planar metal waveguide with a distributed exciting structure along one wall is modeled in these simulations. Of particular interest is the wave-particle interaction of electrons in the high energy tail of the velocity distribution (responsible for ionization in low pressure discharges) with driven low phase velocity (v << c) surface waves.

  11. Optimal Device Independent Quantum Key Distribution

    PubMed Central

    Kamaruddin, S.; Shaari, J. S.

    2016-01-01

    We consider an optimal quantum key distribution setup based on minimal number of measurement bases with binary yields used by parties against an eavesdropper limited only by the no-signaling principle. We note that in general, the maximal key rate can be achieved by determining the optimal tradeoff between measurements that attain the maximal Bell violation and those that maximise the bit correlation between the parties. We show that higher correlation between shared raw keys at the expense of maximal Bell violation provide for better key rates for low channel disturbance. PMID:27485160

  12. Optimal Device Independent Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Kamaruddin, S.; Shaari, J. S.

    2016-08-01

    We consider an optimal quantum key distribution setup based on minimal number of measurement bases with binary yields used by parties against an eavesdropper limited only by the no-signaling principle. We note that in general, the maximal key rate can be achieved by determining the optimal tradeoff between measurements that attain the maximal Bell violation and those that maximise the bit correlation between the parties. We show that higher correlation between shared raw keys at the expense of maximal Bell violation provide for better key rates for low channel disturbance.

  13. Optimal Device Independent Quantum Key Distribution.

    PubMed

    Kamaruddin, S; Shaari, J S

    2016-01-01

    We consider an optimal quantum key distribution setup based on minimal number of measurement bases with binary yields used by parties against an eavesdropper limited only by the no-signaling principle. We note that in general, the maximal key rate can be achieved by determining the optimal tradeoff between measurements that attain the maximal Bell violation and those that maximise the bit correlation between the parties. We show that higher correlation between shared raw keys at the expense of maximal Bell violation provide for better key rates for low channel disturbance. PMID:27485160

  14. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters, hence it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider

  15. Optimized approach to retrieve information on the tropospheric and stratospheric carbonyl sulfide (OCS) vertical distributions above Jungfraujoch from high-resolution FTIR solar spectra.

    NASA Astrophysics Data System (ADS)

    Lejeune, Bernard; Mahieu, Emmanuel; Servais, Christian; Duchatelet, Pierre; Demoulin, Philippe

    2010-05-01

    Carbonyl sulfide (OCS), which is produced in the troposphere from both biogenic and anthropogenic sources, is the most abundant gaseous sulfur species in the unpolluted atmosphere. Due to its low chemical reactivity and water solubility, a significant fraction of OCS is able to reach the stratosphere, where it is converted to SO2 and ultimately to H2SO4 aerosols (Junge layer). These aerosols have the potential to amplify stratospheric ozone destruction on a global scale and may influence Earth's radiation budget and climate through increased solar scattering. The transport of OCS from the troposphere to the stratosphere is thought to be the primary mechanism by which the Junge layer is sustained during nonvolcanic periods. Because of this, long-term trends in atmospheric OCS concentration, not only in the troposphere but also in the stratosphere, are of great interest. A new approach has been developed and optimized to retrieve the atmospheric abundance of OCS from high-resolution ground-based infrared solar spectra by using the SFIT-2 (v3.91) algorithm, including a new model for solar line simulation (solar lines often produce significant interferences in the OCS microwindows). The strongest lines of the ν3 fundamental band of OCS at 2062 cm⁻¹ have been systematically evaluated with objective criteria to select a new set of microwindows, assuming the HITRAN 2004 spectroscopic parameters with a 15.79% increase, relative to HITRAN 2000, in the OCS line intensities of the main isotopologue (16O12C32S) of the ν3 band (Rothman et al., 2008, and references therein). Two regularization schemes have further been compared (deduced from ATMOS and ACE-FTS measurements or based on a Tikhonov approach), in order to select the one which optimizes the information content while minimizing the error budget. The selected approach has allowed us to determine an updated OCS long-term trend from 1988 to 2009 in both the troposphere and the stratosphere, using spectra recorded on a regular basis with

  16. A flexible approach to distributed data anonymization.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Eckert, Claudia; Kuhn, Klaus A

    2014-08-01

    Sensitive biomedical data is often collected from distributed sources, involving different information systems and different organizational units. Local autonomy and legal reasons lead to the need of privacy preserving integration concepts. In this article, we focus on anonymization, which plays an important role for the re-use of clinical data and for the sharing of research data. We present a flexible solution for anonymizing distributed data in the semi-honest model. Prior to the anonymization procedure, an encrypted global view of the dataset is constructed by means of a secure multi-party computing (SMC) protocol. This global representation can then be anonymized. Our approach is not limited to specific anonymization algorithms but provides pre- and postprocessing for a broad spectrum of algorithms and many privacy criteria. We present an extensive analytical and experimental evaluation and discuss which types of methods and criteria are supported. Our prototype demonstrates the approach by implementing k-anonymity, ℓ-diversity, t-closeness and δ-presence with a globally optimal de-identification method in horizontally and vertically distributed setups. The experiments show that our method provides highly competitive performance and offers a practical and flexible solution for anonymizing distributed biomedical datasets. PMID:24333850
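
    As a small illustration of the kind of privacy criterion the prototype implements, the sketch below checks k-anonymity: every combination of quasi-identifier values must occur at least k times. The records and quasi-identifiers are invented, and this is only the criterion check, not the paper's SMC-based anonymization procedure.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    # group records by their quasi-identifier tuple; each group needs >= k rows
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

data = [
    {"age": "30-40", "zip": "537**", "diagnosis": "A"},
    {"age": "30-40", "zip": "537**", "diagnosis": "B"},
    {"age": "40-50", "zip": "538**", "diagnosis": "A"},
]
print(is_k_anonymous(data, ["age", "zip"], k=2))   # False: one group of size 1
```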

  17. Bayesian approach to global discrete optimization

    SciTech Connect

    Mockus, J.; Mockus, A.; Mockus, L.

    1994-12-31

    We discuss advantages and disadvantages of the Bayesian approach (average case analysis). We present the portable interactive version of software for continuous global optimization. We consider practical multidimensional problems of continuous global optimization, such as optimization of VLSI yield, optimization of composite laminates, and estimation of unknown parameters of bilinear time series. We extend the Bayesian approach to discrete optimization. We regard discrete optimization as a multi-stage decision problem. We assume that there exists some simple heuristic function which roughly predicts the consequences of the decisions. We suppose randomized decisions. We define the probability of a decision by a randomized decision function depending on the heuristics. We fix this function with the exception of some parameters. We repeat the randomized decision several times at the fixed values of those parameters and accept the best decision as the result. We optimize the parameters of the randomized decision function to make the search more efficient. Thus we reduce the discrete optimization problem to the continuous problem of global stochastic optimization. We solve this problem by the Bayesian methods of continuous global optimization. We describe applications to some well-known problems of discrete programming, such as knapsack, traveling salesman, and scheduling.
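
    A minimal sketch of the randomized-heuristic idea: a greedy knapsack whose item choice is softmax-randomized by a single temperature parameter, which is then tuned over a coarse grid standing in for the continuous global optimizer. All data and parameter values are illustrative.

```python
import math
import random

items = [(60, 10), (100, 20), (120, 30), (90, 25), (30, 5)]  # (value, weight)
CAP = 50

def randomized_greedy(t, rng):
    remaining, cap, value = list(items), CAP, 0
    while True:
        feasible = [it for it in remaining if it[1] <= cap]
        if not feasible:
            return value
        # heuristic: value/weight ratio, softmax-randomized by temperature t
        weights = [math.exp((v / w) / t) for v, w in feasible]
        pick = rng.choices(feasible, weights=weights)[0]
        remaining.remove(pick)
        cap -= pick[1]
        value += pick[0]

def tuned_value(t, repeats=50, seed=0):
    # repeat the randomized decision and keep the best result
    rng = random.Random(seed)
    return max(randomized_greedy(t, rng) for _ in range(repeats))

# stand-in for the continuous optimizer: pick the best temperature on a grid
best_t = max([0.2, 0.5, 1.0, 2.0, 5.0], key=tuned_value)
print("best temperature:", best_t, "| value:", tuned_value(best_t))
```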

  18. Numerical approach for unstructured quantum key distribution.

    PubMed

    Coles, Patrick J; Metodiev, Eric M; Lütkenhaus, Norbert

    2016-01-01

    Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study 'unstructured' protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown. PMID:27198739

  19. Numerical approach for unstructured quantum key distribution

    NASA Astrophysics Data System (ADS)

    Coles, Patrick J.; Metodiev, Eric M.; Lütkenhaus, Norbert

    2016-05-01

    Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study `unstructured' protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown.

  20. Numerical approach for unstructured quantum key distribution

    PubMed Central

    Coles, Patrick J.; Metodiev, Eric M.; Lütkenhaus, Norbert

    2016-01-01

    Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study ‘unstructured' protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown. PMID:27198739

  1. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and optimization problems with multiple but often conflicting objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model; the approach combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) algorithm ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC with its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of log-transformed discharge, the water balance index, and the mean absolute error of the log-transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC by taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes with respect to all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded in the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
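
    The Morris screening step can be sketched compactly: elementary effects are estimated by one-at-a-time perturbations along random trajectories, and the mean absolute effect (mu*) ranks parameter sensitivity. The model function below is a toy stand-in for a MOBIDIC objective, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(p):
    # toy objective: strongly sensitive to p[0] and p[1], barely to p[2]
    return p[0] ** 2 + 2 * p[1] + 0.001 * p[2]

k, r, delta = 3, 20, 0.1          # parameters, trajectories, step size
effects = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0, 1 - delta, size=k)
    f_prev = model(x)
    for i in rng.permutation(k):  # one-at-a-time steps in random order
        x[i] += delta
        f_new = model(x)
        effects[t, i] = (f_new - f_prev) / delta
        f_prev = f_new

mu_star = np.abs(effects).mean(axis=0)   # Morris mu*: mean |elementary effect|
print("mu* per parameter:", np.round(mu_star, 3))
```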

  2. Optimal smoothing of site-energy distributions from adsorption isotherms

    SciTech Connect

    Brown, L.F.; Travis, B.J.

    1983-01-01

    The equation for the adsorption isotherm on a heterogeneous surface is a Fredholm integral equation. In solving it for the site-energy distribution (SED), some sort of smoothing must be carried out. The optimal amount of smoothing will give the most information that is possible without introducing nonexistent structure into the SED. Recently, Butler, Reeds, and Dawson proposed a criterion (the BRD criterion) for choosing the optimal smoothing parameter when using regularization to solve Fredholm equations. The BRD criterion is tested for its suitability in obtaining optimal SED's. This criterion is found to be too conservative. While using it never introduces nonexistent structure into the SED, significant information is often lost. At present, no simple criterion for choosing the optimal smoothing parameter exists, and a modeling approach is recommended.
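
    The trade-off that the smoothing parameter controls is easy to see in a discretized toy version of the problem. The Langmuir-type kernel, grids, true SED, and noise level below are all hypothetical, and the BRD criterion itself is not implemented:

      import numpy as np

      rng = np.random.default_rng(0)
      E = np.linspace(0, 10, 60)                  # site energies (arbitrary units)
      lnp = np.linspace(-12, 2, 40)               # log pressures
      dE = E[1] - E[0]
      # Langmuir local isotherm theta = p*b(E)/(1 + p*b(E)), with ln b = E
      K = 1.0 / (1.0 + np.exp(-(lnp[:, None] + E[None, :]))) * dE
      f_true = np.exp(-((E - 5.0) ** 2) / 2.0)    # true site-energy distribution
      g = K @ f_true + 0.001 * rng.standard_normal(lnp.size)

      L = np.diff(np.eye(E.size), 2, axis=0)      # second-difference smoother
      for lam in (1e-6, 1e-2, 1e1):               # under-, mid-, over-smoothed
          A = np.vstack([K, lam * L])
          rhs = np.concatenate([g, np.zeros(L.shape[0])])
          f = np.linalg.lstsq(A, rhs, rcond=None)[0]
          print(f"lam={lam:g}  roughness={np.linalg.norm(L @ f):.3f}  "
                f"fit={np.linalg.norm(K @ f - g):.5f}")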

  3. Optimal design of spatial distribution networks

    NASA Astrophysics Data System (ADS)

    Gastner, Michael T.; Newman, M. E. J.

    2006-07-01

    We consider the problem of constructing facilities such as hospitals, airports, or malls in a country with a nonuniform population density, such that the average distance from a person’s home to the nearest facility is minimized. We review some previous approximate treatments of this problem that indicate that the optimal distribution of facilities should have a density that increases with population density, but does so slower than linearly, as the two-thirds power. We confirm this result numerically for the particular case of the United States with recent population data using two independent methods, one a straightforward regression analysis, the other based on density-dependent map projections. We also consider strategies for linking the facilities to form a spatial network, such as a network of flights between airports, so that the combined cost of maintenance of and travel on the network is minimized. We show specific examples of such optimal networks for the case of the United States.
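
    Applying the two-thirds rule directly is a one-liner once regional population densities are known; the densities and facility budget below are hypothetical:

      import numpy as np

      # Allocate p facilities across regions with facility density
      # proportional to (population density)^(2/3); equal-area regions
      # are assumed for simplicity.
      rho = np.array([50.0, 200.0, 800.0, 3200.0])   # hypothetical densities
      p_total = 100
      weight = rho ** (2.0 / 3.0)
      facilities = p_total * weight / weight.sum()
      # each 4x jump in rho raises the allocation by only 4^(2/3) ~ 2.5x
      print(np.round(facilities, 1))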

  4. Optimal operation of a potable water distribution network.

    PubMed

    Biscos, C; Mulholland, M; Le Lann, M V; Brouckaert, C J; Bailey, R; Roustan, M

    2002-01-01

    This paper presents an approach to the optimal operation of a potable water distribution network. The main control objective defined during the preliminary steps was to maximise the use of low-cost power while maintaining minimum emergency levels in all reservoirs. The combination of dynamic elements (e.g. reservoirs) and discrete elements (pumps, valves, routing) makes this a challenging predictive control and constrained optimisation problem, which is being solved by MINLP (Mixed Integer Non-linear Programming). Initial experimental results show the performance of this algorithm and its ability to control the water distribution process. PMID:12448464
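
    The paper's MINLP is solved with dedicated software, but the structure of the problem can be sketched by brute force on a toy instance: binary hourly pump decisions, a tariff that makes some hours cheap, and reservoir bounds acting as constraints. All numbers are hypothetical:

      import itertools
      import numpy as np

      # Choose hourly on/off pump states to minimize energy cost while the
      # reservoir stays within emergency bounds.
      hours = 8
      tariff = np.array([1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 1.0, 1.0])  # power price
      demand = np.array([2.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 2.0])  # outflow/hour
      pump_rate, level0, lo, hi = 5.0, 10.0, 5.0, 20.0

      best = None
      for plan in itertools.product([0, 1], repeat=hours):
          level, feasible = level0, True
          for u, d in zip(plan, demand):
              level += pump_rate * u - d
              if not lo <= level <= hi:
                  feasible = False
                  break
          if feasible:
              cost = float(np.dot(plan, tariff))
              if best is None or cost < best[0]:
                  best = (cost, plan)
      print(best)   # cheapest feasible schedule pumps in low-tariff hours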

  5. Optimizing Distribution of Pandemic Influenza Antiviral Drugs

    PubMed Central

    Huang, Hsin-Chan; Morton, David P.; Johnson, Gregory P.; Gutfraind, Alexander; Galvani, Alison P.; Clements, Bruce; Meyers, Lauren A.

    2015-01-01

    We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858
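
    The optimization behind such a distribution network is closely related to maximum-coverage facility location, for which a greedy heuristic is the standard sketch; the populations, sites, and coverage sets below are hypothetical:

      # Greedy maximum coverage: repeatedly open the pharmacy that adds the
      # most not-yet-covered people within travel distance.
      pop = {"z1": 900, "z2": 400, "z3": 1500, "z4": 250, "z5": 700}
      covers = {                      # ZIP areas reachable from each site
          "chainA": {"z1", "z3"},
          "chainB": {"z3", "z5"},
          "indep1": {"z2", "z4"},
          "indep2": {"z4", "z5"},
      }
      budget, covered, chosen = 2, set(), []
      for _ in range(budget):
          site = max((s for s in covers if s not in chosen),
                     key=lambda s: sum(pop[z] for z in covers[s] - covered))
          chosen.append(site)
          covered |= covers[site]
      print(chosen, sum(pop[z] for z in covered) / sum(pop.values()))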

  6. Optimizing the Distribution of Leg Muscles for Vertical Jumping

    PubMed Central

    Wong, Jeremy D.; Bobbert, Maarten F.; van Soest, Arthur J.; Gribble, Paul L.; Kistemaker, Dinant A.

    2016-01-01

    A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles, which well approximates skilled human performance. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas, which determine the maximum force deliverable by the muscles, constitutes the majority of the improvement in jump height. The optimal distribution results in large vastus, gastrocnemius and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to the reference model. Work output is increased by removing muscle from rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus accounts for a disproportionate amount of muscle volume, and jump height is improved by moving it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology. Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal

  7. Optimizing the Distribution of Leg Muscles for Vertical Jumping.

    PubMed

    Wong, Jeremy D; Bobbert, Maarten F; van Soest, Arthur J; Gribble, Paul L; Kistemaker, Dinant A

    2016-01-01

    A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles, which well approximates skilled human performance. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas, which determine the maximum force deliverable by the muscles, constitutes the majority of the improvement in jump height. The optimal distribution results in large vastus, gastrocnemius and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to the reference model. Work output is increased by removing muscle from rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus accounts for a disproportionate amount of muscle volume, and jump height is improved by moving it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology. Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal

  8. Quantum Resonance Approach to Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is that the computing time is independent of the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  9. Autoadaptivity and optimization in distributed ECG interpretation.

    PubMed

    Augustyniak, Piotr

    2010-03-01

    This paper addresses principal issues of ECG interpretation adaptivity in a distributed surveillance network. In the age of pervasive access to wireless digital communication, distributed biosignal interpretation networks may not only optimally solve difficult medical cases, but also adapt the data acquisition, interpretation, and transmission to the patient's variable status and the availability of technical resources. The background of such adaptivity is the innovative use of results from the automatic ECG analysis for seamless remote modification of the interpreting software. Since the medical relevance of issued diagnostic data depends on the patient's status, the interpretation adaptivity implies flexibility of report content and frequency. Proposed solutions are based on research on human experts' behavior, procedure reliability, and usage statistics. Despite the limited scale of our prototype client-server application, the tests yielded very promising results: the transmission channel occupation was reduced by 2.6 to 5.6 times compared with the rigid reporting mode, and the remotely computed diagnostic outcome improved in over 80% of software adaptation attempts. PMID:20064764

  10. Steam distribution and energy delivery optimization using wireless sensors

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja; Sukumar, Sreenivas R; Djouadi, Seddik M; Lake, Joe E

    2011-01-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time measurement-based energy efficiency optimization framework on the ORNL campus. With particular focus on the 12-mile-long steam distribution network in our campus, we propose an integrated system-level approach to optimize the energy delivery within the steam distribution system. We address the goal of achieving significant energy savings in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages integrated wireless sensing and real-time monitoring capabilities. We assess the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observing the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We are able to present the sensor readings, steam flow, steam trap status and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform that extends implementation to include newer modalities to manage water flow, sewage and energy consumption.

  11. Steam distribution and energy delivery optimization using wireless sensors

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Allgood, Glenn O.; Kuruganti, Teja P.; Sukumar, Sreenivas R.; Djouadi, Seddik M.; Lake, Joe E.

    2011-05-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time measurement-based energy efficiency optimization framework on the ORNL campus. With particular focus on the 12-mile-long steam distribution network in our campus, we propose an integrated system-level approach to optimize the energy delivery within the steam distribution system. We address the goal of achieving significant energy savings in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages integrated wireless sensing and real-time monitoring capabilities. We assess the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observing the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We are able to present the sensor readings, steam flow, steam trap status and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform that extends implementation to include newer modalities to manage water flow, sewage and energy consumption.
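
    A minimal sketch of the Fourier-spectrum idea: compare the fraction of spectral energy above a cutoff for a "healthy" tonal signal versus one with added broadband noise standing in for a leaking trap. The signals, sampling rate, cutoff, and alert threshold are all hypothetical:

      import numpy as np

      fs = 20_000
      t = np.arange(0, 1, 1 / fs)
      rng = np.random.default_rng(1)
      healthy = np.sin(2 * np.pi * 120 * t) + 0.05 * rng.standard_normal(t.size)
      leaking = healthy + 0.5 * rng.standard_normal(t.size)  # broadband hiss

      def high_band_fraction(x, fs, f_cut=2_000):
          spec = np.abs(np.fft.rfft(x)) ** 2
          freqs = np.fft.rfftfreq(x.size, 1 / fs)
          return spec[freqs > f_cut].sum() / spec.sum()

      for name, sig in [("healthy", healthy), ("leaking", leaking)]:
          frac = high_band_fraction(sig, fs)
          print(name, round(frac, 3), "ALERT" if frac > 0.2 else "ok")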

  12. Reliability analysis and optimization in the design of distributed systems

    SciTech Connect

    Hariri, S.

    1986-01-01

    Reliability measures and efficient evaluation algorithms are presented to aid in designing reliable distributed systems. The terminal reliability between a pair of computers is a good measure in computer networks. For distributed systems, to capture more effectively the redundancy in resources, such as programs and files, two new reliability measures are introduced. These measures are Distributed Program Reliability (DPR) and Distributed System Reliability (DSR). A simple and efficient algorithm, SYREL, is developed to evaluate the reliability between two computing centers. This algorithm incorporates conditional probability, set theory, and Boolean algebra in a distinct approach to achieve fast execution times and obtain compact expressions. An elegant and unified approach based on graph-theoretic techniques is used in developing algorithms to evaluate the DPR and DSR measures. It performs a breadth-first search on the graph representing a given distributed system to enumerate all the subgraphs that guarantee the proper accessibility for executing the given task(s). These subgraphs are then used to evaluate the desired reliabilities. Several optimization algorithms are developed for designing reliable systems under a cost constraint.
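
    SYREL and the graph-theoretic algorithms avoid brute force, but the quantity they compute can be defined by exhaustive enumeration, which is practical only for tiny networks. A sketch for two-terminal reliability on a five-link bridge network with hypothetical link reliabilities:

      import itertools

      # Probability that s and t remain connected when each link works
      # independently with the given probability.
      links = {("s", "a"): 0.9, ("a", "t"): 0.9, ("s", "b"): 0.8,
               ("b", "t"): 0.8, ("a", "b"): 0.95}

      def connected(up_edges, s="s", t="t"):
          seen, stack = {s}, [s]
          while stack:
              u = stack.pop()
              for x, y in up_edges:
                  if u in (x, y):
                      v = y if u == x else x
                      if v not in seen:
                          seen.add(v)
                          stack.append(v)
          return t in seen

      edges = list(links)
      rel = 0.0
      for states in itertools.product([0, 1], repeat=len(edges)):
          prob, up = 1.0, []
          for e, st in zip(edges, states):
              prob *= links[e] if st else 1 - links[e]
              if st:
                  up.append(e)
          if connected(up):
              rel += prob
      print(round(rel, 4))   # terminal reliability of the bridge network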

  13. Multidisciplinary Approach to Linear Aerospike Nozzle Optimization

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess the performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against sequentially optimized aerodynamic and structural designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.
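
    A minimal sketch of the MDF idea: the optimizer sees a single smooth objective, and every function evaluation runs a fixed-point multidisciplinary analysis that reconciles the coupling variables between two toy "disciplines". The equations are hypothetical, not the aerospike model:

      from scipy.optimize import minimize

      def mda(x, tol=1e-10):
          # Fixed-point iteration between an "aerodynamics" output a and a
          # "structures" output s until the coupling converges.
          a, s = 1.0, 1.0
          for _ in range(100):
              a_new = x ** 2 + 0.3 * s
              s_new = 0.5 * a_new + 0.1
              if abs(a_new - a) + abs(s_new - s) < tol:
                  break
              a, s = a_new, s_new
          return a, s

      def objective(xv):
          a, s = mda(xv[0])
          return (a - 2.0) ** 2 + 0.5 * s   # thrust-vs-weight style tradeoff

      res = minimize(objective, x0=[1.0], bounds=[(0, 3)])
      print(res.x, res.fun)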

  14. Cancer Behavior: An Optimal Control Approach

    PubMed Central

    Gutiérrez, Pedro J.; Russo, Irma H.; Russo, J.

    2009-01-01

    With special attention to cancer, this essay explains how Optimal Control Theory, mainly used in Economics, can be applied to the analysis of biological behaviors, and illustrates the ability of this mathematical branch to describe biological phenomena and biological interrelationships. Two examples are provided to show the capability and versatility of this powerful mathematical approach in the study of biological questions. The first describes a process of organogenesis, and the second the development of tumors. PMID:22247736

  15. A Bayesian approach to optimizing cryopreservation protocols

    PubMed Central

    2015-01-01

    Cryopreservation is beset with the challenge of protocol alignment across a wide range of cell types and process variables. By taking a cross-sectional assessment of previously published cryopreservation data (sample means and standard errors) as preliminary metadata, a decision tree learning analysis (DTLA) was performed to develop an understanding of target survival using optimized pruning methods based on different approaches. Briefly, a clear decision process for method selection was developed, with the key choices being cooling rate and plunge temperature on the one hand, and biomaterial choice, use of composites (sugars and proteins as additional constituents), loading procedure and cell location in 3D scaffolding on the other. Secondly, using machine learning and generalized approaches via the Naïve Bayes Classification (NBC) method, these metadata were used to develop posterior probabilities for combinatorial approaches that were implicitly recorded in the metadata. These latter results showed that newer protocol choices developed using probability elicitation techniques can unearth improved protocols consistent with multiple unidimensionally optimized physical protocols. In conclusion, this article proposes the use of DTLA models and subsequently NBC for the improvement of modern cryopreservation techniques through an integrative approach. PMID:26131379
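
    Assuming scikit-learn is available, the two modeling steps can be sketched on hypothetical protocol metadata (cooling rate, plunge temperature, sugar additive) with a binary survival label; the data and feature choices below are invented for illustration:

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      # Rows: published protocols.  Columns: cooling rate (C/min), plunge
      # temperature (C), sugar additive (0/1).  Label: survival above target.
      X = np.array([[1, -40, 1], [1, -80, 1], [10, -40, 0], [10, -80, 0],
                    [5, -60, 1], [20, -80, 0], [2, -50, 1], [15, -40, 0.0]])
      y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

      tree = DecisionTreeClassifier(max_depth=2).fit(X, y)  # decision process
      nb = GaussianNB().fit(X, y)                           # posterior model

      candidate = np.array([[3, -60, 1.0]])                 # proposed protocol
      print(tree.predict(candidate), nb.predict_proba(candidate))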

  16. Optimization approaches for planning external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to the American Cancer Society, about half of cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA, making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria scoring-based beam selection algorithm is part of the planning system and produces better plans than those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach. Solution times range from 2 minutes to 15 minutes, which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can

  17. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem, which is feasible if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique exploits a crucial geometric property of the fractional solutions of this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.
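
    A runnable sketch of the LP view of stable matching, using the standard stability inequalities (for every pair, either they are matched or one of them is matched to a partner they prefer) and an egalitarian rank-sum objective. The preference lists are hypothetical, and for complete lists the LP optimum is integral:

      import numpy as np
      from scipy.optimize import linprog

      # men_pref[i] / women_pref[j]: partners in decreasing preference.
      men_pref = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
      women_pref = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
      n = len(men_pref)
      rank_m = np.empty((n, n), int)   # rank_m[i, j]: man i's rank of woman j
      rank_w = np.empty((n, n), int)   # rank_w[j, i]: woman j's rank of man i
      for i in range(n):
          for pos, j in enumerate(men_pref[i]):
              rank_m[i, j] = pos
          for pos, j in enumerate(women_pref[i]):
              rank_w[i, j] = pos

      def var(i, j):
          return i * n + j

      A_eq = np.zeros((2 * n, n * n)); b_eq = np.ones(2 * n)
      for i in range(n):
          A_eq[i, [var(i, j) for j in range(n)]] = 1      # man i matched once
          A_eq[n + i, [var(j, i) for j in range(n)]] = 1  # woman i matched once

      A_ub, b_ub = [], []
      for i in range(n):
          for j in range(n):   # stability constraint for pair (i, j), >= 1
              row = np.zeros(n * n)
              row[var(i, j)] = 1
              for jp in range(n):
                  if rank_m[i, jp] < rank_m[i, j]:
                      row[var(i, jp)] = 1
              for ip in range(n):
                  if rank_w[j, ip] < rank_w[j, i]:
                      row[var(ip, j)] = 1
              A_ub.append(-row); b_ub.append(-1.0)

      c = (rank_m + rank_w.T).ravel()                     # egalitarian cost
      res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                    A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
      print(res.x.reshape(n, n).round(2))   # 0/1 matrix: a stable matching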

  18. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) to retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find the suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based optimal data storage (ODS) strategy is more effective than the earlier approaches. PMID:25734182

  19. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) to retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find the suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based optimal data storage (ODS) strategy is more effective than the earlier approaches. PMID:25734182
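
    The hybrid method layers clustering on top of PSO; the PSO core itself is compact. A minimal sketch that searches for a storage-node position minimizing a toy rate-weighted squared-distance energy proxy (node coordinates, rates, and PSO coefficients are hypothetical):

      import numpy as np

      rng = np.random.default_rng(4)
      nodes = np.array([[0, 0], [4, 0], [0, 4], [5, 5.0]])  # producers/consumers
      rates = np.array([1.0, 2.0, 1.0, 3.0])                # data rates

      def energy(pos):
          return np.sum(rates * np.sum((nodes - pos) ** 2, axis=1))

      n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
      x = rng.uniform(0, 5, (n_particles, 2))
      v = np.zeros_like(x)
      pbest = x.copy()
      pval = np.apply_along_axis(energy, 1, x)
      gbest = pbest[pval.argmin()]
      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, 1))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = x + v
          val = np.apply_along_axis(energy, 1, x)
          better = val < pval
          pbest[better], pval[better] = x[better], val[better]
          gbest = pbest[pval.argmin()]
      print("storage position ~", np.round(gbest, 2))  # rate-weighted centroid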

  20. Optimized Dose Distribution of Gammamed Plus Vaginal Cylinders

    SciTech Connect

    Supe, Sanjay S.; Bijina, T.K.; Varatharaj, C.; Shwetha, B.; Arunkumar, T.; Sathiyan, S.; Ganesh, K.M.; Ravikumar, M.

    2009-04-01

    Endometrial carcinoma is the most common malignancy arising in the female genital tract. Intracavitary vaginal cuff irradiation may be given alone or with external beam irradiation in patients determined to be at risk for locoregional recurrence. Vaginal cylinders are often used to deliver a brachytherapy dose to the vaginal apex and upper vagina or the entire vaginal surface in the management of postoperative endometrial cancer or cervical cancer. The dose distributions of HDR vaginal cylinders must be evaluated carefully, so that clinical experience with LDR techniques can be used in guiding optimal use of HDR techniques. The aim of this study was to optimize the dose distribution for Gammamed plus vaginal cylinders. Placement of dose optimization points was evaluated for its effect on optimized dose distributions. Two different dose optimization point models were used in this study, namely non-apex (dose optimization points only on the periphery of the cylinder) and apex (dose optimization points on the periphery and along the curvature, including the apex points). Thirteen dwell positions were used for the HDR dosimetry to obtain a 6-cm active length; thus, 13 optimization points were available at the periphery of the cylinder. The coordinates of the points along the curvature depended on the cylinder diameter and were chosen for each cylinder so that four points were distributed evenly in the curvature portion of the cylinder. The diameter of the vaginal cylinders varied from 2.0 to 4.0 cm. An iterative optimization routine was used for all optimizations. The effects of various optimization routines (iterative, geometric, equal times) were studied for the 3.0-cm diameter vaginal cylinder. The effect of source travel step size on the optimized dose distributions for vaginal cylinders was also evaluated. All optimizations in this study were carried out for a dose of 6 Gy at the dose optimization points. For both non-apex and apex models of vaginal cylinders, doses for apex point and three dome
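
    The dwell-time optimization at the heart of such planning can be sketched as a nonnegative least-squares problem: choose dwell times so a simple dose kernel reproduces the 6 Gy prescription at the optimization points. The bare inverse-square kernel below ignores anisotropy and scatter, and all geometry and strength values are hypothetical:

      import numpy as np
      from scipy.optimize import nnls

      dwell_z = np.linspace(0.0, 6.0, 13)    # 13 dwell positions, 6 cm length
      pts_z = np.linspace(0.0, 6.0, 13)      # optimization points along surface
      radius = 1.5                           # 3.0-cm diameter cylinder surface
      strength = 10.0                        # hypothetical dose-rate constant

      # Dose per unit dwell time from position j at point i (inverse square).
      r2 = (pts_z[:, None] - dwell_z[None, :]) ** 2 + radius ** 2
      A = strength / r2
      prescription = np.full(pts_z.size, 6.0)

      t, _ = nnls(A, prescription)           # nonnegative dwell times
      print(np.round(t, 3),
            "max dose error:", np.abs(A @ t - prescription).max())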

  1. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed to execute on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features were effective use of redundancy in hardware and flexible software, enabling optimization to proceed with minimal delay and decreasing overall time to completion.
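
    The parallelism exploited here, independent perturbed analyses for each gradient component, is easy to sketch with a process pool standing in for the workstation network; the objective is a toy stand-in for the structural analysis:

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def analysis(x):
          # Hypothetical stand-in for the structural analysis code.
          return (x[0] - 1) ** 2 + 10 * (x[1] - x[0] ** 2) ** 2

      def fd_component(args):
          # One forward-difference component; each worker re-evaluates the
          # base point, trading a little redundancy for independence.
          x, i, h = args
          xp = x.copy(); xp[i] += h
          return (analysis(xp) - analysis(x)) / h

      if __name__ == "__main__":
          x = np.array([0.5, 0.5])
          with ProcessPoolExecutor() as pool:
              grad = list(pool.map(fd_component,
                                   [(x, i, 1e-6) for i in range(x.size)]))
          print(grad)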

  2. A Simulation Optimization Approach to Epidemic Forecasting.

    PubMed

    Nsoesie, Elaine O; Beckman, Richard J; Shashaani, Sara; Nagaraj, Kalyani S; Marathe, Madhav V

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem; however, this is a preliminary step, and the results suggest that more can be achieved in this area. PMID:23826222
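
    The Nelder-Mead half of SIMOP can be sketched by fitting a cheap logistic surrogate to hypothetical early weekly counts and reading the predicted peak off the fitted inflection point (the paper fits an individual-based simulation instead):

      import numpy as np
      from scipy.optimize import minimize

      weeks = np.arange(1, 5)
      observed = np.array([12, 30, 70, 150.0])   # hypothetical weekly cases

      def cumulative(params, t):
          # Logistic cumulative-case curve: K final size, r growth, t0 peak.
          K, r, t0 = params
          return K / (1 + np.exp(-r * (t - t0)))

      def sse(params):
          return np.sum((cumulative(params, weeks) - np.cumsum(observed)) ** 2)

      fit = minimize(sse, x0=[1000, 0.8, 6], method="Nelder-Mead")
      K, r, t0 = fit.x
      print("predicted peak week (inflection):", round(t0, 1))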

  3. Optimization approaches to nonlinear model predictive control

    SciTech Connect

    Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it has become useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
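
    A minimal receding-horizon sketch of the basic algorithm: at each step solve an NLP over a short horizon, apply the first control, and repeat. The scalar plant, horizon, and weights are illustrative:

      import numpy as np
      from scipy.optimize import minimize

      def step(x, u):
          return 0.9 * x + 0.2 * x ** 2 + u    # toy nonlinear plant model

      def horizon_cost(u_seq, x0, x_ref=1.0):
          x, cost = x0, 0.0
          for u in u_seq:
              x = step(x, u)
              cost += (x - x_ref) ** 2 + 0.1 * u ** 2
          return cost

      H, x = 5, 0.0
      for k in range(15):
          res = minimize(horizon_cost, np.zeros(H), args=(x,),
                         bounds=[(-1, 1)] * H, method="SLSQP")
          u0 = res.x[0]                        # apply only the first control
          x = step(x, u0)                      # plant advances one step
      print("final state:", round(x, 3))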

  4. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  5. Optimality of collective choices: a stochastic approach.

    PubMed

    Nicolis, S C; Detrain, C; Demolin, D; Deneubourg, J L

    2003-09-01

    Amplifying communication is a characteristic of group-living animals. This study is concerned with food recruitment by chemical means, known to be associated with foraging in most ant colonies but also with defence or nest moving. A stochastic approach to the collective choices made by ants faced with different sources is developed to account for the fluctuations inherent in the recruitment process. It has been established that ants are able to optimize their foraging by selecting the most rewarding source. Our results not only confirm that selection is the result of trail modulation according to food quality but also show the existence of an optimal quantity of laid pheromone for which the selection of a source is maximal, whatever the difference between the two sources might be. In terms of colony size, large colonies more easily focus their activity on one source. Moreover, the selection of the rich source is more efficient if many individuals lay small quantities of pheromone, instead of a small group of individuals laying a higher trail amount. These properties due to the stochasticity of the recruitment process can be extended to other social phenomena in which competition between different sources of information occurs. PMID:12909251

  6. Optimal Statistical Approach to Optoacoustic Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Zhulina, Yulia V.

    2000-11-01

    An optimal statistical approach is applied to the task of image reconstruction in photoacoustics. The physical essence of the task is as follows: pulse laser irradiation induces an ultrasound wave on the inhomogeneities inside the investigated volume. This acoustic wave is received by a set of receivers outside this volume. It is necessary to reconstruct a spatial image of these inhomogeneities. Mathematical techniques developed in radio-location theory are used to solve the task. A maximum-likelihood algorithm is synthesized for the image reconstruction. The obtained algorithm is investigated by digital modeling. The number of receivers and their disposition in space are arbitrary. Results of the synthesis are applied to noninvasive medical diagnostics (breast cancer). The capability of the algorithm is tested on real signals. The image is built using signals obtained in vitro. The essence of the algorithm includes (i) summing all signals in the image plane, transforming from the time coordinates of the signals to the spatial coordinates of the image, and (ii) optimal spatial filtration of this sum. The results are shown in the figures.
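
    Step (i), summing signals with a time-to-space transform, is the classic delay-and-sum operation. A self-contained sketch with a synthetic point absorber and a hypothetical array geometry (step (ii), the spatial filtering, is omitted):

      import numpy as np

      c, fs = 1500.0, 20e6                       # sound speed (m/s), sample rate
      receivers = np.array([[x, 0.05] for x in np.linspace(-0.02, 0.02, 16)])
      src = np.array([0.0, 0.02])                # true absorber position

      n = 4096
      traces = np.zeros((len(receivers), n))
      for k, rcv in enumerate(receivers):        # synthetic bipolar pulse
          delay = int(np.linalg.norm(rcv - src) / c * fs)
          traces[k, delay:delay + 2] = [1.0, -1.0]

      xs = np.linspace(-0.01, 0.01, 41)
      zs = np.linspace(0.01, 0.03, 41)
      image = np.zeros((zs.size, xs.size))
      for i, z in enumerate(zs):                 # delay-and-sum back-projection
          for j, x in enumerate(xs):
              for k, rcv in enumerate(receivers):
                  s = int(np.linalg.norm(rcv - [x, z]) / c * fs)
                  if s < n:
                      image[i, j] += traces[k, s]
      peak = np.unravel_index(image.argmax(), image.shape)
      print("recovered position:", xs[peak[1]], zs[peak[0]])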

  7. Optimal distributions for multiplex logistic networks.

    PubMed

    Solá Conde, Luis E; Used, Javier; Romance, Miguel

    2016-06-01

    This paper presents some mathematical models for distribution of goods in logistic networks based on spectral analysis of complex networks. Given a steady distribution of a finished product, some numerical algorithms are presented for computing the weights in a multiplex logistic network that reach the equilibrium dynamics with high convergence rate. As an application, the logistic networks of Germany and Spain are analyzed in terms of their convergence rates. PMID:27368801

  8. Optimal distributions for multiplex logistic networks

    NASA Astrophysics Data System (ADS)

    Solá Conde, Luis E.; Used, Javier; Romance, Miguel

    2016-06-01

    This paper presents some mathematical models for distribution of goods in logistic networks based on spectral analysis of complex networks. Given a steady distribution of a finished product, some numerical algorithms are presented for computing the weights in a multiplex logistic network that reach the equilibrium dynamics with high convergence rate. As an application, the logistic networks of Germany and Spain are analyzed in terms of their convergence rates.

  9. Optimality of nitrogen distribution among leaves in plant canopies.

    PubMed

    Hikosaka, Kouki

    2016-05-01

    The vertical gradient of the leaf nitrogen content in a plant canopy is one of the determinants of vegetation productivity. The ecological significance of the nitrogen distribution in plant canopies has been discussed in relation to its optimality; nitrogen distribution in actual plant canopies is close to but always less steep than the optimal distribution that maximizes canopy photosynthesis. In this paper, I review the optimality of nitrogen distribution within canopies focusing on recent advancements. Although the optimal nitrogen distribution has been believed to be proportional to the light gradient in the canopy, this rule holds only when diffuse light is considered; the optimal distribution is steeper when the direct light is considered. A recent meta-analysis has shown that the nitrogen gradient is similar between herbaceous and tree canopies when it is expressed as the function of the light gradient. Various hypotheses have been proposed to explain why nitrogen distribution is suboptimal. However, hypotheses explain patterns observed in some specific stands but not in others; there seems to be no general hypothesis that can explain the nitrogen distributions under different conditions. Therefore, how the nitrogen distribution in canopies is determined remains open for future studies; its understanding should contribute to the correct prediction and improvement of plant productivity under changing environments. PMID:27059755

  10. Inversion of generalized relaxation time distributions with optimized damping parameter

    NASA Astrophysics Data System (ADS)

    Florsch, Nicolas; Revil, André; Camerlynck, Christian

    2014-10-01

    Retrieving the Relaxation Time Distribution (RTD), the Grain Size Distribution (GSD) or the Pore Size Distribution (PSD) from low-frequency impedance spectra is a major goal in geophysics. The “Generalized RTD” generalizes parametric models like Cole-Cole and many others, but remains tricky to invert since this inverse problem is ill-posed. We propose to use generalized relaxation basis functions (for instance, decomposing the spectra on a basis of generalized Cole-Cole relaxation elements instead of the classical Debye basis) and to use the L-curve approach to optimize the damping parameter required to obtain smooth and realistic inverse solutions. We apply our algorithm to three examples, one synthetic and two real data sets, and the program includes the possibility of converting the RTD into a GSD or PSD by choosing the value of the constant connecting the relaxation time to the characteristic polarization size of interest. At high frequencies (typically above 1 kHz), a dielectric term is taken into account in the model. The code is provided as open Matlab source in a supplementary file associated with this paper.
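
    The L-curve selection itself can be sketched on a toy ill-conditioned system: sweep the damping parameter, plot log residual norm against log solution seminorm, and take the point of maximum curvature. Corner detection is a heuristic and can be fragile; the system below is invented:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 40
      A = np.array([[np.exp(-abs(i - j) / 3) for j in range(n)]
                    for i in range(n)])              # smoothing-type kernel
      x_true = np.exp(-((np.arange(n) - 20.0) ** 2) / 20)
      b = A @ x_true + 0.01 * rng.standard_normal(n)
      L = np.diff(np.eye(n), 2, axis=0)              # second-difference operator

      lams = np.logspace(-6, 2, 60)
      res, sem = [], []
      for lam in lams:
          x = np.linalg.lstsq(np.vstack([A, lam * L]),
                              np.concatenate([b, np.zeros(n - 2)]),
                              rcond=None)[0]
          res.append(np.log(np.linalg.norm(A @ x - b)))
          sem.append(np.log(np.linalg.norm(L @ x)))
      res, sem = np.array(res), np.array(sem)
      # curvature of the parametric (res, sem) curve; its peak marks the corner
      d1r, d1s = np.gradient(res), np.gradient(sem)
      d2r, d2s = np.gradient(d1r), np.gradient(d1s)
      kappa = (d1r * d2s - d1s * d2r) / (d1r ** 2 + d1s ** 2) ** 1.5
      print("lambda at L-curve corner:", lams[np.nanargmax(np.abs(kappa))])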

  11. Inspection-Repair based Availability Optimization of Distribution Systems using Teaching Learning based Optimization

    NASA Astrophysics Data System (ADS)

    Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.

    2015-03-01

    This paper describes a technique for optimizing the inspection- and repair-based availability of distribution systems. The optimum duration between two inspections has been obtained for each feeder section with respect to a cost function and subject to satisfaction of availability at each load point. Teaching-learning-based optimization has been used for availability optimization. The developed algorithm has been implemented on radial and meshed distribution systems. The results obtained have been compared with those obtained with differential evolution.
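
    For reference, the two phases of teaching-learning-based optimization are compact enough to sketch on a test function; population size, iteration count, and bounds are arbitrary choices:

      import numpy as np

      rng = np.random.default_rng(5)
      n, dim, iters = 15, 4, 200
      lo, hi = -10.0, 10.0
      f = lambda x: np.sum(x ** 2, axis=-1)     # test objective (sphere)

      pop = rng.uniform(lo, hi, (n, dim))
      for _ in range(iters):
          # Teacher phase: pull learners toward the best, away from the mean.
          teacher = pop[f(pop).argmin()]
          tf = rng.integers(1, 3)               # teaching factor, 1 or 2
          cand = pop + rng.random((n, dim)) * (teacher - tf * pop.mean(axis=0))
          cand = np.clip(cand, lo, hi)
          better = f(cand) < f(pop)
          pop[better] = cand[better]
          # Learner phase: each learner moves relative to a random peer.
          for i in range(n):
              j = rng.integers(n)
              if j == i:
                  continue
              d = pop[j] - pop[i] if f(pop[j]) < f(pop[i]) else pop[i] - pop[j]
              cand_i = np.clip(pop[i] + rng.random(dim) * d, lo, hi)
              if f(cand_i) < f(pop[i]):
                  pop[i] = cand_i
      print("best value:", f(pop).min())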

  12. Parallel Harmony Search Based Distributed Energy Resource Optimization

    SciTech Connect

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile over a day as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.
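
    The serial harmony-search kernel that the paper parallelizes can be sketched in a few lines on a test function; the HMCR, PAR, and bandwidth values are typical defaults, not the paper's settings:

      import numpy as np

      rng = np.random.default_rng(3)
      dim, hms, hmcr, par, bw, iters = 5, 10, 0.9, 0.3, 0.05, 2000
      lo, hi = -5.0, 5.0
      f = lambda x: np.sum(x ** 2)              # test objective (sphere)

      hm = rng.uniform(lo, hi, (hms, dim))      # harmony memory
      fit = np.apply_along_axis(f, 1, hm)
      for _ in range(iters):
          new = np.empty(dim)
          for d in range(dim):
              if rng.random() < hmcr:           # memory consideration...
                  new[d] = hm[rng.integers(hms), d]
                  if rng.random() < par:        # ...with pitch adjustment
                      new[d] += bw * rng.uniform(-1, 1)
              else:                             # ...or random selection
                  new[d] = rng.uniform(lo, hi)
          fn = f(new)
          worst = fit.argmax()
          if fn < fit[worst]:                   # replace the worst harmony
              hm[worst], fit[worst] = new, fn
      print("best value:", fit.min())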

  13. Optimal control of vaccine distribution in a rabies metapopulation model.

    PubMed

    Asano, Erika; Gross, Louis J; Lenhart, Suzanne; Real, Leslie A

    2008-04-01

    We consider an SIR metapopulation model for the spread of rabies in raccoons. This system of ordinary differential equations considers subpopulations connected by movement. Vaccine for raccoons is distributed through food baits. We apply optimal control theory to find the best timing for distribution of vaccine in each of the linked subpopulations across the landscape. This strategy is chosen to limit the disease optimally by making the number of infections as small as possible while accounting for the cost of vaccination. PMID:18613731

  14. Distributed optimization of resource allocation for search and track assignment with multifunction radars

    NASA Astrophysics Data System (ADS)

    Severson, Tracie Andrusiak

    The long-term goal of this research is to contribute to the design of a conceptual architecture and framework for the distributed coordination of multifunction radar systems. The specific research objective of this dissertation is to apply results from graph theory, probabilistic optimization, and consensus control to the problem of distributed optimization of resource allocation for multifunction radars coordinating on their search and track assignments. For multiple radars communicating on a radar network, cooperation and agreement on a network resource management strategy increases the group's collective search and track capability as compared to non-cooperative radars. Existing resource management approaches for a single multifunction radar optimize the radar's configuration by modifying the radar waveform and beam-pattern. Also, multi-radar approaches implement a top-down, centralized sensor management framework that relies on fused sensor data, which may be impractical due to bandwidth constraints. This dissertation presents a distributed radar resource optimization approach for a network of multifunction radars. Linear and nonlinear models estimate the resource allocation for multifunction radar search and track functions. Interactions between radars occur over time-invariant balanced graphs that may be directed or undirected. The collective search area and target-assignment solution for coordinated radars is optimized by balancing resource usage across the radar network and minimizing total resource usage. Agreement on the global optimal target-assignment solution is ensured using a distributed binary consensus algorithm. Monte Carlo simulations validate the coordinated approach over uncoordinated alternatives.
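
    The agreement step can be illustrated with the simplest consensus primitive: repeated neighbor averaging with Metropolis weights on a fixed undirected graph, which drives every node to the network-wide mean (the dissertation uses a binary consensus variant). The graph and local values are hypothetical:

      import numpy as np

      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]  # radar network graph
      n = 4
      deg = np.zeros(n)
      for i, j in edges:
          deg[i] += 1; deg[j] += 1
      W = np.zeros((n, n))                   # Metropolis weight matrix
      for i, j in edges:
          W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
      np.fill_diagonal(W, 1 - W.sum(axis=1))

      x = np.array([10.0, 2.0, 7.0, 5.0])    # each radar's local resource usage
      for _ in range(50):
          x = W @ x                          # one round of neighbor exchanges
      print(x, "true mean:", 6.0)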

  15. Retrieval of particle size distribution from aerosol optical thickness using an improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Mao, Jiandong; Li, Jinxuan

    2015-10-01

    Particle size distribution is essential for describing the direct and indirect radiative effects of aerosols. Because the relationship between the aerosol size distribution and the aerosol optical thickness (AOT) is an ill-posed Fredholm integral equation of the first kind, the traditional techniques for determining such size distributions, such as the Phillips-Twomey regularization method, are often ambiguous. Here, we use an approach based on an improved particle swarm optimization algorithm (IPSO) to retrieve the aerosol size distribution. Using AOT data measured by a CE318 sun photometer in Yinchuan, we compared the aerosol size distributions retrieved using a simple genetic algorithm, a basic particle swarm optimization algorithm and the IPSO. Aerosol size distributions for different weather conditions were analyzed, including sunny, dusty and hazy conditions. Our results show that the IPSO-based inversion method retrieved aerosol size distributions under all weather conditions, showing great potential for similar size distribution inversions.

  16. Energy optimization of water distribution system

    SciTech Connect

    Not Available

    1993-02-01

    In order to analyze pump operating scenarios for the system with the computer model, information on existing pumping equipment and the distribution system was collected. The information includes the following: component descriptions and design criteria for line booster stations, booster stations with reservoirs, and high lift pumps at the water treatment plants; daily operations data for 1988; annual reports from fiscal year 1987/1988 to fiscal year 1991/1992; and a 1985 calibrated KYPIPE computer model of DWSD's water distribution system, which included input data for the maximum hour and average day demands on the system for that year. This information has been used to produce the inventory database of the system and will be used to develop the computer program to analyze the system.

  17. Optimal calibration method for water distribution water quality model.

    PubMed

    Wu, Zheng Yi

    2006-01-01

    The purpose of a water quality model is to predict water quality transport and fate throughout a water distribution system. The model is not only a promising alternative for analyzing disinfectant residuals in a cost-effective manner, but also a means of providing enormous engineering insight into the characteristics of water quality variation and constituent reactions. However, a water quality model is a reliable tool only if it predicts how a real system behaves. This paper presents a methodology that enables a modeler to efficiently calibrate a water quality model such that the field-observed water quality values match the model-simulated values. The method is formulated to adjust the global water quality parameters and also the element-dependent water quality reaction rates for pipelines and tank storages. A genetic algorithm is applied to optimize the model parameters by minimizing the difference between the model-predicted values and the field-observed values. It is seamlessly integrated with a well-developed hydraulic and water quality modeling system. The approach provides a generic tool and methodology for engineers to construct a sound water quality model in an expedient manner. The method is applied to a real water system and demonstrates that a water quality model can be optimized for managing adequate water supply to public communities. PMID:16854809

  18. A two-stage sequential linear programming approach to IMRT dose optimization

    PubMed Central

    Zhang, Hao H; Meyer, Robert R; Wu, Jianzhou; Naqvi, Shahid A; Shi, Leyuan; D’Souza, Warren D

    2010-01-01

    The conventional IMRT planning process involves two stages: the first stage consists of fast but approximate idealized pencil beam dose calculations and dose optimization, and the second stage consists of discretization of the intensity maps followed by intensity map segmentation and a more accurate final dose calculation corresponding to physical beam apertures. Consequently, there can be differences between the presumed dose distribution corresponding to pencil beam calculations and optimization and a more accurately computed dose distribution corresponding to beam segments that takes into account collimator-specific effects. IMRT optimization is computationally expensive and has therefore led to the use of heuristic (e.g., simulated annealing and genetic algorithm) approaches that do not encompass a global view of the solution space. We modify the traditional two-stage IMRT optimization process by augmenting the second stage with accurate Monte Carlo-based kernel-superposition dose calculations corresponding to beam apertures, combined with an exact mathematical-programming-based sequential optimization approach that uses linear programming (SLP). Our approach was tested on three challenging clinical test cases with multileaf collimator constraints corresponding to two vendors. We compared our approach to the conventional IMRT planning approach, a direct-aperture approach and a segment weight optimization approach. Our results in all three cases indicate that the SLP approach outperformed the other approaches, achieving superior critical structure sparing. Convergence of our approach is also demonstrated. Finally, our approach has also been integrated with a commercial treatment planning system and may be utilized clinically. PMID:20071764
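
    The SLP engine itself is generic and easy to sketch: linearize the objective at the current point, solve an LP inside a box trust region, accept improving steps, and shrink the box otherwise. The objective below is a toy, not a dose model:

      import numpy as np
      from scipy.optimize import linprog

      def fobj(x):
          return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

      def grad(x):
          return np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])

      x, box = np.array([4.0, -3.0]), 1.0
      for it in range(30):
          g = grad(x)
          res = linprog(g, bounds=[(-box, box)] * 2, method="highs")  # step LP
          step = res.x
          if fobj(x + step) < fobj(x):   # accept improving steps...
              x = x + step
          else:                          # ...otherwise shrink the trust region
              box *= 0.5
          if box < 1e-6:
              break
      print(x)   # converges to the minimizer (1, 2)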

  19. Electronic enclosure design using distributed particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Scriven, Ian; Lu, Junwei; Lewis, Andrew

    2013-02-01

    This article proposes a method for designing electromagnetic compatibility shielding enclosures using a peer-to-peer distributed optimization system built on a modified particle swarm optimization algorithm. This optimization system is used to obtain optimal solutions to a shielding enclosure design problem efficiently with respect to both electromagnetic shielding efficiency and thermal performance. During the optimization procedure it becomes evident that optimization algorithms and computational models must be properly matched in order to achieve efficient operation. The proposed system is designed to be tolerant of faults and resource heterogeneity, and as such would find use in environments where large-scale computing resources are not available, such as smaller engineering companies, where it would allow computer-aided design by optimization using existing resources with little to no financial outlay.

  20. A flow path model for regional water distribution optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.

    2009-09-01

    We develop a flow path model for the optimization of a regional water distribution system. The model simultaneously describes a water distribution system in two parts: (1) the water delivery relationship between suppliers and receivers and (2) the physical water delivery network. In the first part, the model considers waters from different suppliers as multiple commodities. This helps the model clearly describe water deliveries by identifying the relationship between suppliers and receivers. The physical part characterizes a physical water distribution network by all possible flow paths. The flow path model can be used to optimize not only the suppliers to each receiver but also their associated flow paths for supplying water. This characteristic leads to the optimum solution that contains the optimal scheduling results and detailed information concerning water distribution in the physical system. That is, the water rights owner, water quantity, water location, and associated flow path of each delivery action are represented explicitly in the results rather than merely as an optimized total flow quantity in each arc of a distribution network. We first verify the proposed methodology on a hypothetical water distribution system. Then we apply the methodology to the water distribution system associated with the Tou-Qian River basin in northern Taiwan. The results show that the flow path model can be used to optimize the quantity of each water delivery, the associated flow path, and the water trade and transfer strategy.

  1. The Relationship between Distributed Leadership and Teachers' Academic Optimism

    ERIC Educational Resources Information Center

    Mascall, Blair; Leithwood, Kenneth; Straus, Tiiu; Sacks, Robin

    2008-01-01

    Purpose: The goal of this study was to examine the relationship between four patterns of distributed leadership and a modified version of a variable Hoy et al. have labeled "teachers' academic optimism." The distributed leadership patterns reflect the extent to which the performance of leadership functions is consciously aligned across the sources…

  2. Optimal Reward Functions in Distributed Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan

    2000-01-01

    We consider the design of multi-agent systems so as to optimize an overall world utility function when (1) those systems lack centralized communication and control, and (2) each agent runs a distinct Reinforcement Learning (RL) algorithm. A crucial issue in such design problems is how to initialize/update each agent's private utility function so as to induce the best possible world utility. Traditional 'team game' solutions to this problem sidestep this issue and simply assign to each agent the world utility as its private utility function. In previous work we used the 'Collective Intelligence' framework to derive a better choice of private utility functions, one that results in world utility performance up to orders of magnitude superior to that ensuing from use of the team game utility. In this paper we extend these results. We derive the general class of private utility functions that both are easy for the individual agents to learn and that, if learned well, result in high world utility. We demonstrate experimentally that using these new utility functions can result in significantly improved performance over that of our previously proposed utility, over and above that previous utility's superiority to the conventional team game utility.

  3. Factorization and the synthesis of optimal feedback gains for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E.

    1990-01-01

    An approach based on Volterra factorization leads to a new methodology for the analysis and synthesis of the optimal feedback gain in the finite-time linear quadratic control problem for distributed parameter systems. The approach circumvents the need for solving and analyzing Riccati equations and provides a more transparent connection between the system dynamics and the optimal gain. The general results are further extended and specialized for the case where the underlying state is characterized by autonomous differential-delay dynamics. Numerical examples are given to illustrate the second-order convergence rate that is derived for an approximation scheme for the optimal feedback gain in the differential-delay problem.

  4. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  5. Pitfalls and optimal approaches to diagnose melioidosis.

    PubMed

    Kingsley, Paul Vijay; Arunkumar, Govindakarnavar; Tipre, Meghan; Leader, Mark; Sathiakumar, Nalini

    2016-06-01

    Melioidosis is a severe and fatal infectious disease in the tropics and subtropics. It presents as a febrile illness with protean manifestations, ranging from chronic localized infection to acute fulminant septicemia with dissemination of infection to multiple organs characterized by abscesses. Pneumonia is the most common clinical presentation. Because of the wide range of clinical presentations, physicians may misdiagnose the disease as tuberculosis, pneumonia or other pyogenic infections and treat it accordingly. The purpose of this paper is to present common pitfalls in diagnosis and provide optimal approaches to enable early diagnosis and prompt treatment of melioidosis. Melioidosis may occur beyond the boundaries of endemic areas. There is no pathognomonic feature specific to a diagnosis of melioidosis. In endemic areas, physicians need to expand the diagnostic work-up to include melioidosis when confronted with clinical scenarios of pyrexia of unknown origin, progressive pneumonia or sepsis. Radiological imaging is an integral part of the diagnostic workup. Knowledge of the modes of transmission and risk factors will add support in clinically suspected cases to initiate therapy. In situations of clinically highly probable or possible cases where laboratory bacteriological confirmation is not possible, applying evidence-based criteria and empirical treatment with antimicrobials is recommended. It is of prime importance that patients undergo the full course of antimicrobial therapy to avoid relapse and recurrence. Early diagnosis and appropriate management are crucial in reducing serious complications leading to high mortality, and in preventing recurrences of the disease. Thus, there is a crucial need for promoting awareness among physicians at all levels and for improved diagnostic microbiology services. Further, the need for making the disease notifiable and/or initiating melioidosis registries in endemic countries appears to be compelling. PMID:27262061

  6. Optimization of Dose Distribution for the System of Linear Accelerator-Based Stereotactic Radiosurgery.

    NASA Astrophysics Data System (ADS)

    Suh, Tae-Suk

    The work suggested in this paper addresses a method for obtaining an optimal dose distribution for stereotactic radiosurgery. Since stereotactic radiosurgery utilizes multiple noncoplanar arcs and a three-dimensional dose evaluation technique, many beam parameters and complex optimization criteria are included in the dose optimization. Consequently, a lengthy computation time is required to optimize even the simplest case by a trial and error method. The basic approach presented here is to use both an analytical and an experimental optimization to minimize the dose to critical organs while maintaining a dose shaped to the target. The experimental approach is based on shaping the target volumes using multiple isocenters from dose experience, or on field shaping using a beam's eye view technique. The analytical approach is to adapt computer-aided design optimization to find optimum parameters automatically. Three-dimensional approximate dose models are developed to simulate the exact dose model using a spherical or cylindrical coordinate system. Optimum parameters are found much faster with the use of computer-aided design optimization techniques. The implementation of computer-aided design algorithms with the approximate dose model and the application of the algorithms to several cases are discussed. It is shown that the approximate dose model gives dose distributions similar to those of the exact dose model, which makes the approximate dose model an attractive alternative to the exact dose model, and much more efficient in terms of computer-aided design and visual optimization.

  7. Optical clock signal distribution and packaging optimization

    NASA Astrophysics Data System (ADS)

    Wu, Linghui

    Polymer-based waveguides for optoelectronic interconnects and packaging were fabricated by a fabrication process that is compatible with the Si CMOS packaging process. An optoelectronic interconnection layer (OIL) for the high-speed massive clock signal distribution for the Cray T-90 supercomputer board employing optical multimode channel waveguides in conjunction with surface-normal waveguide grating couplers and a 1-to-2 3 dB splitter was constructed. Equalized optical paths were realized using an optical H-tree structure having 48 optical fanouts. This device could be increased to 64 without introducing any additional complications. A 1-to-48 fanout H-tree structure using Ultradel 9000D series polyimide was fabricated. The propagation loss and splitting loss have been measured as 0.21 dB/cm and 0.4 dB/splitter at 850 nm. The power budget was discussed, and the H-tree waveguide fully satisfies the power budget requirement. A tapered waveguide coupler was employed to match the mode profile between the single-mode fiber and the multimode channel waveguides of the OIL. A thermo-optical based multimode switch was designed, fabricated, and tested. The finite difference method was used to simulate the thermal distribution in the polymer waveguide. Both stable and transient conditions have been calculated. The thermo-optical switch was fabricated and tested. The switching speed of 1 ms was experimentally confirmed, fitting well with the simulation results. Thermo-optic switching for randomly polarized light at a wavelength of 850 nm was experimentally confirmed, as was a stable attenuation of 25 dB. The details of tapered waveguide fabrication were investigated. Compression-molded 3-D tapered waveguides were demonstrated for the first time. Not only the vertical depth variation but also the linear dimensions of the molded waveguides were well beyond the limits of what any other conventional waveguide fabrication method is capable of providing. Molded waveguides with

  8. A multiple objective optimization approach to aircraft control systems design

    NASA Technical Reports Server (NTRS)

    Tabak, D.; Schy, A. A.; Johnson, K. G.; Giesy, D. P.

    1979-01-01

    The design of an aircraft lateral control system, subject to several performance criteria and constraints, is considered. Whereas previous studies of the same model pursued a single-criterion optimization with the other performance requirements expressed as constraints, the current approach involves a multiple-criteria optimization. In particular, a Pareto optimal solution is sought.

  9. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
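
    The Pareto-based selection that drives such an extension is compact enough to sketch: a trial vector produced by standard DE mutation and crossover replaces its parent only if it Pareto-dominates it. The following minimal sketch is illustrative, not the paper's implementation; the bi-objective test problem and all parameter settings are assumptions.

```python
# Minimal sketch of Pareto-based differential evolution; illustrative only,
# not the paper's implementation. The bi-objective test problem is assumed.
import numpy as np

def dominates(f_a, f_b):
    # True if objective vector f_a Pareto-dominates f_b (minimization)
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def pareto_de(objectives, bounds, pop_size=40, gens=200, F=0.5, CR=0.9):
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = lo + np.random.rand(pop_size, dim) * (hi - lo)
    fit = np.array([objectives(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[np.random.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + F * (b - c), lo, hi)        # DE/rand/1 mutation
            mask = np.random.rand(dim) < CR                 # binomial crossover
            trial = np.where(mask, trial, pop[i])
            f_trial = objectives(trial)
            if dominates(f_trial, fit[i]):                  # Pareto-based replacement
                pop[i], fit[i] = trial, f_trial
    return pop, fit

# Example: Schaffer-like bi-objective problem; the Pareto set is x in [0, 2]
two_obj = lambda x: np.array([x[0] ** 2, (x[0] - 2.0) ** 2])
pop, fit = pareto_de(two_obj, bounds=[(-5.0, 5.0)])
```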

  10. A new distributed systems scheduling algorithm: a swarm intelligence approach

    NASA Astrophysics Data System (ADS)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

    The scheduling problem in distributed systems is known as an NP-complete problem, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the problem of distributed systems scheduling. To achieve efficient load balancing, Artificial Bee Colony (ABC) has been applied as the local search in the proposed memetic algorithm. The proposed method has been compared to an existing memetic-based approach in which a Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the aforementioned method in terms of communication cost.

  11. An Optimization Framework for Dynamic, Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara

    2003-01-01

    This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments and utility and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.

  12. Computation and Optimization of Dose Distributions for Rotational Stereotactic Radiosurgery

    NASA Astrophysics Data System (ADS)

    Fox, Timothy Harold

    1994-01-01

    The stereotactic radiosurgery technique presented in this work is the patient rotator method, which rotates the patient in a sitting position with a stereotactic head frame attached to the skull while collimated non-coplanar radiation beams from a 6 MV medical linear accelerator are delivered to the target point. The hypothesis of this dissertation is that accurate, three-dimensional dose distributions can be computed and optimized for the patient rotator method used in stereotactic radiosurgery. This dissertation presents research results in three areas related to computing and optimizing dose distributions for the patient rotator method. A three-dimensional dose model was developed to calculate the dose at any point in the cerebral cortex using a circular and adjustable collimator system and the geometry of the radiation beam with respect to the target point. The computed dose distributions compared to experimental measurements had an average maximum deviation of <0.7 mm for the relative isodose distributions greater than 50%. A system was developed to qualitatively and quantitatively visualize the computed dose distributions with patient anatomy. A registration method was presented for transforming each dataset to a common reference system. A method for computing the intersections of anatomical contours' boundaries was developed to calculate dose-volume information. The system efficiently and accurately reduced the large computed, volumetric sets of dose data, medical images, and anatomical contours to manageable images and graphs. A computer-aided optimization method was developed for rigorously selecting beam angles and weights for minimizing the dose to normal tissue. Linear programming was applied as the optimization method. The computed optimal beam angles and weights for a defined objective function and dose constraints exhibited a superior dose distribution compared to a standard plan. The developed dose model, qualitative and quantitative visualization
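
    The linear-programming step described above, minimizing normal-tissue dose subject to prescription constraints on the target, can be sketched in a few lines; the dose-deposition matrices and the unit prescription level below are toy assumptions, not data from the dissertation.

```python
# Sketch of beam-weight selection by linear programming, in the spirit of the
# abstract: minimize total normal-tissue dose subject to target prescription
# constraints. Dose matrices and limits are toy assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_beams = 6
D_target = rng.uniform(0.5, 1.0, (8, n_beams))   # dose per unit weight, target voxels
D_normal = rng.uniform(0.0, 0.4, (20, n_beams))  # dose per unit weight, normal tissue

c = D_normal.sum(axis=0)        # objective: total normal-tissue dose
A_ub = -D_target                # D_target @ w >= 1  rewritten as  -D_target @ w <= -1
b_ub = -np.ones(D_target.shape[0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_beams)
print(res.x)                    # nonnegative beam weights meeting the prescription
```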

  13. Group Counseling Optimization: A Novel Approach

    NASA Astrophysics Data System (ADS)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
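
    For reference, two of the named benchmarks can be written out directly; the definitions below follow the standard forms of the Rastrigin and Ackley functions, both of which have a global minimum of zero at the origin.

```python
# Standard definitions of two of the benchmark functions named in the abstract.
import numpy as np

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

print(rastrigin(np.zeros(10)), ackley(np.zeros(10)))  # 0.0 0.0 (up to rounding)
```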

  14. Multi-objective optimal dispatch of distributed energy resources

    NASA Astrophysics Data System (ADS)

    Longe, Ayomide

    This thesis is composed of two papers which investigate the optimal dispatch for distributed energy resources. In the first paper, an economic dispatch problem for a community microgrid is studied. In this microgrid, each agent pursues an economic dispatch for its personal resources. In addition, each agent is capable of trading electricity with other agents through a local energy market. In this paper, a simple market structure is introduced as a framework for energy trades in a small community microgrid such as the Solar Village. It was found that both sellers and buyers benefited by participating in this market. In the second paper, Semidefinite Programming (SDP) for convex relaxation of power flow equations is used for optimal active and reactive dispatch for Distributed Energy Resources (DER). Various objective functions including voltage regulation, reduced transmission line power losses, and minimized reactive power charges for a microgrid are introduced. Combinations of these goals are attained by solving a multiobjective optimization for the proposed optimal reactive power dispatch (ORPD) problem. Also, both centralized and distributed versions of this optimal dispatch are investigated. It was found that SDP made the optimal dispatch faster and the distributed solution allowed for scalability.

  15. Methodology for utilizing CD distributions for optimization of lithographic processes

    NASA Astrophysics Data System (ADS)

    Charrier, Edward W.; Mack, Chris A.; Zuo, Qiang; Maslow, Mark J.

    1997-07-01

    As the critical dimension (CD) of optical lithography processes continues to decrease, the process latitude also decreases and CD control becomes more difficult. As this trend continues, lithography engineers will find that they require improved process optimization methods which take into account the random and systematic errors that are inherent in any manufacturing process. This paper shows the methodology of such an optimization method. Lithography simulation and analysis software, combined with experimental process error distributions, are used to perform optimizations of numerical aperture and partial coherence, as well as the selection of the best OPC pattern for a given mask.

  16. Regularized Primal-Dual Subgradient Method for Distributed Constrained Optimization.

    PubMed

    Yuan, Deming; Ho, Daniel W C; Xu, Shengyuan

    2016-09-01

    In this paper, we study the distributed constrained optimization problem where the objective function is the sum of local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to the existing methods, most of which require projecting the estimates onto the constraint set at every iteration, only one projection at the last iteration is needed for our proposed method. We establish the convergence of the method by showing that it achieves an O(K^{-1/4}) convergence rate for general distributed constrained optimization, where K is the iteration counter. Finally, a numerical example is provided to validate the convergence of the proposed method. PMID:26285232
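
    A heavily simplified sketch conveys the flavor of such a scheme, consensus mixing plus local subgradient steps with a single terminal projection; the regularized primal-dual machinery of the paper is omitted, and the network, local costs, and constraint set below are assumptions.

```python
# Simplified sketch: consensus mixing + local subgradient steps, with one
# projection at the very end. Not the paper's regularized primal-dual method;
# the mixing matrix, local costs, and constraint set are assumptions.
import numpy as np

def distributed_subgradient(subgrads, W, x0, K=2000):
    # subgrads: one local subgradient oracle per node; W: doubly stochastic mixing matrix
    X = np.array(x0, dtype=float)                  # one row of variables per node
    for k in range(1, K + 1):
        X = W @ X                                  # consensus (mixing) step
        step = 1.0 / np.sqrt(k)                    # diminishing step size
        X -= step * np.array([g(x) for g, x in zip(subgrads, X)])
    return np.clip(X.mean(axis=0), -1.0, 1.0)      # single projection onto [-1, 1]

# Example: local costs f_i(x) = |x - t_i|; the global optimum is the median of t_i
targets = [0.2, 0.5, 0.9]
subgrads = [lambda x, t=t: np.sign(x - t) for t in targets]
W = np.full((3, 3), 1.0 / 3.0)
print(distributed_subgradient(subgrads, W, x0=np.zeros((3, 1))))  # ~0.5
```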

  17. Distributed Generation Planning using Peer Enhanced Multi-objective Teaching-Learning based Optimization in Distribution Networks

    NASA Astrophysics Data System (ADS)

    Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth

    2016-06-01

    In this paper, an optimization technique called the peer enhanced teaching learning based optimization (PeTLBO) algorithm is used in a multi-objective problem domain. The PeTLBO algorithm is parameter-free, which reduces the computational burden. The proposed peer enhanced multi-objective based TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and the maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is taken as 0.85 leading. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric) by comparing with the robust multi-objective technique non-dominated sorting genetic algorithm-II and also with the basic TLBO.

  18. Optimization of composite structures by estimation of distribution algorithms

    NASA Astrophysics Data System (ADS)

    Grosset, Laurent

    The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named the Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distribution algorithms (EDA), which build a statistical model of promising regions of the design space based on sets of good points and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits a faster convergence to the optimum and opens the way for a unified framework for stochastic and directional optimization.

  19. New approaches to the design optimization of hydrofoils

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency of the optimization, which measures a finite-time averaged approximation of the infinite-time averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of the finite-time averaged approximation of the infinite-time averaged statistic of interest, the total computational time of the optimization algorithm is significantly reduced. Results from the two different approaches are compared.

  20. Optimal cloning for finite distributions of coherent states

    SciTech Connect

    Cochrane, P.T.; Ralph, T.C.; Dolinska, A.

    2004-04-01

    We derive optimal cloning limits for finite Gaussian distributions of coherent states and describe techniques for achieving them. We discuss the relation of these limits to state estimation and the no-cloning limit in teleportation. A qualitatively different cloning limit is derived for a single-quadrature Gaussian quantum cloner.

  1. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of phonological processes in Russian loanword adaptation in Persian. By gathering about 50 current Russian loanwords, we selected some of them to analyze. We…

  2. Simulation based flow distribution network optimization for vacuum assisted resin transfer moulding process

    NASA Astrophysics Data System (ADS)

    Hsiao, Kuang-Ting; Devillard, Mathieu; Advani, Suresh G.

    2004-05-01

    In the vacuum assisted resin transfer moulding (VARTM) process, using a flow distribution network such as flow channels and high permeability fabrics can accelerate the resin infiltration of the fibre reinforcement during the manufacture of composite parts. The flow distribution network significantly influences the fill time and fill pattern and is essential for the process design. The current practice has been to cover the top surface of the fibre preform with the distribution media with the hope that the resin will flood the top surface immediately and penetrate through the thickness. However, this approach has some drawbacks. One is when the resin finds its way to the vent before it has penetrated the preform entirely, which results in a defective part or resin wastage. Also, if the composite structure contains ribs or inserts, this approach invariably results in dry spots. Instead of this intuitive approach, we propose a science-based approach to design the layout of the distribution network. Our approach uses flow simulation of the resin into the network and the preform and a genetic algorithm to optimize the flow distribution network. An experimental case study of a co-cured rib structure is conducted to demonstrate the design procedure and validate the optimized flow distribution network design. Good agreement between the flow simulations and the experimental results was observed. It was found that the proposed design algorithm effectively optimized the flow distribution network of the part considered in our case study and hence should prove to be a useful tool to extend the VARTM process to manufacture of complex structures with effective use of the distribution network layup.

  3. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.
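
    A minimal PSO sketch for this kind of scheduling problem is shown below; it optimizes makespan only (the paper also minimizes flowtime), and the job counts, processing times, and PSO coefficients are illustrative assumptions.

```python
# Minimal PSO sketch for job-to-machine scheduling, optimizing makespan only.
# Job counts, processing times, and PSO coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_jobs, n_machines = 12, 3
proc = rng.uniform(1.0, 9.0, n_jobs)            # processing time of each job

def makespan(position):
    # decode a continuous position into a job-to-machine assignment by rounding
    assign = np.clip(np.round(position), 0, n_machines - 1).astype(int)
    return max(proc[assign == m].sum() for m in range(n_machines))

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    X = rng.uniform(0, n_machines - 1, (n_particles, n_jobs))
    V = np.zeros_like(X)
    pbest, pbest_f = X.copy(), np.array([makespan(x) for x in X])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)   # velocity update
        X = np.clip(X + V, 0, n_machines - 1)
        f = np.array([makespan(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

schedule, best = pso()
print(best)   # best makespan found; lower bound is proc.sum() / n_machines
```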

  4. System approach to distributed sensor management

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid

    2010-04-01

    Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework which demonstrates application layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a System with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensors or processes) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations that are specifically designed to accommodate the need for standard representations of common functions, while supporting the need for feature-based functions that are typically vendor specific. The dynamic qualities of the protocol enable a User GUI application the flexibility of mapping widget-level controls to each device based on reported capabilities in real-time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network will be described in this paper.

  5. Molecular Approaches for Optimizing Vitamin D Supplementation.

    PubMed

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B exposed human skin. However, avoidance of sufficient sun exposure via predominant indoor activities, textile coverage, dark skin at higher latitude, and seasonal variations makes the intake of vitamin D fortified food or direct vitamin D supplementation necessary. Vitamin D has via its biologically most active metabolite 1α,25-dihydroxyvitamin D and the transcription factor vitamin D receptor a direct effect on the epigenome and transcriptome of many human tissues and cell types. Different interpretation of results from observational studies with vitamin D led to some dispute in the field on the desired optimal vitamin D level and the recommended daily supplementation. This chapter will provide background on the epigenome- and transcriptome-wide functions of vitamin D and will outline how this insight may be used for determining of the optimal vitamin D status of human individuals. These reflections will lead to the concept of a personal vitamin D index that may be a better guideline for an optimized vitamin D supplementation than population-based recommendations. PMID:26827955

  6. Scalar and Multivariate Approaches for Optimal Network Design in Antarctica

    NASA Astrophysics Data System (ADS)

    Hryniw, Natalia

    Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, but also for maintaining representative records and for tuning atmospheric models. Here scalar theory for optimal network design is expanded into a multivariate framework, to allow for optimal station siting for full-field optimization. Ensemble sensitivity theory is expanded to produce the covariance trace approach, which optimizes for the trace of the covariance matrix. Relative entropy is also used for multivariate optimization as an information theory approach for finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. The two methods produce different results, which are tied to the fundamental physical parameters of the Antarctic temperature field.

  7. Applications of the theory of optimal control of distributed-parameter systems to structural optimization

    NASA Technical Reports Server (NTRS)

    Armand, J. P.

    1972-01-01

    An extension of classical methods of optimal control theory for systems described by ordinary differential equations to distributed-parameter systems described by partial differential equations is presented. An application is given involving the minimum-mass design of a simply supported shear plate with a fixed fundamental frequency of vibration. An optimal plate thickness distribution in analytical form is found. The case of a minimum-mass design of an elastic sandwich plate whose fundamental frequency of free vibration is fixed is also considered. Under the most general conditions, the optimization problem reduces to the solution of two simultaneous partial differential equations involving the optimal thickness distribution and the modal displacement. One equation is the uniform energy distribution expression which was found by Ashley and McIntosh for the optimal design of one-dimensional structures with frequency constraints, and by Prager and Taylor for various design criteria in one and two dimensions. The second equation requires dynamic equilibrium at the preassigned vibration frequency.

  8. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  9. Optimization of an Aeroservoelastic Wing with Distributed Multiple Control Surfaces

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.

    2015-01-01

    This paper considers the aeroelastic optimization of a subsonic transport wingbox under a variety of static and dynamic aeroelastic constraints. Three types of design variables are utilized: structural variables (skin thickness, stiffener details), the quasi-steady deflection scheduling of a series of control surfaces distributed along the trailing edge for maneuver load alleviation and trim attainment, and the design details of an LQR controller, which commands oscillatory hinge moments into those same control surfaces. Optimization problems are solved where a closed loop flutter constraint is forced to satisfy the required flight margin, and mass reduction benefits are realized by relaxing the open loop flutter requirements.

  10. [Approaches to the optimization of medical services for the population].

    PubMed

    Babanov, S A

    2001-01-01

    Describes modern approaches to the optimization of medical care of the population under conditions of funding deficiency. Expenditure cutting is evaluated from the viewpoint of evidence-based medicine (allotting finances to concrete patients and services). PMID:11515111

  11. Distributed memory approaches for robotic neural controllers

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of Adaptive Vector Quantizers or Self Organizing Maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest neighbor rule. Both approaches are tested based on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest neighbor pattern recognition techniques.

  12. Optimization approaches to volumetric modulated arc therapy planning.

    PubMed

    Unkelbach, Jan; Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-01

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed. PMID:25735291

  13. Optimization approaches to volumetric modulated arc therapy planning

    SciTech Connect

    Unkelbach, Jan Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  14. Reliability Optimization of Radial Distribution Systems Employing Differential Evolution and Bare Bones Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Kela, K. B.; Arya, L. D.

    2014-09-01

    This paper describes a methodology for determination of optimum failure rate and repair time for each section of a radial distribution system. An objective function in terms of reliability indices and their target values is selected. These indices depend mainly on failure rate and repair time of a section present in a distribution network. A cost is associated with the modification of failure rate and repair time. Hence the objective function is optimized subject to failure rate and repair time of each section of the distribution network considering the total budget allocated to achieve the task. The problem has been solved using differential evolution and bare bones particle swarm optimization. The algorithm has been implemented on a sample radial distribution system.
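
    Bare bones PSO, one of the two optimizers named in the title, replaces the usual velocity update with Gaussian sampling around the personal and global bests; a generic sketch follows, with a stand-in cost in place of the paper's reliability objective.

```python
# Sketch of the "bare bones" PSO variant: velocity updates are replaced by
# Gaussian sampling around the personal and global bests. The scalar cost is
# a stand-in for the paper's reliability objective.
import numpy as np

rng = np.random.default_rng(2)

def bare_bones_pso(cost, dim, lo, hi, n=30, iters=300):
    X = rng.uniform(lo, hi, (n, dim))
    pbest, pbest_f = X.copy(), np.array([cost(x) for x in X])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        mu = 0.5 * (pbest + g)                  # mean of personal and global bests
        sigma = np.abs(pbest - g) + 1e-12       # spread shrinks as the swarm converges
        X = np.clip(rng.normal(mu, sigma), lo, hi)
        f = np.array([cost(x) for x in X])
        better = f < pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

x_best, f_best = bare_bones_pso(lambda x: np.sum(x ** 2), dim=5, lo=-10, hi=10)
print(f_best)   # approaches 0
```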

  15. Distribution function approach to redshift space distortions

    SciTech Connect

    Seljak, Uroš; McDonald, Patrick E-mail: pvmcdonald@lbl.gov

    2011-11-01

    We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter

  16. Universal scaling of optimal current distribution in transportation networks.

    PubMed

    Simini, Filippo; Rinaldo, Andrea; Maritan, Amos

    2009-04-01

    Transportation networks are inevitably selected with reference to their global cost which depends on the strengths and the distribution of the embedded currents. We prove that optimal current distributions for a uniformly injected d -dimensional network exhibit robust scale-invariance properties, independently of the particular cost function considered, as long as it is convex. We find that, in the limit of large currents, the distribution decays as a power law with an exponent equal to (2d-1)/(d-1). The current distribution can be exactly calculated in d=2 for all values of the current. Numerical simulations further suggest that the scaling properties remain unchanged for both random injections and by randomizing the convex cost functions. PMID:19518304
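
    Evaluating the reported exponent in the two lowest admissible dimensions makes the result concrete:

```latex
% Worked values of the reported power-law exponent (2d-1)/(d-1):
\[
  P(I) \sim I^{-\frac{2d-1}{d-1}}, \qquad
  \frac{2d-1}{d-1}\Big|_{d=2} = 3, \qquad
  \frac{2d-1}{d-1}\Big|_{d=3} = \frac{5}{2}.
\]
```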

  17. Comparison of Two Spatial Optimization Techniques: A Framework to Solve Multiobjective Land Use Distribution Problems

    NASA Astrophysics Data System (ADS)

    Meyer, Burghard Christian; Lescot, Jean-Marie; Laplana, Ramon

    2009-02-01

    Two spatial optimization approaches, developed from the opposing perspectives of ecological economics and landscape planning and aimed at the definition of new distributions of farming systems and of land use elements, are compared and integrated into a general framework. The first approach, applied to a small river catchment in southwestern France, uses SWAT (Soil and Water Assessment Tool) and a weighted goal programming model in combination with a geographical information system (GIS) for the determination of optimal farming system patterns, based on selected objective functions to minimize deviations from the goals of reducing nitrogen and maintaining income. The second approach, demonstrated in a suburban landscape near Leipzig, Germany, defines a GIS-based predictive habitat model for the search of unfragmented regions suitable for hare populations (Lepus europaeus), followed by compromise optimization with the aim of planning a new habitat structure distribution for the hare. The multifunctional problem is solved by the integration of the three landscape functions (“production of cereals,” “resistance to soil erosion by water,” and “landscape water retention”). Through the comparison, we propose a framework for the definition of optimal land use patterns based on optimization techniques. The framework includes the main aspects to solve land use distribution problems with the aim of finding the optimal or best land use decisions. It integrates indicators, goals of spatial developments and stakeholders, including weighting, and model tools for the prediction of objective functions and risk assessments. Methodological limits of the uncertainty of data and model outcomes are stressed. The framework clarifies the use of optimization techniques in spatial planning.

  18. Determination Method for Optimal Installation of Active Filters in Distribution Network with Distributed Generation

    NASA Astrophysics Data System (ADS)

    Kawasaki, Shoji; Hayashi, Yasuhiro; Matsuki, Junya; Kikuya, Hirotaka; Hojo, Masahide

    Recently, harmonic problems in distribution networks have become a concern against the background of the increasing connection of distributed generation (DG) and the spread of power electronics equipment. As one countermeasure, controlling the harmonic voltage by installing active filters (AFs) has been researched. In this paper, the authors propose a computation method to determine the optimal allocations, gains, and installation number of AFs so as to minimize the maximum value of voltage total harmonic distortion (THD) in a distribution network with DGs. The developed method is based on particle swarm optimization (PSO), which is one of the nonlinear optimization methods. In particular, the paper considers the case where harmonic voltages and currents arise in a distribution network because many DGs are connected through inverters, and proposes a method for determining the optimal allocation and gain of AFs so as to suppress harmonics throughout the distribution network. The authors also propose a method for determining the minimum number of AFs that must be installed, taking into account the case where the harmonic suppression target cannot be reached by a single AF. To verify the validity and effectiveness of the proposed method, numerical simulations are carried out using an analytical model of a distribution network with DGs.

  19. A hybrid simulation-optimization approach for solving the areal groundwater pollution source identification problems

    NASA Astrophysics Data System (ADS)

    Ayvaz, M. Tamer

    2016-07-01

    In this study, a new simulation-optimization approach is proposed for solving the areal groundwater pollution source identification problem, which is an ill-posed inverse problem. In the simulation part of the proposed approach, groundwater flow and pollution transport processes are simulated by modeling the given aquifer system with the MODFLOW and MT3DMS models. The developed simulation model is then integrated into a newly proposed hybrid optimization model where a binary genetic algorithm and a generalized reduced gradient method are mutually used. This is a novel approach and it is employed for the first time in areal pollution source identification problems. The objective of the proposed hybrid optimization approach is to simultaneously identify the spatial distributions and input concentrations of the unknown areal groundwater pollution sources by using a limited number of pollution concentration time series at the monitoring well locations. The applicability of the proposed simulation-optimization approach is evaluated on a hypothetical aquifer model for different pollution source distributions. Furthermore, model performance is evaluated for measurement error conditions, different genetic algorithm parameter combinations, different numbers and locations of the monitoring wells, and different heterogeneous hydraulic conductivity fields. The identified results indicate that the proposed simulation-optimization approach may be an effective way to solve areal groundwater pollution source identification problems.

  20. Approaches for Informing Optimal Dose of Behavioral Interventions

    PubMed Central

    King, Heather A.; Maciejewski, Matthew L.; Allen, Kelli D.; Yancy, William S.; Shaffer, Jonathan A.

    2015-01-01

    Background There is little guidance about how to select dose parameter values when designing behavioral interventions. Purpose The purpose of this study is to present approaches to inform intervention duration, frequency, and amount when (1) the investigator has no a priori expectation and is seeking a descriptive approach for identifying and narrowing the universe of dose values or (2) the investigator has an a priori expectation and is seeking validation of this expectation using an inferential approach. Methods Strengths and weaknesses of various approaches are described and illustrated with examples. Results Descriptive approaches include retrospective analysis of data from randomized trials, assessment of perceived optimal dose via prospective surveys or interviews of key stakeholders, and assessment of target patient behavior via prospective, longitudinal, observational studies. Inferential approaches include nonrandomized, early-phase trials and randomized designs. Conclusions By utilizing these approaches, researchers may more efficiently apply resources to identify the optimal values of dose parameters for behavioral interventions. PMID:24722964

  1. Optimality approaches to describe characteristic fluvial patterns on landscapes

    PubMed Central

    Paik, Kyungrock; Kumar, Praveen

    2010-01-01

    Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257

  2. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  3. Annular flow optimization: A new integrated approach

    SciTech Connect

    Maglione, R.; Robotti, G.; Romagnoli, R.

    1997-07-01

    During the drilling stage of an oil and gas well, the hydraulic circuit of the mud assumes great importance with respect to most of its numerous and various constituent parts (mostly in the annular sections). Each of them imposes conditions that must be satisfied in order to guarantee both the safety of the operations and the performance optimization of each single element of the circuit. The most important tasks for the annular part of the drilling hydraulic circuit are the following: (1) maximize the available pressure at the last casing shoe; (2) avoid borehole wall erosion; and (3) guarantee hole cleaning. A new integrated system considering all the elements of the annular part of the drilling hydraulic circuit and the constraints imposed by each of them has been realized. In this way, the family of flow parameters (mud rheology and pump rate) simultaneously satisfying all the constraints of the annular section has been found. Finally, two examples regarding a standard and a narrow annular section (slim hole) are reported, showing briefly all the steps of the calculations until the optimum flow parameter family (for that operational drilling condition) that simultaneously satisfies all the flow parameter limitations imposed by the elements of the annular section circuit is reached.

  4. Strategic approaches to optimizing peptide ADME properties.

    PubMed

    Di, Li

    2015-01-01

    Development of peptide drugs is challenging but also quite rewarding. Five blockbuster peptide drugs are currently on the market, and six new peptides received first marketing approval as new molecular entities in 2012. Although peptides only represent 2% of the drug market, the market is growing twice as quickly and might soon occupy a larger niche. Natural peptides typically have poor absorption, distribution, metabolism, and excretion (ADME) properties with rapid clearance, short half-life, low permeability, and sometimes low solubility. Strategies have been developed to improve peptide drugability through enhancing permeability, reducing proteolysis and renal clearance, and prolonging half-life. In vivo, in vitro, and in silico tools are available to evaluate ADME properties of peptides, and structural modification strategies are in place to improve peptide developability. PMID:25366889

  5. Comparative Properties of Collaborative Optimization and Other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  6. Comparative Properties of Collaborative Optimization and other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  7. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    PubMed

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

    This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization by emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using the information of the locally best known solution and the globally best known solution, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using many multimodal benchmark functions. PMID:24705545
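
    The building block is concrete enough to sketch: a one-layer projection neural network integrated by the Euler method, whose equilibria are the KKT points of the box-constrained problem. The collective, PSO-coordinated layer is omitted, and the nonconvex objective and box constraints below are assumptions.

```python
# Sketch of a one-layer projection neural network whose equilibria are KKT
# points of a box-constrained problem. The collective (PSO-coordinated) layer
# is omitted; the nonconvex objective and box are assumptions.
import numpy as np

def projection_network(grad_f, x0, lo, hi, dt=0.01, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        # neurodynamics: dx/dt = -x + P(x - grad f(x)), P = box projection
        x += dt * (-x + np.clip(x - grad_f(x), lo, hi))
    return x

# Example: f(x) = x^4 - 3x^2 + x on [-2, 2] has two local minima; each network
# settles to a KKT point from its own start, and the collective keeps the best.
grad = lambda x: 4 * x ** 3 - 6 * x + 1
candidates = [projection_network(grad, [s], -2.0, 2.0) for s in (-2.0, 0.0, 2.0)]
print(candidates)   # roughly -1.30 (global) and 1.13 (local)
```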

  8. Structural approaches to spin glasses and optimization problems

    NASA Astrophysics Data System (ADS)

    de Sanctis, Luca

    We introduce the concept of Random Multi-Overlap Structure (RaMOSt) as a generalization of the one introduced by M. Aizenman et al. for non-diluted spin glasses. We use this concept to find generalized bounds for the free energy of the Viana-Bray model of diluted spin glasses and to formulate and prove the Extended Variational Principle that implicitly provides the free energy of the model. Then we exhibit a theorem for the limiting RaMOSt, analogous to the one found by F. Guerra for the Sherrington-Kirkpatrick model, that describes some stability properties of the model. We also show how our technique can be used to prove the existence of the thermodynamic limit of the free energy. We then propose an ultrametric breaking of replica symmetry for diluted spin glasses in the framework of Random Multi-Overlap Structures (RaMOSt). Such a proposal is closer to the Parisi theory for non-diluted spin glasses than the theory based on the iterative approach. Our approach allows us to formulate an ansatz in which the Broken Replica Symmetry trial function depends on a set of numbers, over which one has to take the infimum (as opposed to a nested chain of probability distributions). Our scheme suggests that the order parameter is determined by the probability distribution of the multi-overlap in a similar sense as in the non-diluted case, and it is not necessarily a functional. Such results are then extended to the K-SAT and p-XOR-SAT optimization problems, and to the spherical mean field spin glass. The ultrametric structure exhibits a factorization property similar to the one of the optimal structures for the Viana-Bray model. The present work paves the way to a revisited Parisi theory for diluted spin systems. Moreover, it emphasizes some structural analogies among different models, which also seem to be plausible for models that still escape good mathematical control. This structural analysis seems quite promising both mathematically and physically.

  9. An approach to structure/control simultaneous optimization for large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Onoda, Junjiro; Haftka, Raphael T.

    1987-01-01

    This paper presents an approach to the simultaneous optimal design of a structure and control system for large flexible spacecraft based on a realistic objective function and constraints. The weight or total cost of the structure and control system is minimized subject to constraints on the magnitude of the response to a given disturbance involving both rigid-body and elastic modes. A nested optimization technique is developed to solve the combined problem. As an example, a simple beam-like spacecraft under a steady-state white-noise disturbance force is investigated and some optimization results are presented. In the numerical examples, the stiffness distribution, location of the controller, and control gains are optimized. Direct feedback control and linear quadratic optimal control laws are used with both inertial and noninertial disturbing forces. It is shown that the total cost is sensitive to the overall structural stiffness, so that simultaneous optimization of the structure and control system is indeed useful.

  10. Optimal Voltage Regulation for Unbalanced Distribution Networks Considering Distributed Energy Resources

    SciTech Connect

    Liu, Guodong; Ceylan, Oguzhan; Xu, Yan; Tomsovic, Kevin

    2015-01-01

    With increasing penetration of distributed generation in distribution networks (DN), the secure and optimal operation of DN has become an important concern. In this paper, an iterative quadratically constrained quadratic programming model to minimize voltage deviations and maximize distributed energy resource (DER) active power output in a three-phase unbalanced distribution system is developed. The optimization model is based on the linearized sensitivity coefficients between controlled variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DERs). To avoid oscillation of the solution when it is close to the optimum, a golden search method is introduced to control the step size. Numerical simulations on modified IEEE 13-node test feeders show the efficiency of the proposed model. Compared to the results obtained by heuristic search (harmony algorithm), the proposed model converges quickly to the global optimum.
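
    The golden search mentioned above is a standard golden-section line search applied to the step length along the sensitivity-based update direction; the sketch below uses a stand-in merit function and update direction.

```python
# Sketch of golden-section step-size control: the step along the
# sensitivity-based update direction is chosen by golden-section search to
# avoid oscillation near the optimum. Merit function and direction are
# stand-in assumptions.
import numpy as np

PHI = (np.sqrt(5.0) - 1.0) / 2.0   # golden ratio conjugate, ~0.618

def golden_section(merit, a=0.0, b=1.0, tol=1e-4):
    # minimize a unimodal scalar function on [a, b]
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    while b - a > tol:
        if merit(c) < merit(d):
            b, d = d, c
            c = b - PHI * (b - a)
        else:
            a, c = c, d
            d = a + PHI * (b - a)
    return 0.5 * (a + b)

x = np.array([0.9, 1.1])               # current control setting (assumed)
direction = np.array([0.5, -0.4])      # from linearized sensitivities (assumed)
merit = lambda t: np.sum((x + t * direction - 1.0) ** 2)   # voltage-deviation-like
step = golden_section(merit)
x_next = x + step * direction
print(step, x_next)
```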

  11. An Optimality-Based Fully-Distributed Watershed Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, L., Jr.

    2015-12-01

    Watershed ecohydrological models are essential tools to assess the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, and mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality based ecohydrological models have been proposed which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon and nutrients in vegetation parameterization without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model, which incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distribution of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial

  12. A Communication-Optimal Framework for Contracting Distributed Tensors

    SciTech Connect

    Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-11-16

    Tensor contractions are extremely compute-intensive generalized matrix multiplication operations encountered in many computational science fields, such as quantum chemistry and nuclear physics. Unlike distributed matrix multiplication, which has been extensively studied, limited work has been done on understanding distributed tensor contractions. In this paper, we characterize distributed tensor contraction algorithms on torus networks. We develop a framework with three fundamental communication operators to generate communication-efficient contraction algorithms for arbitrary tensor contractions. We show that, for a given amount of memory per processor, our framework is communication optimal for all tensor contractions. We demonstrate the performance and scalability of our framework on up to 262,144 cores of a BG/Q supercomputer using five tensor contraction examples.

  13. Optimization of the light distribution of luminaires for tunnel and street lighting

    NASA Astrophysics Data System (ADS)

    Pachamanov, Angel; Pachamanova, Dessislava

    2008-01-01

    An optimization approach is discussed for the problem of designing light distributions for tunnel and street lighting luminaires that satisfy luminance-based and glare-based requirements set by the International Commission on Illumination (CIE) and the European Committee for Standardization (CEN) while consuming minimal power. The problem is formulated as a linear optimization problem that incorporates the geometrical parameters of the lighting installation and the reflective properties of the road surface. A polynomial representation of the light intensities is used in order to construct smooth light distribution curves, so that the luminaires can be manufactured with existing technology. Computational experiments indicate that optimization models can substantially improve the lighting parameters of luminaires and make lighting installations more energy-efficient.
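
    A linear program of the general kind described above can be set up in a few lines; the following sketch minimizes total emitted intensity subject to minimum-luminance constraints, with a random stand-in coefficient matrix in place of the real installation geometry and road reflectance data.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      n_dirs, n_points = 12, 20                        # intensity directions, road grid points
      A = rng.uniform(0.0, 1.0, (n_points, n_dirs))    # stand-in contribution matrix
      L_min = np.full(n_points, 1.0)                   # required luminance at each point

      # linprog enforces A_ub @ I <= b_ub, so A @ I >= L_min becomes -A @ I <= -L_min.
      res = linprog(c=np.ones(n_dirs), A_ub=-A, b_ub=-L_min,
                    bounds=[(0, None)] * n_dirs)
      print(res.x.round(3), res.fun)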

  14. Multiobjective sensitivity analysis and optimization of a distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-03-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters, and with optimization problems whose multiple objectives often conflict. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the distributed hydrologic model MOBIDIC, combining two sensitivity analysis techniques (the Morris method and the State Dependent Parameter method) with the multiobjective optimization (MOO) approach ϵ-NSGAII. This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error of logarithmically transformed discharge, a water balance index, and the mean absolute error of the logarithmically transformed flow duration curve. The results were compared with those of a single-objective optimization (SOO) using the traditional Nelder-Mead simplex algorithm employed in MOBIDIC, taking the objective function as the Euclidean norm of the three objectives. Results show: (1) the two sensitivity analysis techniques are effective and efficient in determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO the average Nash-Sutcliffe efficiency is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance to the watershed water balance, while the contribution of baseflow can be ignored; (4) compared to SOO, which depends on the initial starting location, MOO provides more insight into parameter sensitivity and the conflicting character of these objective functions. Multiobjective sensitivity analysis and optimization
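
    The scalarization used for the SOO comparison (the Euclidean norm of the three objectives, minimized with Nelder-Mead) is easy to sketch; the component objectives below are placeholders, not the MOBIDIC criteria.

      import numpy as np
      from scipy.optimize import minimize

      def objectives(params):
          """Stand-ins for the three calibration criteria."""
          f1 = (params[0] - 0.3) ** 2          # discharge-error proxy
          f2 = abs(params[1] - 0.7)            # water-balance-index proxy
          f3 = (params[0] * params[1]) ** 2    # flow-duration-curve-error proxy
          return np.array([f1, f2, f3])

      # Single-objective scalarization: minimize the Euclidean norm of the
      # objective vector with the Nelder-Mead simplex method.
      result = minimize(lambda p: np.linalg.norm(objectives(p)),
                        x0=[0.5, 0.5], method="Nelder-Mead")
      print(result.x, result.fun)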

  15. Multi-resolution imaging with an optimized number and distribution of sampling points.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo

    2014-05-01

    We propose an approach, of interest in imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and of their spatial distribution, in the case of single-frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region as a finite-dimensional algebraic linear inverse problem. The dimension of the scanning region and the number and positions of the sampling points are determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can all be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via numerical and experimental analysis. PMID:24921717
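
    The singular-value criterion lends itself to a compact numerical experiment; in the sketch below a toy one-dimensional Fourier-type kernel stands in for the actual imaging operator, and two candidate sampling grids are compared by the number of significant singular values they retain.

      import numpy as np

      def operator_matrix(samples, n_unknowns=16, k=2 * np.pi):
          """Toy measurement kernel mapping reflectivity to field samples."""
          x = np.linspace(-0.5, 0.5, n_unknowns)       # reflectivity grid
          return np.exp(-1j * k * np.outer(samples, x))

      def significant_svals(A, rel_tol=1e-2):
          s = np.linalg.svd(A, compute_uv=False)
          return int(np.sum(s > rel_tol * s[0]))

      uniform = np.linspace(-1.0, 1.0, 12)
      clustered = np.tanh(np.linspace(-2.0, 2.0, 12))  # denser near the centre
      for name, pts in (("uniform", uniform), ("clustered", clustered)):
          print(name, significant_svals(operator_matrix(pts)))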

  16. Principled negotiation and distributed optimization for advanced air traffic management

    NASA Astrophysics Data System (ADS)

    Wangermann, John Paul

    Today's aircraft/airspace system faces complex challenges. Congestion and delays are widespread as air traffic continues to grow. Airlines want to better optimize their operations, and general aviation wants easier access to the system. Additionally, the accident rate must decline just to keep the number of accidents each year constant. New technology provides an opportunity to rethink the air traffic management process. Faster computers, new sensors, and high-bandwidth communications can be used to create new operating models. The choice is no longer between "inflexible" strategic separation assurance and "flexible" tactical conflict resolution. With suitable operating procedures, it is possible to have strategic, four-dimensional separation assurance that is flexible and allows system users maximum freedom to optimize operations. This thesis describes an operating model based on principled negotiation between agents. Many multi-agent systems have agents with different, competing interests but a shared interest in coordinating their actions. Principled negotiation is a method of finding agreement between agents with different interests. By focusing on fundamental interests and searching for options for mutual gain, agents with different interests reach agreements that provide benefits for both sides. Using principled negotiation, distributed optimization by each agent can be coordinated, leading to iterative optimization of the system. Principled negotiation is well-suited to aircraft/airspace systems. It allows aircraft and operators to propose changes to air traffic control. Air traffic managers check that a proposal maintains the required aircraft separation. If it does, the proposal is either accepted or passed for approval to the agents whose trajectories would change as part of it. Aircraft and operators can use all the data at hand to develop proposals that optimize their operations, while traffic managers can focus on their primary duty of ensuring

  17. Optimal design of light distribution of LED luminaires for road lighting

    NASA Astrophysics Data System (ADS)

    Lai, Wei; Chen, Weimin; Liu, Xianming; Lei, Xiaohua

    2011-10-01

    Conventional road lighting luminaires are gradually being replaced by LED luminaires. It is an urgent problem to design light distributions for LED luminaires mounted at the existing luminaire positions that are both energy-saving and capable of meeting the lighting requirements set by the International Commission on Illumination (CIE). In this paper, a nonlinear optimization approach is proposed for the light distribution design of LED road lighting luminaires, in which the average road surface luminance, overall road surface luminance uniformity, longitudinal road surface luminance uniformity, glare and surround ratio specified by CIE are set as constraint conditions and the total luminous flux is minimized. The nonlinear problem can be transformed into a linear problem by applying rational equivalent transformations to the constraint conditions. A polynomial of cosine functions for the illumination distribution on the road is used to make the problem solvable and to construct smooth light distribution curves. Taking a C2 class road with the five lighting classes M1 to M5 defined by CIE as an example, the most energy-saving light distributions are obtained with the above method. Compared with a sample luminaire produced by a linear optimization method, the LED luminaire with the theoretically optimal light distribution in this paper can save at least 40% of the energy.

  18. A simple approach for predicting time-optimal slew capability

    NASA Astrophysics Data System (ADS)

    King, Jeffery T.; Karpenko, Mark

    2016-03-01

    The productivity of space-based imaging satellite sensors to collect images is directly related to the agility of the spacecraft. Increasing the satellite agility, without changing the attitude control hardware, can be accomplished by using optimal control to design shortest-time maneuvers. The performance improvement that can be obtained using optimal control is tied to the specific configuration of the satellite, e.g. mass properties and reaction wheel array geometry. Therefore, it is generally difficult to predict performance without an extensive simulation study. This paper presents a simple idea for estimating the agility enhancement that can be obtained using optimal control without the need to solve any optimal control problems. The approach is based on the concept of the agility envelope, which expresses the capability of a spacecraft in terms of a three-dimensional agility volume. Validation of this new approach is conducted using both simulation and on-orbit data.

  19. Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals

    PubMed Central

    2016-01-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081

  20. Departures from optimality when pursuing multiple approach or avoidance goals.

    PubMed

    Ballard, Timothy; Yeo, Gillian; Neal, Andrew; Farrell, Simon

    2016-07-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
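
    Dynamic programming of the kind used to operationalize optimality here can be illustrated with a toy two-goal pursuit model; all transition probabilities and targets below are invented for illustration, not taken from the study.

      import functools

      P_STEP, TARGET = 0.6, 3   # chance a prioritized goal advances; progress target

      @functools.lru_cache(maxsize=None)
      def expected_value(a, b, steps_left):
          """Expected number of goals attained under the optimal priority policy."""
          if steps_left == 0:
              return (a >= TARGET) + (b >= TARGET)
          values = []
          for da, db in ((1, 0), (0, 1)):       # prioritize goal A or goal B
              hit = expected_value(a + da, b + db, steps_left - 1)
              miss = expected_value(a, b, steps_left - 1)
              values.append(P_STEP * hit + (1 - P_STEP) * miss)
          return max(values)

      print(expected_value(0, 0, 6))   # value of the optimal policy from the start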

  1. Distributed computer system enhances productivity for SRB joint optimization

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1987-01-01

    Initial calculations for a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design carried a weight penalty. Optimization techniques were applied to determine whether there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed from existing software that coupled structural analysis with optimization and executed on a network of computer workstations. To reduce turnaround time, this system took advantage of the parallelism offered by the finite difference technique for computing gradients, allowing several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the time to complete one optimization cycle from two hours to half an hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires a one-hour turnaround per optimization cycle; the sequential system would take four hours.
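
    The parallelism exploited here, one finite-difference perturbation per worker, is straightforward to reproduce; the sketch below uses Python's process pool in place of networked workstations, with a cheap stand-in for the expensive structural analysis.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def analysis(x):
          """Stand-in for one expensive structural analysis run."""
          return float(np.sum(x ** 2) + np.prod(np.cos(x)))

      def fd_gradient_parallel(f, x, h=1e-6, workers=4):
          """Forward-difference gradient; the n+1 analyses run concurrently."""
          points = [x.copy() for _ in range(len(x) + 1)]
          for i in range(len(x)):
              points[i + 1][i] += h
          with ProcessPoolExecutor(max_workers=workers) as pool:
              vals = list(pool.map(f, points))
          return np.array([(v - vals[0]) / h for v in vals[1:]])

      if __name__ == "__main__":
          print(fd_gradient_parallel(analysis, np.array([0.5, 1.0, -0.3])))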

  2. Combining Bayesian classifiers and estimation of distribution algorithms for optimization in continuous domains

    NASA Astrophysics Data System (ADS)

    Miquelez, Teresa; Bengoetxea, Endika; Mendiburu, Alexander; Larranaga, Pedro

    2007-12-01

    This paper introduces an evolutionary computation method that applies Bayesian classifiers to optimization problems. The approach is based on Estimation of Distribution Algorithms (EDAs), in which Bayesian or Gaussian networks are applied to the evolution of a population of individuals (i.e., potential solutions to the optimization problem) in order to improve the quality of the individuals of the next generation. Our new approach, called the Evolutionary Bayesian Classifier-based Optimization Algorithm (EBCOA), employs Bayesian classifiers instead of Bayesian or Gaussian networks in order to evolve individuals towards a fitter population. In brief, EBCOAs are characterized by applying Bayesian classification techniques, usually applied to supervised classification problems, to optimization in continuous domains. We propose and review in this paper different Bayesian classifiers for implementing our EBCOA method, focusing particularly on EBCOAs applying naive Bayes, semi-naive Bayes, and tree-augmented naive Bayes classifiers. This work presents a detailed study of the behavior of these algorithms on classical optimization problems in continuous domains. The different parameters used for tuning the performance of the algorithms are discussed, and a comprehensive overview of their influence is provided. We also present experimental results comparing this new method with other state-of-the-art approaches in the evolutionary computation field for continuous domains, such as Evolution Strategies (ES) and Estimation of Distribution Algorithms (EDAs).
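
    For context, the EDA family that EBCOAs build on can be reduced to a few lines: fit a distribution to the best individuals and sample the next generation from it. The sketch below uses a simple diagonal Gaussian model, not the Bayesian-classifier variant the paper proposes.

      import numpy as np

      def eda_minimize(f, dim, pop=60, elite=0.3, gens=50, seed=0):
          """Minimal continuous EDA with a diagonal Gaussian model."""
          rng = np.random.default_rng(seed)
          mu, sigma = np.zeros(dim), np.ones(dim)
          for _ in range(gens):
              X = rng.normal(mu, sigma, size=(pop, dim))        # sample population
              scores = np.apply_along_axis(f, 1, X)
              best = X[np.argsort(scores)[: int(elite * pop)]]  # select elite
              mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-9  # refit model
          return mu

      sphere = lambda x: float(np.sum(x ** 2))
      print(eda_minimize(sphere, dim=5))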

  3. Optimization of convective fin systems: a holistic approach

    NASA Astrophysics Data System (ADS)

    Sasikumar, M.; Balaji, C.

    A numerical analysis of natural convection heat transfer and entropy generation from an array of vertical fins, standing on a horizontal duct with turbulent fluid flow inside, has been carried out. The analysis takes into account the variation of base temperature along the duct, traditionally ignored by most studies of such problems. The one-dimensional fin equation is solved using a second-order finite difference scheme for each of the fins in the system, and this, in conjunction with turbulent flow correlations for the duct, is used to obtain the temperature distribution along the duct. The influence of the geometric and thermal parameters normally employed in the design of such a thermal system has been studied. Correlations are developed for (i) the total heat transfer rate per unit mass of the fin system, (ii) the total entropy generation rate and (iii) the fin height, as functions of the geometric parameters of the fin system. Optimal dimensions of the fin system for (i) maximum heat transfer rate per unit mass and (ii) minimum total entropy generation rate are obtained using a genetic algorithm. As expected, these optima do not match. An approach to a `holistic' design that takes both criteria into account is also presented.

  4. Metabolic Adaptation Processes That Converge to Optimal Biomass Flux Distributions

    PubMed Central

    Altafini, Claudio; Facchetti, Giuseppe

    2015-01-01

    In simple organisms like E. coli, the metabolic response to an external perturbation passes through a transient phase in which the activation of a number of latent pathways can guarantee survival at the expense of growth. Growth is gradually recovered as the organism adapts to the new condition. This adaptation can be modeled as a process of repeated metabolic adjustments obtained through the re-silencing of non-essential metabolic reactions, using growth rate as the selection probability for the phenotypes obtained. The resulting metabolic adaptation process naturally tends to steer the metabolic fluxes towards high-growth phenotypes. Quite remarkably, when applied to the central carbon metabolism of E. coli, it follows that nearly all flux distributions converge to the flux vector representing optimal growth, i.e., the solution of the biomass optimization problem turns out to be the dominant attractor of the metabolic adaptation process. PMID:26340476

  5. Optimal purchasing of raw materials: A data-driven approach

    SciTech Connect

    Muteki, K.; MacGregor, J.F.

    2008-06-15

    An approach to the optimal purchasing of raw materials that will achieve a desired product quality at minimum cost is presented. A PLS (Partial Least Squares) approach to formulation modeling is used to combine databases on raw material properties and on past process operations, and to relate these to final product quality. These PLS latent variable models are then used in a sequential quadratic programming (SQP) or mixed-integer nonlinear programming (MINLP) optimization to select the raw materials, among all those available on the market, the ratios in which to combine them, and the process conditions under which they should be processed. The approach is illustrated for the optimal purchasing of metallurgical coals for coke making in the steel industry.

  6. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing edge flap position to achieve a design lift coefficient for a three-element airfoil.

  7. Mathematical optimization of matter distribution for a planetary system configuration

    NASA Astrophysics Data System (ADS)

    Morozov, Yegor; Bukhtoyarov, Mikhail

    2016-07-01

    Planetary formation is mostly a random process. When humanity reaches the point at which it can transform planetary systems for the purpose of interstellar life expansion, the optimal distribution of matter in a planetary system will determine its population and expansive potential. Maximization of the planetary system's carrying capacity and of its potential for interstellar life expansion depends on planetary sizes, orbits, rotation, chemical composition and other vital parameters. The distribution of planetesimals needed to achieve the maximal carrying capacity of the planets during their life cycle, and the maximal potential to inhabit other planetary systems, must be calculated comprehensively. Moving much material from one planetary system to another is uneconomic because of the high amounts of energy and time required. Terraforming particular planets before the whole planetary system is configured might drastically decrease the potential habitability of the whole system. Thus a planetary system is the basic unit for calculations to sustain a maximal overall population and to expand further. The mathematical model of optimization of matter distribution for a planetary system configuration includes the input observed parameters: a map of the material orbiting in the planetary system, with the orbit, mass, size and chemical composition of each object, and the optimized output parameters. The optimized output parameters are the sizes, masses and number of planets, their chemical composition, and the masses of the satellites required to produce tidal forces. Magnetic fields and planetary rotations are also crucial, but they will be considered in further versions of this model. The optimization criterion is the maximal carrying capacity plus the maximal expansive potential of the planetary system. Maximal carrying capacity means the availability of essential life ingredients on the planetary surface, and maximal expansive potential means the availability of uranium and metals to build

  8. Optimal multi-stage planning of power distribution systems

    SciTech Connect

    Gonen, T.; Ramirez-Rosado, I.J.

    1987-04-01

    This paper presents a completely dynamic mixed-integer model to solve the optimal sizing, timing, and location of distribution substation and feeder expansion problems simultaneously. The objective function of the model represents the present worth of the costs of investment, energy, and demand losses of the system over the planning time horizon. It is minimized subject to Kirchhoff's current law, power capacity limits, and logical constraints, using a standard mathematical programming system. The developed model allows the explicit constraints of radiality and voltage drop to be included in its formulation.

  9. The reproductive value in distributed optimal control models.

    PubMed

    Wrzaczek, Stefan; Kuhn, Michael; Prskawetz, Alexia; Feichtinger, Gustav

    2010-05-01

    We show that in a large class of distributed optimal control models (DOCM), where population is described by a McKendrick type equation with an endogenous number of newborns, the reproductive value of Fisher shows up as part of the shadow price of the population. Depending on the objective function, the reproductive value may be negative. Moreover, we show results of the reproductive value for changing vital rates. To motivate and demonstrate the general framework, we provide examples in health economics, epidemiology, and population biology. PMID:20096297

  10. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method for tuning fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, the approach allows design constraints to be imposed during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; the minimized objective function corresponds to optimal performance of the system. A spacecraft-mounted science instrument line-of-sight pointing control is used to demonstrate the results.
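
    A minimal version of this tuning loop is easy to sketch: parameterize the membership functions, simulate the closed loop, and hand the resulting cost to a numerical optimizer. Everything below (the one-input controller, the first-order plant, the cost) is an illustrative assumption, not the paper's spacecraft model.

      import numpy as np
      from scipy.optimize import minimize

      def tri(x, a, b, c):
          """Triangular membership function with feet a, c and peak b."""
          return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

      def controller(error, widths):
          """Two-rule fuzzy controller; the membership widths are the design vector."""
          w_neg, w_pos = widths
          m_neg = tri(error, -2 * w_neg, -w_neg, 0.0)   # "error is negative"
          m_pos = tri(error, 0.0, w_pos, 2 * w_pos)     # "error is positive"
          # Weighted-average defuzzification; the consequents oppose the error.
          return (m_neg * 1.0 + m_pos * (-1.0)) / (m_neg + m_pos + 1e-12)

      def cost(widths):
          """Quadratic tracking cost of a first-order plant under the controller."""
          x, J = 1.0, 0.0
          for _ in range(200):
              u = controller(x, np.abs(widths) + 1e-3)
              x += 0.05 * (-x + u)            # Euler step of dx/dt = -x + u
              J += x ** 2
          return J

      print(minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead").x)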

  11. Universal Approach to Optimal Photon Storage in Atomic Media

    SciTech Connect

    Gorshkov, Alexey V.; Andre, Axel; Lukin, Mikhail D.; Fleischhauer, Michael; Soerensen, Anders S.

    2007-03-23

    We present a universal physical picture for describing storage and retrieval of photon wave packets in a Λ-type atomic medium. This physical picture encompasses a variety of different approaches to pulse storage ranging from adiabatic reduction of the photon group velocity and pulse-propagation control via off-resonant Raman fields to photon-echo-based techniques. Furthermore, we derive an optimal control strategy for storage and retrieval of a photon wave packet of any given shape. All these approaches, when optimized, yield identical maximum efficiencies, which only depend on the optical depth of the medium.

  12. Optimal Mass Distribution Prediction for Human Proximal Femur with Bi-modulus Property.

    PubMed

    Shi, Jiao; Cai, Kun; Qin, Qing H

    2014-12-01

    Simulation of the mass distribution in a human proximal femur is important for providing a reasonable therapy scheme for a patient with osteoporosis. An algorithm is developed for the prediction of the optimal mass distribution in a human proximal femur under a given loading environment. In this algorithm, the bone material is assumed to be bi-modulus, i.e., the tension modulus is not identical to the compression modulus in the same direction. With this bi-modulus bone material, a topology optimization method, a modified SIMP approach, is employed to determine the optimal mass distribution in the proximal femur. The effects of the difference between the two moduli on the final material distribution are numerically investigated. The numerical results show that the mass distribution in bi-modulus bone material differs from that in a traditional isotropic material. As the tension modulus is lower than the compression modulus for bone tissues, the amount of mass required to support tension loads is greater than that required by an isotropic material for the same daily activities, including one-leg stance, abduction and adduction. PMID:26336694

  13. Pressure distribution based optimization of phase-coded acoustical vortices

    SciTech Connect

    Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong

    2014-02-28

    Based on the acoustic radiation of point sources, the physical mechanism of phase-coded acoustical vortices is investigated, with derivations of the formulae for acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results show that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and that lower fluctuations of the circular pressure distribution can be produced with more sources. With increasing source frequency, the acoustic pressure of acoustical vortices increases accordingly with decreased vortex radius. Meanwhile, an increased vortex radius with reduced acoustic pressure is obtained for longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured, and they show good agreement with the results of the numerical simulations. These favorable results for the acoustic pressure distributions provide a theoretical basis for further studies of acoustical vortices.
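
    The point-source superposition underlying such simulations fits in a few lines; the sketch below sums six phase-coded monopoles on a ring and samples the circular pressure distribution at a fixed axial distance. Geometry, wavelength and amplitudes are illustrative assumptions.

      import numpy as np

      def vortex_pressure(x, y, z, n_src=6, radius=0.05, wavelength=0.017, charge=1):
          """Complex pressure at (x, y, z) from n_src phase-coded point sources."""
          k = 2 * np.pi / wavelength
          phi = 2 * np.pi * np.arange(n_src) / n_src     # source angles on the ring
          sx, sy = radius * np.cos(phi), radius * np.sin(phi)
          r = np.sqrt((x - sx) ** 2 + (y - sy) ** 2 + z ** 2)
          return np.sum(np.exp(1j * (k * r + charge * phi)) / r)

      # Circular pressure distribution on a small ring at axial distance z = 0.1 m.
      for t in np.linspace(0, 2 * np.pi, 8, endpoint=False):
          p = vortex_pressure(0.01 * np.cos(t), 0.01 * np.sin(t), 0.1)
          print(round(abs(p), 4))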

  14. Optimal pattern distributions in Rete-based production systems

    NASA Technical Reports Server (NTRS)

    Scott, Stephen L.

    1994-01-01

    Since its introduction to the AI community in the early 1980s, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus, while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. The methods used to explore the pattern ordering problem are described, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.

  15. Optimal Flight for Ground Noise Reduction in Helicopter Landing Approach: Optimal Altitude and Velocity Control

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi; Ishii, Hirokazu; Uchida, Junichi; Gomi, Hiromi; Matayoshi, Naoki; Okuno, Yoshinori

    This study aims to obtain, with an optimization technique, optimal helicopter flights that reduce ground noise during landing approach, and to conduct flight tests to confirm the effectiveness of the optimal solutions. Past experiments by the Japan Aerospace Exploration Agency (JAXA) show that the noise of a helicopter varies significantly with its flight conditions, depending especially on the flight path angle. We therefore build a simple noise model for a helicopter, in which the level of the noise generated from a point sound source is a function only of the flight path angle. Using equations of motion for flight in a vertical plane, we define optimal control problems that minimize noise levels measured at points on the ground surface, and obtain optimal controls for specified initial altitudes, flight constraints, and wind conditions. The obtained optimal flights avoid the flight path angles that generate large noise and decrease the flight time, in contrast to conventional flight. Finally, we verify the validity of the optimal flight patterns through flight experiments. Actual flights following the optimal paths resulted in noise reduction, which shows the effectiveness of the optimization.

  16. The optimality of potential rescaling approaches in land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    It is well-known that systematic differences exist between modeled and observed realizations of hydrological variables like soil moisture. Prior to data assimilation, these differences must be removed in order to obtain an optimal analysis. A number of rescaling approaches have been proposed for rem...

  17. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    The permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems in product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use across a network, to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, the utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD system CATIA is used, which is coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes, and with the problem solving environment OpTiX for distributed optimization.

  18. Successive linear optimization approach to the dynamic traffic assignment problem

    SciTech Connect

    Ho, J.K.

    1980-11-01

    A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality is presented which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon. Computational results are reported that indicate the efficiency of this approach.
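
    Successive linear optimization of this general kind is compact to express; the sketch below is a generic successive-linear-programming loop (linearize, solve an LP in a shrinking trust region, repeat), with a toy gradient standing in for the traffic-flow model.

      import numpy as np
      from scipy.optimize import linprog

      def slp_minimize(f_grad, x0, bounds, iters=25, trust=0.5):
          """Generic SLP loop: minimize the local linearization within a box."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              g = f_grad(x)
              lo = [max(b[0], xi - trust) for b, xi in zip(bounds, x)]
              hi = [min(b[1], xi + trust) for b, xi in zip(bounds, x)]
              res = linprog(c=g, bounds=list(zip(lo, hi)))
              if not res.success:
                  break
              x, trust = res.x, 0.7 * trust    # accept step, shrink region
          return x

      grad = lambda x: np.array([2 * (x[0] - 1.0), 2 * (x[1] + 0.5)])  # toy gradient
      print(slp_minimize(grad, x0=[0.0, 0.0], bounds=[(-2, 2), (-2, 2)]))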

  19. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution; it efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  20. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    SciTech Connect

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-15

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows

  1. An optimal torque distribution control strategy for four-independent wheel drive electric vehicles

    NASA Astrophysics Data System (ADS)

    Li, Bin; Goodarzi, Avesta; Khajepour, Amir; Chen, Shih-ken; Litkouhi, Baktiar

    2015-08-01

    In this paper, an optimal torque distribution approach is proposed for electric vehicles equipped with four independent wheel motors, to improve vehicle handling and stability performance. A novel objective function is formulated which works in a multifunctional way by considering the interplay among different performance indices: force and moment errors at the centre of gravity of the vehicle, actuator control efforts and tyre workload usage. To adapt to different driving conditions, a weighting factor tuning scheme is designed to adjust the relative weight of each performance index in the objective function. The effectiveness of the proposed optimal torque distribution is evaluated by simulations with CarSim and Matlab/Simulink. The simulation results under different driving scenarios indicate that the proposed control strategy can effectively improve vehicle handling and stability even in slippery road conditions.

  2. AI approach to optimal var control with fuzzy reactive loads

    SciTech Connect

    Abdul-Rahman, K.H.; Shahidehpour, S.M.; Daneshdoost, M.

    1995-02-01

    This paper presents an artificial intelligence (AI) approach to the optimal reactive power (var) control problem. The method incorporates reactive load uncertainty in optimizing overall system performance. An artificial neural network (ANN) enhanced by fuzzy sets is used to determine the memberships of control variables corresponding to the given load values. A power flow solution determines the corresponding state of the system. Since the resulting system state may not be feasible in real time, a heuristic method based on the application of sensitivities in an expert system is employed to refine the solution with minimum adjustment of control variables. Test cases and numerical results demonstrate the applicability of the proposed approach. Simplicity, processing speed and the ability to model load uncertainties make this approach a viable option for on-line var control.

  3. Analytic characterization of linear accelerator radiosurgery dose distributions for fast optimization

    NASA Astrophysics Data System (ADS)

    Meeks, Sanford L.; Bova, Frank J.; Buatti, John M.; Friedman, William A.; Eyster, Brian; Kendrick, Lance A.

    1999-11-01

    Linear accelerator (linac) radiosurgery utilizes non-coplanar arc therapy delivered through circular collimators. Generally, spherically symmetric arc sets are used, resulting in nominally spherical dose distributions. Various treatment planning parameters may be manipulated to provide dose conformation to irregular lesions. Iterative manipulation of these variables can be a difficult and time-consuming task, because (a) understanding the effect of these parameters is complicated and (b) three-dimensional (3D) dose calculations are computationally expensive. This manipulation can be simplified, however, because the prescription isodose surface for all single isocentre distributions can be approximated by conic sections. In this study, the effects of treatment planning parameter manipulation on the dimensions of the treatment isodose surface were determined empirically. These dimensions were then fitted to analytic functions, assuming that the dose distributions were characterized as conic sections. These analytic functions allowed real-time approximation of the 3D isodose surface. Iterative plan optimization, either manual or automated, is achieved more efficiently using this real time approximation of the dose matrix. Subsequent to iterative plan optimization, the analytic function is related back to the appropriate plan parameters, and the dose distribution is determined using conventional dosimetry calculations. This provides a pseudo-inverse approach to radiosurgery optimization, based solely on geometric considerations.

  4. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists of minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and nonlinear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, showing that the GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method are proposed, using a time-horizon analysis, a distance-based scoring, and the consideration of different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments on two networks allow us to evaluate the performance of the proposed approach. PMID:24193099
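
    A stripped-down version of the GA formulation is sketched below: a binarized leak-sensitivity matrix is invented at random, the fitness counts leak pairs whose signatures on the chosen sensors coincide (non-isolable pairs), and a small mutate-and-select loop searches the placements. All sizes and data are illustrative, not drawn from the paper's networks.

      import numpy as np

      rng = np.random.default_rng(1)
      N_NODES, N_SENSORS, POP, GENS = 30, 3, 40, 60

      # Hypothetical binarized sensitivity: sig[i, j] = 1 if a leak at node j
      # is visible at candidate sensor location i.
      sig = (rng.random((N_NODES, N_NODES)) < 0.25).astype(int)

      def non_isolable(sensors):
          """Count leak pairs with identical signatures on the chosen sensors."""
          patterns = np.array([sig[sensors, j] for j in range(N_NODES)])
          _, counts = np.unique(patterns, axis=0, return_counts=True)
          return int(np.sum(counts * (counts - 1) // 2))

      pop = [rng.choice(N_NODES, N_SENSORS, replace=False) for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=non_isolable)
          parents = pop[: POP // 2]
          children = []
          for p in parents:
              child = p.copy()
              child[rng.integers(N_SENSORS)] = rng.integers(N_NODES)  # mutate one gene
              if len(set(child)) < N_SENSORS:       # repair duplicate genes
                  child = rng.choice(N_NODES, N_SENSORS, replace=False)
              children.append(child)
          pop = parents + children
      print(sorted(pop[0]), non_isolable(pop[0]))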

  5. Effects of optimism on creativity under approach and avoidance motivation

    PubMed Central

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement—barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences. PMID:24616690

  6. Finite-dimensional approximation for optimal fixed-order compensation of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Rosen, I. G.

    1988-01-01

    In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.

  7. Correction of linear-array lidar intensity data using an optimal beam shaping approach

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Yuanqing; Yang, Xingyu; Zhang, Bingqing; Li, Fenfang

    2016-08-01

    The linear-array lidar has recently been developed and applied for its advantages of vertically non-scanning operation, large field of view, high sensitivity and high precision. The beam shaper is the key component for linear-array detection. However, traditional beam shaping approaches can hardly satisfy the requirement of obtaining unbiased and complete backscattered intensity data. The required beam distribution should be roughly oblate U-shaped rather than Gaussian or uniform. Thus, an optimal beam shaping approach is proposed in this paper. By employing a pair of conical lenses and a cylindrical lens behind the beam expander, the expanded Gaussian laser is shaped into a line-shaped beam whose intensity distribution is more consistent with the required distribution. To provide a better fit to the requirement, an off-axis method is adopted. The design of the optimal beam shaping module is mathematically explained, and experimental verification of the module performance is also presented. The experimental results indicate that the optimal beam shaping approach can effectively correct the intensity image and provide a ~30% gain in detection area over the traditional approach, thus improving the imaging quality of linear-array lidar.

  8. Shape Optimization and Supremal Minimization Approaches in Landslides Modeling

    SciTech Connect

    Hassani, Riad Ionescu, Ioan R. Lachand-Robert, Thomas

    2005-10-15

    The steady-state unidirectional (anti-plane) flow of a Bingham fluid is considered. We take into account the inhomogeneous yield limit of the fluid, which is well adjusted to the description of landslides. The blocking property is analyzed and we introduce the safety factor, which is connected to two optimization problems in terms of velocities and stresses. Concerning the velocity analysis, the minimum problem in BV(Ω) is equivalent to a shape-optimization problem. The optimal set is the part of the land which slides whenever the loading parameter becomes greater than the safety factor. This is proved in the one-dimensional case and conjectured for the two-dimensional flow. For the stress-optimization problem we give a stream function formulation in order to deduce a minimum problem in W^{1,∞}(Ω), and we prove the existence of a minimizer. The L^p(Ω) approximation technique is used to get a sequence of minimum problems for smooth functionals. We propose two numerical approaches following the two analyses presented above. First, we describe a numerical method to compute the safety factor through the equivalence with the shape-optimization problem. Then a finite-element approach and a Newton method are used to obtain a numerical scheme for the stress formulation. Some numerical results are given in order to compare the two methods. The shape-optimization method is sharp in detecting the sliding zones, but its convergence is very sensitive to the choice of the parameters. The stress-optimization method is more robust and gives precise safety factors, but its results cannot easily be exploited to obtain the sliding zone.

  9. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  10. Multiobjective genetic approach for optimal control of photoinduced processes

    SciTech Connect

    Bonacina, Luigi; Extermann, Jerome; Rondi, Ariana; Wolf, Jean-Pierre; Boutou, Veronique

    2007-08-15

    We have applied a multiobjective genetic algorithm to the optimization of multiphoton-excited fluorescence. Our study shows the advantages that this approach can offer to experiments based on adaptive shaping of femtosecond pulses. The algorithm outperforms single-objective optimizations, being totally independent of the bias of user-defined parameters and giving simultaneous access to a large set of feasible solutions. Global inspection of their ensemble represents a powerful support for unraveling the connections between pulse spectral field features and the excitation dynamics of the sample.

  11. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  12. Sequential activation of metabolic pathways: a dynamic optimization approach.

    PubMed

    Oyarzún, Diego A; Ingalls, Brian P; Middleton, Richard H; Kalamatianos, Dimitrios

    2009-11-01

    The regulation of cellular metabolism facilitates robust cellular operation in the face of changing external conditions. The cellular response to this varying environment may include the activation or inactivation of appropriate metabolic pathways. Experimental and numerical observations of sequential timing in pathway activation have been reported in the literature. It has been argued that such patterns can be rationalized by means of an underlying optimal metabolic design. In this paper we pose a dynamic optimization problem that accounts for time-resource minimization in pathway activation under constrained total enzyme abundance. The optimized variables are time-dependent enzyme concentrations that drive the pathway to a steady state characterized by a prescribed metabolic flux. The problem formulation addresses unbranched pathways with irreversible kinetics. Neither specific reaction kinetics nor fixed pathway length are assumed. In the optimal solution, each enzyme follows a switching profile between zero and maximum concentration, following a temporal sequence that matches the pathway topology. This result provides an analytic justification of the sequential activation previously described in the literature. In contrast with the existing numerical approaches, the activation sequence is proven to be optimal for a generic class of monomolecular kinetics. This class includes, but is not limited to, Mass Action, Michaelis-Menten, Hill, and some Power-law models. This suggests that sequential enzyme expression may be a common feature of metabolic regulation, as it is a robust property of optimal pathway activation. PMID:19412635

  13. New Results in the Quantum Statistical Approach to Parton Distributions

    NASA Astrophysics Data System (ADS)

    Soffer, Jacques; Bourrely, Claude; Buccella, Franco

    2015-02-01

    We describe the quantum statistical approach to parton distributions, which allows one to obtain simultaneously the unpolarized distributions and the helicity distributions. We present some recent results, in particular related to the nucleon spin structure in QCD. Future measurements will provide challenging tests of the validity of this novel physical framework.

  14. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    PubMed

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but during the optimization of process parameters with current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using a GA in each sub-search space to optimize the process parameters. Two different methods, i.e., the cluster-centers and the hill-and-valley splitting strategies, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The results of the case study showed that the cluster-centers and hill-and-valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces. PMID:27625978
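
    The core idea, splitting the bounds and optimizing independently in each sub-search space to recover multiple equally good parameter combinations, can be sketched directly; differential evolution stands in here for the GA, and the multimodal objective is invented for illustration.

      import numpy as np
      from scipy.optimize import differential_evolution

      def f(x):
          """Several parameter combinations reach nearly the same value."""
          return float(np.sin(3 * x[0]) ** 2 + (x[1] - 0.5) ** 2)

      # Split the first variable's range into three sub-search spaces and
      # optimize independently in each, keeping every solution found.
      splits = np.linspace(0.0, np.pi, 4)
      solutions = []
      for lo, hi in zip(splits[:-1], splits[1:]):
          res = differential_evolution(f, bounds=[(lo, hi), (0.0, 1.0)], seed=0)
          solutions.append((res.x.round(3), round(res.fun, 6)))

      for x, fx in solutions:
          print(x, fx)   # three distinct near-optimal parameter combinations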

  15. Optimal Control of Distributed Energy Resources using Model Predictive Control

    SciTech Connect

    Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen

    2012-07-22

    In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operation and control challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
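
    The receding-horizon logic can be illustrated with a toy dispatch problem: at each step a small linear program schedules a diesel generator and a battery against a net-load forecast, only the first move is applied, and the horizon rolls forward. All costs, limits, and the forecast are illustrative assumptions, and the frequency and battery-life terms of the paper's objective are omitted:

      import numpy as np
      from scipy.optimize import linprog

      H, FUEL_COST = 4, 1.0            # horizon length and diesel cost (assumed)
      B_MAX, SOC_MAX = 2.0, 5.0        # battery power and energy limits (assumed)

      def mpc_step(forecast, soc):
          # Stage variables [g_t, b_t]: diesel output and battery power (+ = discharge).
          n = 2 * H
          c = np.array([FUEL_COST, 0.0] * H)              # only diesel burns fuel
          A_eq = np.zeros((H, n)); b_eq = np.array(forecast[:H])
          A_ub = np.zeros((2 * H, n)); b_ub = np.empty(2 * H)
          for t in range(H):
              A_eq[t, 2 * t] = A_eq[t, 2 * t + 1] = 1.0   # g_t + b_t = load_t
              for k in range(t + 1):
                  A_ub[t, 2 * k + 1] = 1.0                # cumulative discharge <= soc
                  A_ub[H + t, 2 * k + 1] = -1.0           # cumulative charge <= SOC_MAX - soc
              b_ub[t], b_ub[H + t] = soc, SOC_MAX - soc
          bnds = [(0, None), (-B_MAX, B_MAX)] * H
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bnds)
          g0, b0 = res.x[0], res.x[1]
          return g0, b0, soc - b0                         # apply only the first move

      soc, load = 3.0, [1.5, 2.5, 1.0, 2.0, 1.8, 1.2, 2.2, 1.0]
      for step in range(4):
          g, b, soc = mpc_step(load[step:step + H], soc)
          print(f"step {step}: diesel={g:.2f}, battery={b:.2f}, soc={soc:.2f}")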

  16. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.

  17. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.

  18. Optimal Solar PV Arrays Integration for Distributed Generation

    SciTech Connect

    Omitaomu, Olufemi A; Li, Xueping

    2012-01-01

    Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise from the variability and non-dispatchability of PV systems, which affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on the Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints, resulting in a mixed-integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
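
    At small scale the cardinality-constrained selection can simply be enumerated, which makes the mean-variance trade-off easy to see. The rooftop means and covariance below are made-up illustrative data, with equal shares over the chosen rooftops standing in for the full mixed-integer program:

      from itertools import combinations
      import numpy as np

      mu = np.array([5.0, 4.0, 6.0, 3.5, 5.5])            # expected output per rooftop (assumed)
      A = np.array([[1.0, 0.2], [0.8, 0.4], [1.1, -0.1], [0.6, 0.7], [0.9, 0.3]])
      cov = A @ A.T + 0.1 * np.eye(5)                     # positive-definite covariance

      def subset_score(idx, lam):
          """Mean-variance utility of an equal-share portfolio over the chosen roofs."""
          w = np.zeros(5); w[list(idx)] = 1.0 / len(idx)
          return mu @ w - lam * (w @ cov @ w)

      lam = 0.5                                           # risk-aversion weight (assumed)
      best = max(combinations(range(5), 3), key=lambda idx: subset_score(idx, lam))
      print("chosen rooftops:", best, "score:", round(subset_score(best, lam), 3))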

  19. A common distributed language approach to software integration

    NASA Technical Reports Server (NTRS)

    Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.

    1989-01-01

    An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.

  20. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves greater improvement for more complicated sentiment analysis tasks. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
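
    The core PSO-over-SVM loop can be sketched compactly. The synthetic dataset below stands in for the paper's sentiment corpora, the IG feature-selection dimension is omitted, and each particle carries only the two SVM parameters (log10 C, log10 gamma); scikit-learn is assumed to be available:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=20, random_state=0)

      def fitness(p):
          # Particles live in log10 space so C and gamma can span several decades.
          return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

      rng = np.random.default_rng(0)
      pos = rng.uniform([-2, -4], [2, 0], size=(10, 2))   # 10 particles: (log C, log gamma)
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_f.argmax()]

      for _ in range(15):                                 # standard PSO velocity/position update
          r1, r2 = rng.random((2, 10, 1))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, [-2, -4], [2, 0])
          f = np.array([fitness(p) for p in pos])
          improved = f > pbest_f
          pbest[improved], pbest_f[improved] = pos[improved], f[improved]
          gbest = pbest[pbest_f.argmax()]

      print("best (C, gamma):", 10 ** gbest, "cv accuracy:", pbest_f.max())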

  1. A Global Optimization Approach to Multi-Polarity Sentiment Analysis

    PubMed Central

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves greater improvement for more complicated sentiment analysis tasks. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From

  2. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach

    PubMed Central

    Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.

    2014-01-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  3. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.

    PubMed

    Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A

    2013-02-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  4. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    Summary This paper uses semidefinite programming (SDP) to construct Bayesian optimal designs for nonlinear regression models. The setup here extends the formulation of the optimal design problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare with results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma-distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  5. A statistical approach for polarized parton distributions

    NASA Astrophysics Data System (ADS)

    Bourrely, C.; Soffer, J.; Buccella, F.

    2002-04-01

    A global next-to-leading order QCD analysis of unpolarized and polarized deep-inelastic scattering data is performed with parton distributions constructed in a statistical physical picture of the nucleon. The chiral properties of QCD lead to strong relations between quark and antiquark distributions, and the importance of the Pauli exclusion principle is also emphasized. We obtain a good description, in a broad range of x and Q^2, of all measured structure functions in terms of very few free parameters. We stress the fact that at RHIC-BNL the ratio of the unpolarized cross sections for the production of W^+ and W^- in pp collisions will directly probe the behavior of the d̄(x)/ū(x) ratio for x ≥ 0.2, a definite and important test for the statistical model. Finally, we give specific predictions for various helicity asymmetries for the W^±, Z production in pp collisions at high energies, which will be measured with forthcoming experiments at RHIC-BNL and which are sensitive tests of the statistical model for Δū(x) and Δd̄(x).

  6. RFI mitigation for SMOS: a distributed approach

    NASA Astrophysics Data System (ADS)

    Soldo, Y.; Khazaal, A.; Cabot, F.; Anterrieu, E.

    2012-04-01

    The Soil Moisture and Ocean Salinity (SMOS) satellite was launched by ESA on November 2nd, 2009. Its payload, MIRAS (Microwave Imaging Radiometer by Aperture Synthesis), is a two-dimensional L-band interferometric radiometer that measures brightness temperatures (BT) in the protected 1400-1427 MHz band. Although this band is reserved for passive measurements, numerous radio frequency interferences (RFIs) are clearly visible in SMOS data. Three main foci of interest are detection, geo-localization and mitigation of the RFI sources. This contribution presents a method that addresses detection and mitigation in a snapshot-wise sense using the L1A SMOS products and the hexagonal 256x256 grid. Localization of the sources can also be inferred. Previous studies have already pointed out the large extent of RFI impact on SMOS snapshots. Most of the RFI signal's energy is found around the source and its aliases, but it affects all points of the reconstructed BT scene. In principle it is known how a point source influences all grid points, so one way of mitigating RFIs is to obtain the precise localization of the source and have a snapshot-wise estimation of the source's temperature. Particularly tricky configurations may appear, however. For example, the BT distribution pattern of an RFI may not match that of a point source, or multiple RFIs may be so close to each other that they are hard to process independently. The algorithm defines clusters around the points with the highest BT; then, within each cluster, it simulates an RFI source in a distributed sense, i.e., it simulates RFIs at various points inside the cluster, in order to obtain a BT distribution that is as close as possible to the pattern in the measured data. This is done knowing that a source at a grid point will affect all other grid points by a known amount, which depends on the G-matrix and the apodization window, and knowing the final BT distribution we want to obtain. Also, thanks to the use of detailed

  7. The GRG approach for large-scale optimization

    SciTech Connect

    Drud, A.

    1994-12-31

    The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.

  8. Optimized probabilistic quantum processors: A unified geometric approach

    NASA Astrophysics Data System (ADS)

    Bergou, Janos; Bagan, Emilio; Feldman, Edgar

    Using probabilistic and deterministic quantum cloning and quantum state separation as illustrative examples, we develop a complete geometric solution for finding their optimal success probabilities. The method is related to the approach that we introduced earlier for the unambiguous discrimination of more than two states. In some cases the method delivers analytical results, in others it leads to intuitive and straightforward numerical solutions. We also present implementations of the schemes based on linear optics employing few-photon interferometry.

  9. Activity-Centric Approach to Distributed Programming

    NASA Technical Reports Server (NTRS)

    Levy, Renato; Satapathy, Goutam; Lang, Jun

    2004-01-01

    The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity-centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support of other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular service-based approach to coupling of the programming and service layers of software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of such services without adverse effect on the whole software framework. The activity-centric application-program interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable to any configuration of Cybele.

  10. Particle Swarm and Ant Colony Approaches in Multiobjective Optimization

    NASA Astrophysics Data System (ADS)

    Rao, S. S.

    2010-10-01

    The social behavior of groups of birds, ants, insects and fish has been used to develop evolutionary algorithms known as swarm intelligence techniques for solving optimization problems. This work presents the development of strategies for the application of two of the popular swarm intelligence techniques, namely the particle swarm and ant colony methods, for the solution of multiobjective optimization problems. In a multiobjective optimization problem, the objectives exhibit a conflicting nature and hence no design vector can minimize all the objectives simultaneously. The concept of Pareto-optimal solution is used in finding a compromise solution. A modified cooperative game theory approach, in which each objective is associated with a different player, is used in this work. The applicability and computational efficiencies of the proposed techniques are demonstrated through several illustrative examples involving unconstrained and constrained problems with single and multiple objectives and continuous and mixed design variables. The present methodologies are expected to be useful for the solution of a variety of practical continuous and mixed optimization problems involving single or multiple objectives with or without constraints.

  11. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm, comparing performance with other methods based on two models, of varying complexity, of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  12. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization through the use of available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next-generation sequencing and other high-throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  13. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of the Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
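
    The maximization condition translates directly into code. Below is a minimal sketch of a hill-climb rewiring that maximizes ω^T L ω, assuming networkx is available; the graph size, edge budget, and frequencies are illustrative, and this is not the authors' exact algorithm:

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(1)
      G = nx.connected_watts_strogatz_graph(20, 4, 0.3, seed=1)   # start from a connected graph
      omega = rng.normal(size=20)                                 # natural frequencies

      def objective(G):
          L = nx.laplacian_matrix(G).toarray()
          return omega @ L @ omega                                # quantity to maximize

      best = objective(G)
      for _ in range(2000):
          u, v = list(G.edges())[rng.integers(G.number_of_edges())]
          a, b = rng.integers(20, size=2)
          if a == b or G.has_edge(a, b):
              continue
          G.remove_edge(u, v); G.add_edge(a, b)                   # tentative rewiring
          if objective(G) > best and nx.is_connected(G):
              best = objective(G)                                 # keep the improvement
          else:
              G.remove_edge(a, b); G.add_edge(u, v)               # revert
      print("optimized omega^T L omega:", round(best, 3))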

  14. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    PubMed Central

    Jevtić, Aleksandar; Gutiérrez, Álvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce. PMID:22346677

  15. Distributed bees algorithm parameters optimization for a cost efficient target allocation in swarms of robots.

    PubMed

    Jevtić, Aleksandar; Gutiérrez, Alvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the distributed bees algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA's control parameters by means of a genetic algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots' distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce. PMID:22346677

  16. Optimal Service Distribution in WSN Service System Subject to Data Security Constraints

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-01-01

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from the user into a set of atom-services (AS) and send them to some independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of WSA because these SNs can be of different and independent specifications. By the optimal service partition into the ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm of optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. The experimental analysis is presented to demonstrate the feasibility of the suggested algorithm. PMID:25093346

  17. Optimal service distribution in WSN service system subject to data security constraints.

    PubMed

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-01-01

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from the user into a set of atom-services (AS) and send them to some independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of WSA because these SNs can be of different and independent specifications. By the optimal service partition into the ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm of optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. The experimental analysis is presented to demonstrate the feasibility of the suggested algorithm. PMID:25093346

  18. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong

    2015-04-01

    Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem for general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs, together with the singular perturbation technique, are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted into a Hamilton-Jacobi-Bellman (HJB) equation. The HJB equation is a nonlinear PDE that is in general impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online, using a neural network (NN) to approximate the value function; an online NN weight-tuning law is proposed that does not require an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of a high-speed aerospace vehicle, and the results show its effectiveness. PMID:25794375

  19. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios.

  20. Conductance Distributions for Empirical Orthogonal Function Analysis and Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Knipp, Delores; McGranaghan, Ryan; Matsuo, Tomoko

    2016-04-01

    We show the first characterizations of the primary modes of ionospheric Hall and Pedersen conductance variability as empirical orthogonal functions (EOFs). These are derived from six satellite years of Defense Meteorological Satellite Program (DMSP) particle data acquired during the rise of solar cycles 22 and 24. The 60 million DMSP spectra were each processed through the Global Airglow Model. This is the first large-scale analysis of ionospheric conductances completely free of assumptions about the incident electron energy spectra. We show that the mean patterns and first four EOFs capture ~50.1% and 52.9% of the total Pedersen and Hall conductance variabilities, respectively. The mean patterns and first EOFs are consistent with typical diffuse auroral oval structures and quiet-time strengthening/weakening of the mean pattern. The second and third EOFs show major disturbance features of magnetosphere-ionosphere (MI) interactions: geomagnetically induced auroral zone expansion in EOF2 and the auroral substorm current wedge in EOF3. The fourth EOFs suggest diminished conductance associated with ionospheric substorm recovery mode. These EOFs are then used in a new optimal interpolation (OI) technique to estimate complete high-latitude ionospheric conductance distributions. The technique combines particle precipitation-based calculations of ionospheric conductances and their errors with a background model and its error covariance (estimated by EOF analysis) to infer complete distributions of the high-latitude ionospheric conductances for a week in late 2011. The OI technique (1) captures smaller-scale ionospheric conductance features associated with discrete precipitation and (2) brings ground- and space-based data into closer agreement. We show quantitatively and qualitatively that this new technique provides better ionospheric conductance specification than past statistical models, especially during heightened geomagnetic activity.
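
    Numerically, extracting EOFs amounts to a singular value decomposition of the mean-removed data matrix. A minimal sketch, with synthetic data standing in for the DMSP-derived conductance maps:

      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.normal(size=(500, 300))       # 500 samples x 300 grid points (synthetic)
      anom = data - data.mean(axis=0)          # remove the mean pattern

      # Rows of vt are the EOFs (spatial modes); s**2 gives the variance per mode.
      u, s, vt = np.linalg.svd(anom, full_matrices=False)
      var_frac = s ** 2 / np.sum(s ** 2)
      print("variance captured by first four EOFs:", var_frac[:4].sum())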

  1. Simultaneous optimization of dose distributions and fractionation schemes in particle radiotherapy

    SciTech Connect

    Unkelbach, Jan; Zeng, Chuan; Engelsman, Martijn

    2013-09-15

    Purpose: The paper considers the fractionation problem in intensity modulated proton therapy (IMPT). Conventionally, IMPT fields are optimized independently of the fractionation scheme. In this work, we discuss the simultaneous optimization of fractionation scheme and pencil beam intensities. Methods: This is performed by allowing for distinct pencil beam intensities in each fraction, which are optimized using objective and constraint functions based on biologically equivalent dose (BED). The paper presents a model that mimics an IMPT treatment with a single incident beam direction for which the optimal fractionation scheme can be determined despite the nonconvexity of the BED-based treatment planning problem. Results: For this model, it is shown that a small α/β ratio in the tumor gives rise to a hypofractionated treatment, whereas a large α/β ratio gives rise to hyperfractionation. It is further demonstrated that, for intermediate α/β ratios in the tumor, a nonuniform fractionation scheme emerges, in which it is optimal to deliver different dose distributions in subsequent fractions. The intuitive explanation for this phenomenon is as follows: By varying the dose distribution in the tumor between fractions, the same total BED can be achieved with a lower physical dose. If it is possible to achieve this dose variation in the tumor without varying the dose in the normal tissue (which would have an adverse effect), the reduction in physical dose may lead to a net reduction of the normal tissue BED. For proton therapy, this is indeed possible to some degree because the entrance dose is mostly independent of the range of the proton pencil beam. Conclusions: The paper provides conceptual insight into the interdependence of optimal fractionation schemes and the spatial optimization of dose distributions. It demonstrates the emergence of nonuniform fractionation schemes that arise from the standard BED model when IMPT fields and fractionation scheme are optimized.
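
    For reference, the "standard BED model" invoked above is the linear-quadratic biologically equivalent dose; for a course of n fractions with physical doses d_1, ..., d_n it takes the textbook form (this is the generic formula, not the paper's full planning objective):

      \mathrm{BED} = \sum_{i=1}^{n} d_i \left( 1 + \frac{d_i}{\alpha/\beta} \right),
      \qquad \text{which for a uniform scheme } d_i = d \text{ reduces to }
      \mathrm{BED} = n\,d \left( 1 + \frac{d}{\alpha/\beta} \right).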

  2. A SAND approach based on cellular computation models for analysis and optimization

    NASA Astrophysics Data System (ADS)

    Canyurt, O. E.; Hajela, P.

    2007-06-01

    Genetic algorithms (GAs) have received considerable recent attention in problems of design optimization. The mechanics of population-based search in GAs are highly amenable to implementation on parallel computers. The present article describes a fine-grained model of parallel GA implementation that derives from a cellular-automata-like computation. The central idea behind the cellular genetic algorithm (CGA) approach is to treat the GA population as being distributed over a 2-D grid of cells, with each member of the population occupying a particular cell and defining the state of that cell. Evolution of the cell state is tantamount to updating the design information contained in a cell site and, as in cellular automata computations, takes place on the basis of local interaction with neighbouring cells. A special focus of the article is the use of cellular automata (CA)-based models for structural analysis in conjunction with the CGA approach to optimization. In such an approach, the analysis and optimization are evolved simultaneously in a unified cellular computational framework. The article describes the implementation of this approach and examines its efficiency in the context of representative structural optimization problems.
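
    A minimal sketch of the cellular GA structure, assuming a wrap-around 2-D grid, von Neumann neighbourhoods, and OneMax as a stand-in fitness for the paper's structural-analysis objective:

      import numpy as np

      rng = np.random.default_rng(0)
      SIDE, NBITS = 8, 24
      pop = rng.integers(0, 2, size=(SIDE, SIDE, NBITS))     # one bitstring per grid cell

      fitness = lambda ind: ind.sum()                        # OneMax (illustrative)

      for _ in range(60):
          new = pop.copy()
          for i in range(SIDE):
              for j in range(SIDE):
                  # Best von Neumann neighbour on the wrap-around grid.
                  nbrs = [pop[(i - 1) % SIDE, j], pop[(i + 1) % SIDE, j],
                          pop[i, (j - 1) % SIDE], pop[i, (j + 1) % SIDE]]
                  mate = max(nbrs, key=fitness)
                  cut = rng.integers(1, NBITS)               # one-point crossover
                  child = np.concatenate([pop[i, j, :cut], mate[cut:]])
                  flip = rng.random(NBITS) < 0.01            # light mutation
                  child = np.where(flip, 1 - child, child)
                  if fitness(child) >= fitness(pop[i, j]):   # local replacement
                      new[i, j] = child
          pop = new

      best = max(fitness(pop[i, j]) for i in range(SIDE) for j in range(SIDE))
      print("best fitness on grid:", best, "of", NBITS)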

  3. Optimal source distribution for binaural synthesis over loudspeakers

    NASA Astrophysics Data System (ADS)

    Takeuchi, Takashi; Nelson, Philip A.

    2002-12-01

    When binaural sound signals are presented with loudspeakers, the system inversion involved gives rise to a number of problems such as a loss of dynamic range and a lack of robustness to small errors and room reflections. The amplification required by the system inversion results in loss of dynamic range. The control performance of such a system deteriorates severely due to small errors resulting from, e.g., misalignment of the system and individual differences in the head-related transfer functions at certain frequencies. The large sound radiation required results in severe reflections, which also reduce the control performance. A method of overcoming these fundamental problems is proposed in this paper. A conceptual monopole transducer is introduced whose position varies continuously as frequency varies. This gives a minimum processing requirement on the binaural signals for the control to be achieved, and all the above problems either disappear or are minimized. The inverse filters have flat amplitude response and the reproduced sound is not colored even outside the relatively large "sweet area." A number of practical solutions are suggested for the realization of such optimally distributed transducers. One of them is a discretization that enables the use of conventional transducer units.

  4. Optimal eavesdropping on quantum key distribution without quantum memory

    NASA Astrophysics Data System (ADS)

    Bocquet, Aurélien; Alléaume, Romain; Leverrier, Anthony

    2012-01-01

    We consider the security of the BB84 (Bennett and Brassard 1984 Proc. IEEE Int. Conf. on Computers, Systems, and Signal Processing pp 175-9), six-state (Bruß 1998 Phys. Rev. Lett. http://dx.doi.org/10.1103/PhysRevLett.81.3018) and SARG04 (Scarani et al 2004 Phys. Rev. Lett. http://dx.doi.org/10.1103/PhysRevLett.92.057901) quantum key distribution protocols when the eavesdropper does not have access to a quantum memory. In this case, Eve’s most general strategy is to measure her ancilla with an appropriate positive operator-valued measure designed to take advantage of the post-measurement information that will be released during the sifting phase of the protocol. After an optimization on all the parameters accessible to Eve, our method provides us with new bounds for the security of six-state and SARG04 against a memoryless adversary. In particular, for the six-state protocol we show that the maximum quantum bit error ratio for which a secure key can be extracted is increased from 12.6% (for collective attacks) to 20.4% with the memoryless assumption.

  5. Portfolio optimization in enhanced index tracking with goal programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio to maximize the mean return and minimize the risk. The objective of this paper is to determine the portfolio composition and performance using the goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle the two competing goals in enhanced index tracking: maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, through a higher mean return and lower risk without purchasing all the stocks in the index.

  6. General approach and scope. [rotor blade design optimization]

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.

  7. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes work on optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust while constraining the aerodynamic work during flapping flight, by changing the airfoil shape and motion trajectory with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the resolution must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For the design optimization with shape parameters or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of design variables to the flapping motion at on- and off-design conditions.

  8. Robust optimization for water distribution systems least cost design

    NASA Astrophysics Data System (ADS)

    Perelman, Lina; Housh, Mashor; Ostfeld, Avi

    2013-10-01

    The objective of the least cost design problem of a water distribution system is to find its minimum cost with discrete diameters as decision variables and hydraulic controls as constraints. The goal of a robust least cost design is to find solutions which guarantee its feasibility independent of the data (i.e., under model uncertainty). A robust counterpart approach for linear uncertain problems is adopted in this study, which represents the uncertain stochastic problem as its deterministic equivalent. Robustness is controlled by a single parameter providing a trade-off between the probability of constraint violation and the objective cost. Two principal models are developed: uncorrelated uncertainty model with implicit design reliability, and correlated uncertainty model with explicit design reliability. The models are tested on three example applications and compared for uncertainty in consumers' demands. The main contribution of this study is the inclusion of the ability to explicitly account for different correlations between water distribution system demand nodes. In particular, it is shown that including correlation information in the design phase has a substantial advantage in seeking more efficient robust solutions.

  9. Optimizing communication satellites payload configuration with exact approaches

    NASA Astrophysics Data System (ADS)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  10. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.

  11. SolOpt: A Novel Approach to Solar Rooftop Optimization

    SciTech Connect

    Lisell, L.; Metzger, I.; Dean, J.

    2011-01-01

    Traditionally, photovoltaic (PV) and solar hot water (SHW) systems have been designed with separate design tools, making it difficult to determine the appropriate mix of PV and SHW. A new tool developed at the National Renewable Energy Laboratory changes how the analysis is conducted through an integrated approach based on the life-cycle cost effectiveness of each system. With 10 inputs, someone with only basic knowledge of the building can simulate energy production from PV and SHW, and predict the optimal sizes of the systems. The user can select from four optimization criteria currently available: Greenhouse Gas Reduction, Net-Present Value, Renewable Energy Production, and Discounted Payback Period. SolOpt provides unique analysis capabilities that aren't currently available in any other software program. Validation results with industry-accepted tools for both SHW and PV are presented.

  12. Optimal approach to quantum communication using dynamic programming.

    PubMed

    Jiang, Liang; Taylor, Jacob M; Khaneja, Navin; Lukin, Mikhail D

    2007-10-30

    Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, this has to be accomplished by noisy channels (such as optical fibers) that generally result in exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic programming-based searching algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states. PMID:17959783

  13. Standardized approach for developing probabilistic exposure factor distributions

    SciTech Connect

    Maddalena, Randy L.; McKone, Thomas E.; Sohn, Michael D.

    2003-03-01

    The effectiveness of a probabilistic risk assessment (PRA) depends critically on the quality of input information that is available to the risk assessor and specifically on the probabilistic exposure factor distributions that are developed and used in the exposure and risk models. Deriving probabilistic distributions for model inputs can be time consuming and subjective. The absence of a standard approach for developing these distributions can result in PRAs that are inconsistent and difficult to review by regulatory agencies. We present an approach that reduces subjectivity in the distribution development process without limiting the flexibility needed to prepare relevant PRAs. The approach requires two steps. First, we analyze data pooled at a population scale to (1) identify the most robust demographic variables within the population for a given exposure factor, (2) partition the population data into subsets based on these variables, and (3) construct archetypal distributions for each subpopulation. Second, we sample from these archetypal distributions according to site- or scenario-specific conditions to simulate exposure factor values and use these values to construct the scenario-specific input distribution. It is envisaged that the archetypal distributions from step 1 will be generally applicable so risk assessors will not have to repeatedly collect and analyze raw data for each new assessment. We demonstrate the approach for two commonly used exposure factors--body weight (BW) and exposure duration (ED)--using data for the U.S. population. For these factors we provide a first set of subpopulation based archetypal distributions along with methodology for using these distributions to construct relevant scenario-specific probabilistic exposure factor distributions.

  14. Multiplicative approximations, optimal hypervolume distributions, and the choice of the reference point.

    PubMed

    Friedrich, Tobias; Neumann, Frank; Thyssen, Christian

    2015-01-01

    Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing μ points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations. PMID:24654679
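
    For bi-objective minimization problems the hypervolume indicator discussed above has a simple closed form; a short sketch of the standard sweep computation (not code from the paper):

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a non-dominated 2-D point set (minimization)
    with respect to a reference point that is worse in both objectives."""
    pts = sorted(points)                     # ascending f1, hence descending f2
    hv = 0.0
    for i, (f1, f2) in enumerate(pts):
        nxt_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (nxt_f1 - f1) * (ref[1] - f2)  # strip dominated only by point i
    return hv

# three points on the linear front f2 = 1 - f1
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(hypervolume_2d(front, ref=(2.0, 2.0)))  # 3.25
```

    Moving the reference point changes which placements of the μ points maximize the indicator, which is exactly why its choice matters in the analysis above.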

  15. Perspective: Codesign for materials science: An optimal learning approach

    NASA Astrophysics Data System (ADS)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.
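
    One common way to realize such an explore-exploit loop is Bayesian optimization with an expected-improvement criterion; the sketch below uses a Gaussian-process surrogate, with the target function, kernel length scale, and noise level all chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def property_of(x):                      # hypothetical noisy "experiment"
    return float(-(x - 0.3) ** 2 + 0.05 * rng.standard_normal())

X = [[0.0], [0.5], [1.0]]                # initial experiments
y = [property_of(x[0]) for x in X]
cand = np.linspace(0, 1, 201).reshape(-1, 1)

for _ in range(10):                      # adaptive design loop
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    z = (mu - max(y)) / np.maximum(sd, 1e-9)
    ei = (mu - max(y)) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = cand[int(np.argmax(ei))]
    X.append(list(x_next))
    y.append(property_of(x_next[0]))

print(f"best design found: x = {X[int(np.argmax(y))][0]:.2f}")
```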

  16. Optimal active power dispatch by network flow approach

    SciTech Connect

    Carvalho, M.F. ); Soares, S.; Ohishi, T. )

    1988-11-01

    In this paper, the optimal active power dispatch problem is formulated as a nonlinear capacitated network flow problem with additional linear constraints. Transmission flow limits and both Kirchhoff's laws are taken into account. The problem is solved by a Generalized Upper Bounding technique that takes advantage of the network flow structure of the problem. The new approach has potential applications to power system problems such as economic dispatch, load supplying capability, minimum load shedding, and generation-transmission reliability. The paper also reviews the use of transportation models for power system analysis. A detailed illustrative example is presented.
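
    The transportation-model relaxation reviewed in the paper (first Kirchhoff law only; the full method also enforces the second law and exploits the flow structure via generalized upper bounding) can be written as a small linear program. A sketch on an assumed three-bus example:

```python
from scipy.optimize import linprog

# variables: [g1, g2, f13, f23, f12]  (generation and line flows, MW)
cost = [10.0, 30.0, 0.0, 0.0, 0.0]      # $/MWh for the two generators

# nodal balance (first Kirchhoff law) at each bus
A_eq = [[1, 0, -1,  0, -1],             # bus 1: g1 = f13 + f12
        [0, 1,  0, -1,  1],             # bus 2: g2 + f12 = f23
        [0, 0,  1,  1,  0]]             # bus 3: f13 + f23 = load
b_eq = [0, 0, 120]

bounds = [(0, 100), (0, 80),                # generator limits
          (-80, 80), (-80, 80), (-50, 50)]  # line flow limits, either direction

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)   # cheapest dispatch meeting load and line limits
```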

  17. Optimal reconstruction of reaction rates from particle distributions

    NASA Astrophysics Data System (ADS)

    Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2010-05-01

    Random walk particle tracking methodologies to simulate solute transport of conservative species constitute an attractive alternative for their computational efficiency and absence of numerical dispersion. Yet, problems stemming from the reconstruction of concentrations from particle distributions have typically prevented their use in reactive transport problems. The numerical problem mainly arises from the need to first reconstruct the concentrations of species/components from a discrete number of particles, which is an error-prone process, and then to compute a spatial functional of the concentrations and/or their derivatives (either spatial or temporal). Errors are then propagated, so that common strategies to reconstruct this functional require an unfeasible number of particles when dealing with nonlinear reactive transport problems. In this context, this article presents a methodology to directly reconstruct this functional based on kernel density estimators. The methodology mitigates the error propagation in the evaluation of the functional by avoiding the prior estimation of the actual concentrations of species. The multivariate kernel associated with the corresponding functional depends on the size of the support volume, which defines the area over which a given particle can influence the functional. The shape of the kernel functions and the size of the support volume determine the degree of smoothing, which is optimized to obtain the best unbiased predictor of the functional using an iterative plug-in support volume selector. We applied the methodology to directly reconstruct the reaction rates of a precipitation/dissolution problem involving the mixing of two different waters carrying two aqueous species in chemical equilibrium and moving through a randomly heterogeneous porous medium.
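
    The kernel idea can be illustrated in one dimension; the sketch below shows only plain kernel density reconstruction of a concentration and a derived functional (the paper goes further, estimating the reactive functional directly with a multivariate kernel and an iterative plug-in selector for the support volume; the bandwidth here is an arbitrary assumption):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# synthetic plume: particle positions of a conservative species in 1-D
particles = rng.normal(0.0, 1.0, 5000)
grid = np.linspace(-4, 4, 401)

# kernel density estimate of concentration; the bandwidth plays the role
# of the support volume and sets the degree of smoothing
kde = gaussian_kde(particles, bw_method=0.3)
conc = kde(grid)

# a functional of the concentration field (here its spatial gradient),
# evaluated from the smooth reconstruction rather than raw bin counts
grad = np.gradient(conc, grid)
print(conc.max(), np.abs(grad).max())
```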

  18. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before being used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly-developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, a parameterization step was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
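
    A minimal PSO loop of the kind used for such calibrations (the two-parameter objective below is a stand-in for a SWAT run scored against observed streamflow; all constants are conventional defaults, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for parameter estimation."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))  # positions = parameter sets
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    gbest = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()]
    return gbest, pval.min()

# hypothetical calibration target: squared error of simulated vs observed flow
best, err = pso(lambda p: (p[0] * p[1] - 2.0) ** 2, bounds=[(0, 4), (0, 4)])
print(best, err)
```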

  19. Double-layer evolutionary algorithm for distributed optimization of particle detection on the Grid

    NASA Astrophysics Data System (ADS)

    Padée, Adam; Kurek, Krzysztof; Zaremba, Krzysztof

    2013-08-01

    Reconstruction of particle tracks from information collected by position-sensitive detectors is an important procedure in HEP experiments. It is usually controlled by a set of numerical parameters which have to be manually optimized. This paper proposes an automatic approach to this task by utilizing an evolutionary algorithm (EA) operating on both real-valued and binary representations. Because of the computational complexity of the task, a special distributed architecture of the algorithm is proposed, designed to be run in a grid environment. It is a two-level hierarchical hybrid, utilizing an asynchronous master-slave EA on the level of clusters and an island-model EA on the level of the grid. The technical aspects of using production grid infrastructure are covered, including communication protocols on both levels. The paper also deals with the problem of heterogeneity of the resources, presenting efficiency tests on a benchmark function. These tests confirm that even relatively small islands (clusters) can be beneficial to the optimization process when connected to larger ones. Finally, a real-life usage example is presented: the optimization of track reconstruction in the Large Angle Spectrometer of the NA-58 COMPASS experiment at CERN, using a sample of Monte Carlo simulated data. The overall reconstruction efficiency gain achieved by the proposed method is more than 4% compared to the manually optimized parameters.
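
    A serial sketch of the island-model layer (on the grid, each island would be a cluster running its own master-slave EA and exchanging migrants asynchronously; population sizes, mutation scale, and the ring topology are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def island_ea(fitness, dim, n_islands=4, pop=20, gens=100, migrate_every=10):
    islands = [rng.standard_normal((pop, dim)) for _ in range(n_islands)]
    for g in range(gens):
        for i, P in enumerate(islands):
            f = np.array([fitness(x) for x in P])
            parents = P[np.argsort(f)[: pop // 2]]        # truncation selection
            children = parents + 0.1 * rng.standard_normal(parents.shape)
            islands[i] = np.vstack([parents, children])
        if g % migrate_every == 0:                        # ring migration of elites
            elites = [P[np.argmin([fitness(x) for x in P])] for P in islands]
            for i in range(n_islands):
                worst = np.argmax([fitness(x) for x in islands[i]])
                islands[i][worst] = elites[(i - 1) % n_islands]
    return min((x for P in islands for x in P), key=fitness)

# toy stand-in for reconstruction quality as a function of the parameters
print(island_ea(lambda x: float(np.sum(x ** 2)), dim=5))
```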

  20. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
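
    A compact sketch of the gradient computation the abstract refers to: the CP objective and its exact gradient cost roughly one ALS sweep to evaluate, after which any first-order method applies (here L-BFGS on a small synthetic rank-3 tensor; the unfolding and Khatri-Rao conventions are the usual ones, not code from the paper):

```python
import numpy as np
from scipy.linalg import khatri_rao
from scipy.optimize import minimize

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1, order="F")

def cp_objective(theta, X, rank):
    """0.5 * ||X - [[A, B, C]]||_F^2 with its exact gradient."""
    I, J, K = X.shape
    A, B, C = np.split(theta, [I * rank, (I + J) * rank])
    A, B, C = A.reshape(I, rank), B.reshape(J, rank), C.reshape(K, rank)
    R0 = unfold(X, 0) - A @ khatri_rao(C, B).T
    f = 0.5 * np.sum(R0 * R0)
    gA = -R0 @ khatri_rao(C, B)
    gB = -(unfold(X, 1) - B @ khatri_rao(C, A).T) @ khatri_rao(C, A)
    gC = -(unfold(X, 2) - C @ khatri_rao(B, A).T) @ khatri_rao(B, A)
    return f, np.concatenate([gA.ravel(), gB.ravel(), gC.ravel()])

rng = np.random.default_rng(5)
I, J, K, rank = 6, 5, 4, 3
A0, B0, C0 = (rng.standard_normal((n, rank)) for n in (I, J, K))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)    # noise-free rank-3 tensor

theta0 = rng.standard_normal((I + J + K) * rank)
res = minimize(cp_objective, theta0, args=(X, rank), jac=True, method="L-BFGS-B")
print(res.fun)    # small residual: the recovered factors fit the tensor
```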

  1. Silanization of glass chips—A factorial approach for optimization

    NASA Astrophysics Data System (ADS)

    Vistas, Cláudia R.; Águas, Ana C. P.; Ferreira, Guilherme N. M.

    2013-12-01

    Silanization of glass chips with 3-mercaptopropyltrimethoxysilane (MPTS) was investigated and optimized to generate a high-quality layer with well-oriented thiol groups. A full factorial design was used to evaluate the influence of silane concentration and reaction time. The stabilization of the silane monolayer by thermal curing was also investigated, and a disulfide reduction step was included to fully regenerate the thiol-modified surface function. Fluorescence analysis and water contact angle measurements were used to quantitatively assess the chemical modifications, wettability and quality of modified chip surfaces throughout the silanization, curing and reduction steps. The factorial design enables a systematic approach for the optimization of glass chips silanization process. The optimal conditions for the silanization were incubation of the chips in a 2.5% MPTS solution for 2 h, followed by a curing process at 110 °C for 2 h and a reduction step with 10 mM dithiothreitol for 30 min at 37 °C. For these conditions the surface density of functional thiol groups was 4.9 × 1013 molecules/cm2, which is similar to the expected maximum coverage obtained from the theoretical estimations based on projected molecular area (∼5 × 1013 molecules/cm2).

  2. A linear programming model for optimizing HDR brachytherapy dose distributions with respect to mean dose in the DVH-tail

    SciTech Connect

    Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson

    2013-08-15

    Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high quality dose distributions.
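
    The conditional value-at-risk surrogate mentioned above has a standard linear form (Rockafellar-Uryasev); written for the mean dose in the hottest (1 - α) tail of a structure with dose values d_i, it is

\[
\mathrm{CVaR}_{\alpha}(d) \;=\; \min_{\zeta}\;\Big(\zeta + \frac{1}{(1-\alpha)\,n}\sum_{i=1}^{n}\max(d_i - \zeta,\,0)\Big),
\]

    so a constraint CVaR_α(d) ≤ U becomes the linear system ζ + (1/((1-α)n)) Σ_i s_i ≤ U with s_i ≥ d_i - ζ and s_i ≥ 0, where d_i itself is linear in the dwell times. (This is the generic construction; the paper's exact constraint set may differ.)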

  3. A statistical approach to optimizing concrete mixture design.

    PubMed

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
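
    A sketch of the regression step on a synthetic full-factorial data set (the strength model and noise are invented stand-ins for the measured responses; the factor levels are those quoted above):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)

# full 3^3 factorial: w/cm ratio, cementitious content (kg/m³), fine/total agg.
levels = [(0.38, 0.43, 0.48), (350, 375, 400), (0.35, 0.40, 0.45)]
runs = np.array(list(product(*levels)))

# hypothetical compressive strengths (MPa) standing in for lab measurements
strength = (90 - 80 * runs[:, 0] + 0.02 * runs[:, 1]
            + 10 * runs[:, 2] + rng.normal(0, 1, len(runs)))

# quadratic response-surface model fitted by least squares
w, c, a = runs.T
D = np.column_stack([np.ones_like(w), w, c, a,
                     w * c, w * a, c * a, w**2, c**2, a**2])
coef, *_ = np.linalg.lstsq(D, strength, rcond=None)

# use the fitted polynomial to pick the best mixture among the candidates
print(runs[np.argmax(D @ coef)])
```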

  4. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405

  5. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  6. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  7. Distributed Generators Allocation in Radial Distribution Systems with Load Growth using Loss Sensitivity Approach

    NASA Astrophysics Data System (ADS)

    Kumar, Ashwani; Vijay Babu, P.; Murty, V. V. S. N.

    2016-07-01

    Rapidly increasing electricity demands and capacity shortages of transmission and distribution facilities are the main driving forces for the growth of distributed generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selection of effective DG characteristics and DG parameters is a significant concern of distribution system planners seeking maximum potential benefits from the DG unit. The objective of the paper is to reduce the power losses and improve the voltage profile of the radial distribution system with optimal allocation of multiple DG units in the system. The main contributions of this paper are (i) a combined power loss sensitivity (CPLS) based method for selecting multiple DG locations, (ii) determination of optimal sizes for multiple DG units at unity and lagging power factors, (iii) the impact of DG installed at the optimal (i.e., combined load) power factor on system performance, (iv) the impact of load growth on optimal DG planning, (v) the impact of DG integration on the voltage stability index, and (vi) the economic and technical impact of DG integration in distribution systems. The load growth factor, which is essential for planning and expansion of existing systems, has been considered in the study. The technical and economic aspects are investigated in terms of improvement in voltage profile, reduction in total power losses, cost of energy loss, cost of power obtained from DG, cost of power intake from the substation, and savings in cost of energy loss. The results are obtained on the IEEE 69-bus radial distribution system and are also compared with other existing methods.

  8. Optimized Switch Allocation to Improve the Restoration Energy in Distribution Systems

    NASA Astrophysics Data System (ADS)

    Dezaki, Hamed H.; Abyaneh, Hossein A.; Agheli, Ali; Mazlumi, Kazem

    2012-01-01

    In distribution networks, switching devices play a critical role in energy restoration and in improving reliability indices. This paper presents a novel objective function to optimally allocate switches in electric power distribution systems. Identifying the optimized locations of the switches is a nonlinear programming (NLP) problem. In the proposed objective function, a new auxiliary function is used to simplify the calculation of the objective function; the output of the auxiliary function is binary. The genetic algorithm (GA) optimization method is used to solve this optimization problem. The proposed method is applied to a real distribution network, and the results reveal that the method is successful.
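
    A compact binary GA of the kind described, with a hypothetical objective (a real model would trade restoration energy against switch cost over the actual feeder topology):

```python
import numpy as np

rng = np.random.default_rng(7)

N_SECTIONS = 12     # candidate switch locations along the feeder (assumed)

def cost(chrom):
    """Hypothetical objective: penalize long unswitched runs (poor restoration)
    plus a capital cost per installed switch."""
    idx = np.flatnonzero(chrom)
    if idx.size == 0:
        return 1e6
    longest_run = np.diff(np.r_[0, idx, N_SECTIONS - 1]).max()
    return 50.0 * longest_run + 10.0 * idx.size

def ga(pop=40, gens=150, pm=0.05):
    P = rng.integers(0, 2, (pop, N_SECTIONS))
    for _ in range(gens):
        f = np.array([cost(c) for c in P])
        parents = P[np.argsort(f)[: pop // 2]]
        cut = rng.integers(1, N_SECTIONS, pop // 2)       # one-point crossover
        kids = np.array([np.r_[parents[i, :c], parents[(i + 1) % len(parents), c:]]
                         for i, c in enumerate(cut)])
        kids ^= (rng.random(kids.shape) < pm).astype(kids.dtype)  # bit flips
        P = np.vstack([parents, kids])
    f = np.array([cost(c) for c in P])
    return P[f.argmin()], f.min()

print(ga())
```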

  9. Rapid optimization of tension distribution for cable-driven parallel manipulators with redundant cables

    NASA Astrophysics Data System (ADS)

    Ouyang, Bo; Shang, Weiwei

    2016-03-01

    The solution of tension distributions is infinite for cable-driven parallel manipulators (CDPMs) with redundant cables. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for avoiding operating points on the lower tension limit is developed. Simulation experiments are implemented on a six degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution is thus rapidly established in real time by the proposed method.

  10. A systems biology approach to radiation therapy optimization.

    PubMed

    Brahme, Anders; Lind, Bengt K

    2010-05-01

    During the last 20 years, the field of cellular and not least molecular radiation biology has developed substantially and can today describe the response of heterogeneous tumors and organized normal tissues to radiation therapy quite well. An increased understanding of the sub-cellular and molecular response is leading to a more general systems biological approach to radiation therapy and treatment optimization. It is interesting that most of the characteristics of the tissue infrastructure, such as the vascular system and the degree of hypoxia, have to be considered to get an accurate description of tumor and normal tissue responses to ionizing radiation. In the limited space available, only a brief description of some of the most important concepts and processes is possible, starting from the key functional genomics pathways of the cell that are not only responsible for tumor development but also for the response of the cells to radiation therapy. The key mechanisms for cellular damage and damage repair are described. It is furthermore discussed how these processes can be brought to inactivate the tumor without severely damaging surrounding normal tissues using suitable radiation modalities like intensity-modulated radiation therapy (IMRT) or light ions. The use of such methods may lead to a truly scientific approach to radiation therapy optimization, particularly when in vivo predictive assays of radiation responsiveness become clinically available on a larger scale. Brief examples of the efficiency of IMRT are also given, showing how sensitive normal tissues can be spared at the same time as highly curative doses are delivered to a tumor that is often radiation resistant and located near organs at risk. This new approach maximizes the probability of eradicating the tumor while, at the same time, adverse reactions in sensitive normal tissues are minimized as far as possible using IMRT with photons and light ions. PMID:20191284

  11. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result within 3% in fluence map and 1% in dose of the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation

  12. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result within 3% in fluence map and 1% in dose of the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time

  13. OPTIMAL SCHEDULING OF BOOSTER DISINFECTION IN WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Booster disinfection is the addition of disinfectant at locations distributed throughout a water distribution system. Such a strategy can reduce the mass of disinfectant required to maintain a detectable residual at points of consumption in the distribution system, which may lea...

  14. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  15. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the optimization results, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. The two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimal configuration was capable of capturing soil-landscape relationships accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means, as well as a theoretical basis, for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
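
    A minimal version of the annealing step (the spread criterion below is a simple stand-in for the study's configuration objective, and all temperatures and counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

# candidate sampling sites, e.g. restricted to a road buffer (synthetic)
cand = rng.uniform(0, 10, (300, 2))

def criterion(idx):
    """Negative mean nearest-neighbour distance: lower means better spread."""
    pts = cand[list(idx)]
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return -d.min(axis=1).mean()

def anneal(k=30, T0=1.0, cool=0.995, steps=4000):
    idx = set(int(i) for i in rng.choice(len(cand), k, replace=False))
    e, T = criterion(idx), T0
    for _ in range(steps):
        new = int(rng.integers(len(cand)))
        if new in idx:
            continue
        out = int(rng.choice(list(idx)))
        trial = (idx - {out}) | {new}                     # swap one site
        e2 = criterion(trial)
        if e2 < e or rng.random() < np.exp((e - e2) / T): # Metropolis rule
            idx, e = trial, e2
        T *= cool
    return sorted(idx), e

sites, score = anneal()
print(len(sites), score)
```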

  16. A Multi-agent Approach to Distribution System Restoration

    NASA Astrophysics Data System (ADS)

    Nagata, Takeshi; Tao, Yasuhiro; Sasaki, Hiroshi; Fujita, Hideki

    This paper proposes a multi-agent approach to decentralized power system restoration for a distribution network. The proposed method consists of several Feeder Agents (FAGs) and Load Agents (LAGs). A LAG corresponds to a customer load, while a FAG acts as a manager for the decision process. From the simulation results, it can be seen that the proposed multi-agent system reaches the right solution by making use of only local information. This means that the proposed multi-agent restoration system is a promising approach for larger-scale distribution networks.

  17. Spectral Approach to Optimal Estimation of the Global Average Temperature.

    NASA Astrophysics Data System (ADS)

    Shen, Samuel S. P.; North, Gerald R.; Kim, Kwang-Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.
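
    In this framework the minimum mean-square-error weights solve a small linear system; with C the station-station anomaly covariance and c the covariance between each station and the true global mean (notation assumed here, not taken from the paper), the unconstrained solution is

\[
\min_{\mathbf{w}}\; E\!\left[\Big(\bar{T}-\sum_i w_i T_i\Big)^{\!2}\right]
\;\Rightarrow\;
\mathbf{w}^{*}=\mathbf{C}^{-1}\mathbf{c},
\qquad
\epsilon^{2}_{\min}=E[\bar{T}^{2}]-\mathbf{c}^{\top}\mathbf{C}^{-1}\mathbf{c},
\]

    and expanding C and c in the EOF basis splits the minimum error into the sum of modal contributions described above.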

  18. Spectral approach to optimal estimation of the global average temperature

    SciTech Connect

    Shen, S.S.P.; North, G.R.; Kim, K.Y.

    1994-12-01

    Making use of EOF analysis and statistical optimal averaging techniques, the problem of random sampling error in estimating the global average temperature by a network of surface stations has been investigated. The EOF representation makes it unnecessary to use simplified empirical models of the correlation structure of temperature anomalies. If an adjustable weight is assigned to each station according to the criterion of minimum mean-square error, a formula for this error can be derived that consists of a sum of contributions from successive EOF modes. The EOFs were calculated from both observed data and a noise-forced EBM for the problem of one-year and five-year averages. The mean square statistical sampling error depends on the spatial distribution of the stations, the length of the averaging interval, and the choice of the weight for each station data stream. Examples used here include four symmetric configurations of 4 × 4, 6 × 4, 9 × 7, and 20 × 10 stations and the Angell-Korshover configuration. Comparisons with the 100-yr U.K. dataset show that correlations for the time series of the global temperature anomaly average between the full dataset and this study's sparse configurations are rather high. For example, the 63-station Angell-Korshover network with uniform weighting explains 92.7% of the total variance, whereas the same network with optimal weighting can lead to 97.8% explained total variance of the U.K. dataset.

  19. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives are discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/controls can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule, and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high-dimensional spaces, particularly in the case when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions) and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM approach. This allows for decoupled, moderately high-dimensional, parameterized computations of the stochastic optimality system, where at each collocation point deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates for the fully discrete problems will be described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical

  20. Optimal Placement of Distributed Generation Units in a Distribution System with Uncertain Topologies using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Donadel, Clainer Bravin; Fardin, Jussara Farias; Encarnação, Lucas Frizera

    2015-10-01

    In the literature, several papers propose new methodologies to determine the optimal placement/sizing of medium-size Distributed Generation units (DGs), using heuristic algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, in all these methodologies, the optimal placement solution is strongly dependent on the network topology; therefore, a specific solution is valid only for a particular topology. Furthermore, such methodologies do not consider the presence of small DGs, whose connection point cannot be defined by Distribution Network Operators (DNOs). This paper proposes a new methodology to determine the optimal location of medium-size DGs in a distribution system with uncertain topologies, considering the particular behavior of small DGs, using Monte Carlo Simulation.

  1. Wireless Sensing, Monitoring and Optimization for Campus-Wide Steam Distribution

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja; Sukumar, Sreenivas R; Woodworth, Ken; Lake, Joe E

    2011-11-01

    The US Congress has passed legislation dictating that all government agencies establish a plan and process for improving energy efficiency at their sites. In response to this legislation, Oak Ridge National Laboratory (ORNL) has recently conducted a pilot study to explore the deployment of a wireless sensor system for real-time, measurement-based energy efficiency optimization. With particular focus on the 12-mile-long steam distribution network on our campus, we propose an integrated system-level approach to optimize energy delivery within the steam distribution system. Our approach leverages an integrated wireless sensor and real-time monitoring capability. We make real-time state assessments of steam trap health and steam flow in the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and feeding these measurements to state estimators for system health. Our assessments are based on a spectral-based energy signature scheme that interprets acoustic vibration sensor data to estimate steam flow rates and assess steam trap status. Experimental results show that the energy signature scheme has the potential to identify different steam trap states and has sufficient sensitivity to estimate flow rate. Moreover, results indicate a nearly quadratic relationship over the test region between the overall energy signature factor and the flow rate in the pipe. We are able to present the steam flow and steam trap status, sensor readings, and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. The goal is to achieve significant energy savings in steam lines by monitoring and acting on leaking steam pipes/traps/valves. We believe our demonstration serves as an instantiation of a platform that can be extended to include newer modalities to manage water flow, sewage and energy consumption.

  2. Collaborative Distributed Scheduling Approaches for Wireless Sensor Network

    PubMed Central

    Niu, Jianjun; Deng, Zhidong

    2009-01-01

    Energy constraints restrict the lifetime of wireless sensor networks (WSNs) with battery-powered nodes, which poses great challenges for their large-scale application. In this paper, we propose a family of collaborative distributed scheduling approaches (CDSAs) based on the Markov process to reduce the energy consumption of a WSN. The family of CDSAs comprises two approaches: a one-step collaborative distributed approach and a two-step collaborative distributed approach. The approaches enable nodes to learn the behavior information of their environment collaboratively and to integrate sleep scheduling with transmission scheduling to reduce energy consumption. We analyze the adaptability and practicality of the CDSAs. The simulation results show that the two proposed approaches can effectively reduce nodes' energy consumption. Some other characteristics of the CDSAs, like buffer occupation and packet delay, are also analyzed in this paper. We evaluate the CDSAs extensively on a 15-node WSN testbed. The test results show that the CDSAs conserve energy effectively and are feasible for real WSNs. PMID:22408491

  3. Model reduction for chemical kinetics: An optimization approach

    SciTech Connect

    Petzold, L.; Zhu, W.

    1999-04-01

    The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive species and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reducing the number of species and reactions in chemical kinetics models is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.

  4. Approaches of Russian oil companies to optimal capital structure

    NASA Astrophysics Data System (ADS)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will increase over the coming decades along with population growth and social needs. A shift away from the raw-material orientation of the Russian economy and a transition to an innovation-driven path of development do not preclude the development of the oil industry in the future. Moreover, society believes that this sector must bring the Russian economy onto the road of innovative development through neo-industrialization. To achieve this, both government action and effective capital management within companies are required. To form an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies shows different approaches, reasons, and conditions, and, consequently, different equity-to-debt relationships and costs of capital, which demands an effective capital management strategy.

  5. An optimization approach and its application to compare DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Liwei; Li, Chao; Bai, Fenglan; Zhao, Qi; Wang, Ying

    2015-02-01

    Studying the evolutionary relationships between biological sequences by comparing and analyzing gene sequences has become one of the main tasks in bioinformatics research. Many valid methods have been applied to DNA sequence alignment. In this paper, we propose a novel comparison method based on the Lempel-Ziv (LZ) complexity to compare biological sequences. Moreover, we introduce a new distance measure and make use of the corresponding similarity matrix to construct phylogenetic trees without multiple sequence alignment. Further, we construct phylogenetic trees for 24 species of eutherian mammals and 48 hepatitis E virus (HEV) sequences from different countries by an optimization approach. The results indicate that this new method improves the efficiency of sequence comparison and successfully constructs phylogenies.
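
    A small sketch of one common LZ-based dissimilarity (using an LZ78-style phrase count as the complexity measure; the paper's exact definitions may differ):

```python
def lz_phrases(s):
    """LZ78-style phrase count, a simple stand-in complexity measure c(s)."""
    seen, w, n = set(), "", 0
    for ch in s:
        w += ch
        if w not in seen:
            seen.add(w)
            n, w = n + 1, ""
    return n + (1 if w else 0)

def lz_distance(s, q):
    """Dissimilarity from how much new structure each sequence adds to the
    other, normalized by the individual complexities (cf. Otu & Sayood)."""
    cs, cq = lz_phrases(s), lz_phrases(q)
    return (lz_phrases(s + q) - cs + lz_phrases(q + s) - cq) / (cs + cq)

a = "ATGCGATCGTACGTAGCTAGCTAGGCTA"
b = "ATGCGATCGTACGTAGCTAGCTAGGCTT"   # one substitution away from a
c = "TTTTTTTAAAAAAACCCCCCCGGGGGGG"
print(lz_distance(a, b), lz_distance(a, c))  # similar pair typically lower
```

    A matrix of such pairwise distances can then feed a standard distance-based tree builder (e.g. neighbor joining) without any multiple sequence alignment.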

  6. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflects an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite the recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent "next-generation" DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  7. [OPTIMAL APPROACH TO COMBINED TREATMENT OF PATIENTS WITH UROGENITAL PAPILLOMATOSIS].

    PubMed

    Breusov, A A; Kulchavenya, E V; Brizhatyukl, E V; Filimonov, P N

    2015-01-01

    The review analyzed 59 sources of domestic and foreign literature on the use of the immunomodulator isoprinosine (izoprinozin) in treating patients infected with human papillomavirus (HPV), together with the results of the authors' own experience. The high prevalence of HPV and its role in the development of cervical cancer are shown, and the mechanisms of HPV infection and of host protection against it are described. The authors present approaches to the treatment of HPV-infected patients, with particular attention to isoprinosine. Isoprinosine belongs to the immunomodulators with antiviral activity. It inhibits the replication of viral DNA and RNA by binding to cell ribosomes and changing their stereochemical structure. HPV infection, especially in its early stages, may be successfully cured up to complete elimination of the virus. Inosine pranobex (isoprinosine), having a dual action and the most abundant evidence base, may be recognized as the optimal treatment option. PMID:26859953

  8. Structural Query Optimization in Native XML Databases: A Hybrid Approach

    NASA Astrophysics Data System (ADS)

    Haw, Su-Cheng; Lee, Chien-Sing

    As XML (eXtensible Mark-up Language) gains popularity in data exchange over the Web, querying XML data has become an important issue to be addressed. In native XML databases (NXD), XML documents are usually modeled as trees, and XML queries are typically specified in path expressions. The primitive structural relationships are Parent-Child (P-C), Ancestor-Descendant (A-D), sibling, and ordered query. Thus, a suitable and compact labeling scheme is crucial to identify these relationships and hence to process queries efficiently. We propose a novel labeling scheme consisting of <self-level:parent> labels to support all these relationships efficiently. Besides, we adopt the decomposition-matching-merging approach for structural query processing and propose a hybrid query optimization technique, TwigINLAB, to process and optimize twig query evaluation. Experimental results indicate that TwigINLAB can process all types of XML queries 15% better than the TwigStack algorithm in terms of execution time in most test cases.
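
    A toy illustration of how a <self-level:parent>-style label can answer P-C and A-D tests without touching the document again (the exact label layout in the paper may differ):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    tag: str
    children: list = field(default_factory=list)
    label: tuple = None          # (self_id, level, parent_id)

def assign_labels(root):
    """Preorder DFS labeling; returns an id -> node index for the demo."""
    index, stack, nid = {}, [(root, 0, -1)], 0
    while stack:
        node, level, parent = stack.pop()
        node.label = (nid, level, parent)
        index[nid] = node
        for child in reversed(node.children):
            stack.append((child, level + 1, nid))
        nid += 1
    return index

def is_parent_child(a, b):
    return b.label[2] == a.label[0]

def is_ancestor_descendant(a, b, index):
    p = b.label[2]
    while p != -1:               # follow parent pointers upward
        if p == a.label[0]:
            return True
        p = index[p].label[2]
    return False

# tiny document: <book><title/><chapter><sec/></chapter></book>
sec = Node("sec"); chap = Node("chapter", [sec]); title = Node("title")
book = Node("book", [title, chap])
idx = assign_labels(book)
print(is_parent_child(book, chap), is_ancestor_descendant(book, sec, idx))
```

    A sibling test is simply equality of the parent field, and the level component distinguishes direct children from deeper descendants.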

  9. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by means of an example, and suggest its use as an integral part of the space system design process.
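
    For reference, the signal-to-noise ratios used in this style of robust design take standard Taguchi forms (these generic formulas are not quoted in the abstract); for a larger-the-better response y measured over n noise replicates,

\[
SN \;=\; -10\,\log_{10}\!\Big(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\Big),
\]

    while a smaller-the-better response uses SN = -10 log10((1/n) Σ y_i²); the design parameter levels are then chosen, column by column of the orthogonal array, to maximize SN.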

  10. An improved ant colony optimization approach for optimization of process planning.

    PubMed

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, the process planning problem is described based on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operations, and the possible visited paths among operations, respectively. The ant colony goes through the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy proposed in this paper is incorporated in the standard ACO; it includes a Global Update Rule and a Local Update Rule. A simple method of controlling the repeated number of identical process plans is designed to avoid local convergence. A case study has been carried out to examine the influence of various ACO parameters on system performance. Extensive comparative experiments have been carried out to validate the feasibility and efficiency of the proposed approach. PMID:25097874

  11. An Improved Ant Colony Optimization Approach for Optimization of Process Planning

    PubMed Central

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, the process planning problem is described based on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operations, and the possible visited paths among operations, respectively. The ant colony goes through the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy proposed in this paper is incorporated in the standard ACO; it includes a Global Update Rule and a Local Update Rule. A simple method of controlling the repeated number of identical process plans is designed to avoid local convergence. A case study has been carried out to examine the influence of various ACO parameters on system performance. Extensive comparative experiments have been carried out to validate the feasibility and efficiency of the proposed approach. PMID:25097874
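
    A condensed sketch of such an ACO on a toy five-operation instance (the costs, precedence constraints, and both update rules here are illustrative, not the paper's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(9)

# toy instance (assumed): cost[i, j] = cost of performing j right after i,
# prec[op] = set of operations that must be completed first
cost = rng.uniform(1, 10, (5, 5))
prec = {3: {1}, 4: {2, 3}}

def eligible(done):
    return [o for o in range(5)
            if o not in done and prec.get(o, set()) <= set(done)]

def tour_cost(seq):
    return sum(cost[a, b] for a, b in zip(seq, seq[1:]))

def aco(n_ants=20, iters=200, rho=0.1, alpha=1.0, beta=2.0):
    tau = np.ones((5, 5))                      # pheromone on directed arcs
    best, best_c = None, np.inf
    for _ in range(iters):
        for _ant in range(n_ants):
            seq = []
            while len(seq) < 5:
                elig = eligible(seq)
                if not seq:
                    seq.append(int(rng.choice(elig)))
                    continue
                i = seq[-1]
                w = tau[i, elig] ** alpha * (1.0 / cost[i, elig]) ** beta
                j = int(rng.choice(elig, p=w / w.sum()))
                tau[i, j] *= 1 - rho           # local update: diversify search
                seq.append(j)
            c = tour_cost(seq)
            if c < best_c:
                best, best_c = seq, c
        tau *= 1 - rho                         # global update: evaporation...
        for a, b in zip(best, best[1:]):       # ...plus deposit on best plan
            tau[a, b] += 1.0 / best_c
    return best, best_c

print(aco())
```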

  12. A normative inference approach for optimal sample sizes in decisions from experience.

    PubMed

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    "Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  13. A simple optimization approach for improving target dose homogeneity in intensity-modulated radiotherapy for sinonasal cancer

    PubMed Central

    Lu, Jia-Yang; Zhang, Ji-Yong; Li, Mei; Cheung, Michael Lok-Man; Li, Yang-Kang; Zheng, Jing; Huang, Bao-Tian; Zhang, Wu-Zhe

    2015-01-01

    Homogeneous target dose distribution in intensity-modulated radiotherapy (IMRT) for sinonasal cancer (SNC) is challenging to achieve. To solve this problem, we established and evaluated a basal-dose-compensation (BDC) optimization approach, in which the treatment plan is further optimized based on the initial plans. Generally acceptable initial IMRT plans for thirteen patients were created and further optimized individually by (1) the BDC approach and (2) a local-dose-control (LDC) approach, in which the initial plan is further optimized by addressing hot and cold spots. We compared the plan qualities, total planning time and monitor units (MUs) among the initial, BDC, LDC IMRT plans and volumetric modulated arc therapy (VMAT) plans. The BDC approach provided significantly superior dose homogeneity/conformity by 23%–48%/6%–9% compared with both the initial and LDC IMRT plans, as well as reduced doses to the organs at risk (OARs) by up to 18%, with acceptable MU numbers. Compared with VMAT, BDC IMRT yielded superior homogeneity, inferior conformity and comparable overall OAR sparing. The planning of BDC, LDC IMRT and VMAT required 30, 59 and 58 minutes on average, respectively. Our results indicated that the BDC optimization approach can achieve significantly better dose distributions with shorter planning time in the IMRT for SNC. PMID:26497620

  14. Optimizing denominator data estimation through a multimodel approach.

    PubMed

    Bryssinckx, Ward; Ducheyne, Els; Leirs, Herwig; Hendrickx, Guy

    2014-05-01

    To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples using spatial modelling based on regression tree forest models, developed using subsets of the Uganda 2008 Livestock Census data, and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and direct estimate, was compared to direct estimates and "true" livestock figures based on the entire dataset. The new approach is shown to effectively increase the livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes and the multimodel approach presented here, indicating a more efficient use of financial resources. PMID:24893035

  15. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision making processes of flood risk management and long term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the required data for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of the real potential of those global remote sensing data, characterized by different accuracies, for global inundation monitoring and how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their placement is a significant issue: the network should provide as much information as possible while keeping redundancy as low as possible. In order to update the layout, the initial network of six sensors was expanded to create a deliberately redundant network over the area. Through an entropy approach, the most informative and least redundant sensors were then chosen from among them. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor has been compared by evaluating their marginal entropy. Those with a low marginal entropy are excluded from the process because of their low capability to provide information. Then the number of sensors has been optimized considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and
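
    The greedy core of such an entropy-based selection is easy to state in code. The sketch below is our illustration with hypothetical names (the paper's actual MOOP couples joint-entropy maximization with a second objective): it discretizes simulated water stages and repeatedly adds the sensor that most increases the joint entropy of the selected set.

        import numpy as np
        from collections import Counter

        def entropy_bits(rows):
            """Empirical (joint) entropy, in bits, of a list of symbol tuples."""
            counts = Counter(rows)
            p = np.array(list(counts.values()), dtype=float)
            p /= p.sum()
            return float(-np.sum(p * np.log2(p)))

        def greedy_sensor_selection(stages, n_select, n_bins=10):
            """stages: (n_times, n_sensors) array of simulated water stages."""
            # quantize each sensor series into n_bins levels
            q = np.array([np.digitize(s, np.histogram(s, n_bins)[1][1:-1])
                          for s in stages.T]).T
            chosen, remaining = [], list(range(stages.shape[1]))
            while len(chosen) < n_select:
                best = max(remaining,
                           key=lambda j: entropy_bits(
                               [tuple(r) for r in q[:, chosen + [j]]]))
                chosen.append(best)
                remaining.remove(best)
            return chosen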

  16. Optimal shield mass distribution for space radiation protection

    NASA Technical Reports Server (NTRS)

    Billings, M. P.

    1972-01-01

    Computational methods have been developed and successfully used for determining the optimum distribution of space radiation shielding on geometrically complex space vehicles. These methods have been incorporated in the computer program SWORD for dose evaluation in complex geometry, and for iteratively calculating the optimum distribution for (minimum) shield mass satisfying multiple acute and protracted dose constraints associated with each of several body organs.

  17. The determination and optimization of (rutile) pigment particle size distributions

    NASA Technical Reports Server (NTRS)

    Richards, L. W.

    1972-01-01

    A light scattering particle size test which can be used with materials having a broad particle size distribution is described. This test is useful for pigments. The relation between the particle size distribution of a rutile pigment and its optical performance in a gray tint test at low pigment concentration is calculated and compared with experimental data.

  18. Optimizing algal cultivation & productivity: an innovative, multidiscipline, and multiscale approach.

    SciTech Connect

    Murton, Jaclyn K.; Hanson, David T.; Turner, Tom; Powell, Amy Jo; James, Scott Carlton; Timlin, Jerilyn Ann; Scholle, Steven; August, Andrew; Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A.

    2010-04-01

    Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple culture scales. This work spans investigations from the single-cell level to ensemble measurements of algal cell cultures at the laboratory benchtop to large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized including identification of spectroscopic signatures for algal culture composition, stress level, and lipid production enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green alga Chlamydomonas with two potential production strains of microalgae, Nannochloropsis and Dunaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy and several of these opportunities will be discussed.

  19. Efficient network meta-analysis: a confidence distribution approach

    PubMed Central

    Yang, Guang; Liu, Dungang; Liu, Regina Y.; Xie, Minge; Hoaglin, David C.

    2014-01-01

    Network meta-analysis synthesizes several studies of multiple treatment comparisons to simultaneously provide inference for all treatments in the network. It can often strengthen inference on pairwise comparisons by borrowing evidence from other comparisons in the network. Current network meta-analysis approaches are derived from either conventional pairwise meta-analysis or hierarchical Bayesian methods. This paper introduces a new approach for network meta-analysis by combining confidence distributions (CDs). Instead of combining point estimators from individual studies as in the conventional approach, the new approach combines CDs, which contain richer information than point estimators, and thus achieves greater efficiency in its inference. The proposed CD approach can efficiently integrate all studies in the network and provide inference for all treatments even when individual studies contain only comparisons of subsets of the treatments. Through numerical studies with real and simulated data sets, the proposed approach is shown to outperform or at least equal the traditional pairwise meta-analysis and a commonly used Bayesian hierarchical model. Although the Bayesian approach may yield comparable results with a suitably chosen prior, it is highly sensitive to the choice of priors (especially the prior of the between-trial covariance structure), which is often subjective. The CD approach is a general frequentist approach and is prior-free. Moreover, it can always provide a proper inference for all the treatment effects regardless of the between-trial covariance structure. PMID:25067933

  20. Practical Framework for an Electron Beam Induced Current Technique Based on a Numerical Optimization Approach

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Hideshi; Soeda, Takeshi

    2015-03-01

    A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, through evaluation of EBIC equipment performance and numerical optimization of equipment settings, consistent acquisition of high-contrast images has become possible, improving reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.

  1. Mapping the distribution of malaria: current approaches and future directions

    USGS Publications Warehouse

    Johnson, Leah R.; Lafferty, Kevin D.; McNally, Amy; Mordecai, Erin A.; Paaijmans, Krijn P.; Pawar, Samraat; Ryan, Sadie J.

    2015-01-01

    Mapping the distribution of malaria has received substantial attention because the disease is a major source of illness and mortality in humans, especially in developing countries. It also has a defined temporal and spatial distribution. The distribution of malaria is most influenced by its mosquito vector, which is sensitive to extrinsic environmental factors such as rainfall and temperature. Temperature also affects the development rate of the malaria parasite in the mosquito. Here, we review the range of approaches used to model the distribution of malaria, from spatially explicit to implicit, mechanistic to correlative. Although current methods have significantly improved our understanding of the factors influencing malaria transmission, significant gaps remain, particularly in incorporating nonlinear responses to temperature and temperature variability. We highlight new methods to tackle these gaps and to integrate new data with models.

  2. Exploring trade-offs between VMAT dose quality and delivery efficiency using a network optimization approach

    NASA Astrophysics Data System (ADS)

    Salari, Ehsan; Wala, Jeremiah; Craft, David

    2012-09-01

    To formulate and solve the fluence-map merging procedure of the recently-published VMAT treatment-plan optimization method, called vmerge, as a bi-criteria optimization problem. Using an exact merging method rather than the previously-used heuristic, we are able to better characterize the trade-off between the delivery efficiency and dose quality. vmerge begins with a solution of the fluence-map optimization problem with 180 equi-spaced beams that yields the ‘ideal’ dose distribution. Neighboring fluence maps are then successively merged, meaning that they are added together and delivered as a single map. The merging process improves the delivery efficiency at the expense of deviating from the initial high-quality dose distribution. We replace the original merging heuristic by considering the merging problem as a discrete bi-criteria optimization problem with the objectives of maximizing the treatment efficiency and minimizing the deviation from the ideal dose. We formulate this using a network-flow model that represents the merging problem. Since the problem is discrete and thus non-convex, we employ a customized box algorithm to characterize the Pareto frontier. The Pareto frontier is then used as a benchmark to evaluate the performance of the standard vmerge algorithm as well as two other similar heuristics. We test the exact and heuristic merging approaches on a pancreas and a prostate cancer case. For both cases, the shape of the Pareto frontier suggests that starting from a high-quality plan, we can obtain efficient VMAT plans through merging neighboring fluence maps without substantially deviating from the initial dose distribution. The trade-off curves obtained by the various heuristics are contrasted and shown to all be equally capable of initial plan simplifications, but to deviate in quality for more drastic efficiency improvements. This work presents a network optimization approach to the merging problem. Contrasting the trade-off curves of the

  3. A Direct Method for Fuel Optimal Maneuvers of Distributed Spacecraft in Multiple Flight Regimes

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Cooley, D. S.; Guzman, Jose J.

    2005-01-01

    We present a method to solve the impulsive minimum fuel maneuver problem for a distributed set of spacecraft. We develop the method assuming a non-linear dynamics model and parameterize the problem to allow the method to be applicable to multiple flight regimes including low-Earth orbits, highly-elliptic orbits (HEO), Lagrange point orbits, and interplanetary trajectories. Furthermore, the approach is not limited by the inter-spacecraft separation distances and is applicable to both small formations as well as large constellations. Semianalytical derivatives are derived for the changes in the total ΔV with respect to changes in the independent variables. We also apply a set of constraints to ensure that the fuel expenditure is equalized over the spacecraft in formation. We conclude with several examples and present optimal maneuver sequences for both a HEO and a libration point formation.

  4. On the practical convergence of coda-based correlations: a window optimization approach

    NASA Astrophysics Data System (ADS)

    Chaput, J.; Clerc, V.; Campillo, M.; Roux, P.; Knox, H.

    2016-02-01

    We present a novel optimization approach to improve the convergence of interstation coda correlation functions towards the medium's empirical Green's function. For two stations recording a series of impulsive events in a multiply scattering medium, we explore the impact of coda window selection through a Markov Chain Monte Carlo scheme, with the aim of generating a gather of correlation functions that is the most coherent and symmetric over events, thus recovering intuitive elements of the interstation Green's function without any nonlinear post-processing techniques. This approach is tested here for a 2-D acoustic finite difference model, where a much improved correlation function is obtained, as well as for a database of small impulsive icequakes recorded on Erebus Volcano, Antarctica, where similar robust results are shown. The average coda solutions, as deduced from the posterior probability distributions of the optimization, are further representative of the scattering strength of the medium, with stronger scattering resulting in a slightly delayed overall coda sampling. The recovery of singly scattered arrivals in the coda of correlation functions is also shown to be possible through this approach, and surface wave reflections from outer craters on Erebus volcano were mapped in this fashion. We also note that, due to the improvement of correlation functions over subsequent events, this approach can further be used to improve the resolution of passive temporal monitoring.

  5. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.

  6. Distributed Generation Dispatch Optimization under Various Electricity Tariffs

    SciTech Connect

    Firestone, Ryan; Marnay, Chris

    2007-05-01

    The on-site generation of electricity can offer building owners and occupiers financial benefits as well as social benefits such as reduced grid congestion, improved energy efficiency, and reduced greenhouse gas emissions. Combined heat and power (CHP), or cogeneration, systems make use of the waste heat from the generator for site heating needs. Real-time optimal dispatch of CHP systems is difficult to determine because of complicated electricity tariffs and uncertainty in CHP equipment availability, energy prices, and system loads. Typically, CHP systems use simple heuristic control strategies. This paper describes a method of determining optimal control in real-time and applies it to a light industrial site in San Diego, California, to examine: 1) the added benefit of optimal over heuristic controls, 2) the price elasticity of the system, and 3) the site-attributable greenhouse gas emissions, all under three different tariff structures. Results suggest that heuristic controls are adequate under the current tariff structure and relatively high electricity prices, capturing 97 percent of the value of the distributed generation system. Even more value could be captured by simply not running the CHP system during times of unusually high natural gas prices. Under hypothetical real-time pricing of electricity, heuristic controls would capture only 70 percent of the value of distributed generation.

  7. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize chaotic maps such as the Logistic map to generate pseudo-random numbers that are mapped to the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs, from the new perspective of both the probability distribution property and the search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COAs. To achieve high efficiency, it is recommended to adopt an appropriate chaotic map that generates chaotic sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
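
    The mechanics are compact enough to show directly. The snippet below (ours; variable names illustrative) generates a Logistic-map sequence, estimates its Lyapunov exponent as the average of ln|f'(x)| = ln|r(1 - 2x)| (approximately ln 2 for r = 4), and maps the values onto a design-variable range, which is essentially how a COA seeds candidate solutions.

        import numpy as np

        def logistic_sequence(n, x0=0.3, r=4.0):
            """Chaotic sequence from the Logistic map x_{k+1} = r x_k (1 - x_k)."""
            x = np.empty(n)
            x[0] = x0
            for k in range(n - 1):
                x[k + 1] = r * x[k] * (1.0 - x[k])
            return x

        x = logistic_sequence(100000)
        # Lyapunov exponent: mean of ln|r (1 - 2x)|; ~ ln 2 = 0.693 for r = 4
        print(np.mean(np.log(np.abs(4.0 * (1.0 - 2.0 * x)))))
        # the invariant density for r = 4 is 1 / (pi * sqrt(x (1 - x))),
        # i.e. strongly non-uniform; compare via a histogram:
        pdf, _ = np.histogram(x, bins=20, range=(0, 1), density=True)
        # map the chaotic values onto a design variable in [lb, ub]
        lb, ub = -5.0, 5.0
        candidates = lb + (ub - lb) * x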

  8. Communication Optimizations for a Wireless Distributed Prognostic Framework

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2009-01-01

    Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme, called parameterized resampling, that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes is also presented. A battery health management system is used as a target application. A new resampling scheme for distributed implementation of particle filters has been discussed in this paper. Analysis and comparison of this new scheme with existing resampling schemes in the context of minimizing communication overhead have also been discussed. Our proposed new resampling scheme performs significantly better than other schemes by attempting to reduce both the communication message length and the total number of communication messages exchanged, while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system as well as full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.

  9. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  10. Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging.

    PubMed

    Liu, Liqiang; Dai, Yuntao; Gao, Jinyu

    2014-01-01

    Ant colony optimization for continuous domains is a major research direction within ant colony optimization algorithms. In this paper, we propose a position distribution model of ant colony foraging, based on an analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on this model and give the form of solution for the algorithm, the distribution model of pheromone, the update rules of ant colony position, and the processing method for constraint conditions. Algorithm performance was tested on a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the results were compared and analyzed against those of other algorithms to verify the correctness and effectiveness of the proposed algorithm. PMID:24955402
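
    The paper derives its own position-distribution model; as a generic illustration of continuous-domain ACO, the sketch below follows the widely used Gaussian-kernel scheme (in the spirit of Socha and Dorigo's ACO_R), with all parameter values and names ours.

        import numpy as np

        def aco_continuous(f, bounds, n_ants=20, k=10, iters=200, q=0.1, xi=0.85):
            """Minimize f over box bounds with a Gaussian-kernel ACO:
            an archive of k solutions plays the role of the pheromone model."""
            rng = np.random.default_rng(0)
            lo, hi = np.array(bounds, dtype=float).T
            X = rng.uniform(lo, hi, size=(k, len(bounds)))
            F = np.apply_along_axis(f, 1, X)
            for _ in range(iters):
                order = np.argsort(F)
                X, F = X[order], F[order]
                w = np.exp(-np.arange(k) ** 2 / (2.0 * (q * k) ** 2))
                w /= w.sum()                        # rank-based selection weights
                for _ in range(n_ants):
                    j = rng.choice(k, p=w)          # pick a guiding solution
                    sigma = xi * np.mean(np.abs(X - X[j]), axis=0) + 1e-12
                    x = np.clip(rng.normal(X[j], sigma), lo, hi)
                    fx = f(x)
                    if fx < F.max():                # replace the current worst
                        worst = np.argmax(F)
                        X[worst], F[worst] = x, fx
            return X[np.argmin(F)], F.min()

        best_x, best_f = aco_continuous(lambda v: float(np.sum(v ** 2)),
                                        bounds=[(-5, 5)] * 3)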

  11. Optimized variable source-profile approach for source apportionment

    NASA Astrophysics Data System (ADS)

    Marmur, Amit; Mulholland, James A.; Russell, Armistead G.

    An expanded chemical mass balance (CMB) approach for PM2.5 source apportionment is presented in which both the local source compositions and corresponding contributions are determined from ambient measurements and initial estimates of source compositions using a global-optimization mechanism. Such an approach can serve as an alternative to using predetermined (measured) source profiles, as traditionally used in CMB applications, which are not always representative of the region and/or time period of interest. Constraints based on ranges of typical source profiles are used to ensure that the compositions identified are representative of sources and are less ambiguous than the factors/sources identified by typical factor analysis (FA) techniques. Gas-phase data (SO2, CO and NOy) are also used, as these data can assist in identifying sources. Impacts of identified sources are then quantified by minimizing the weighted error between apportioned and measured levels of the fitting species. This technique was applied to a dataset of PM2.5 measurements at the former Atlanta Supersite (Jefferson Street site), to apportion PM2.5 mass into nine source categories. Good agreement is found when these source impacts are compared with those derived based on measured source profiles as well as those derived using a current FA technique, Positive Matrix Factorization. The proposed method can be used to assess the representativeness of measured source profiles and to help identify those profiles that may be in significant error, as well as to quantify uncertainties in source-impact estimates, due in part to uncertainties in source compositions.
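
    In its simplest form, the impact-quantification step described here is a weighted least-squares fit with nonnegative source contributions. A minimal sketch (our toy numbers, not Atlanta data):

        import numpy as np
        from scipy.optimize import nnls

        # columns of A: source profiles (fraction of each fitting species per
        # unit PM2.5 from that source); c: measured ambient concentrations;
        # sigma: measurement uncertainties of the fitting species
        A = np.array([[0.05, 0.30, 0.01],     # sulfate
                      [0.02, 0.01, 0.45],     # organic carbon
                      [0.40, 0.02, 0.03]])    # elemental carbon
        c = np.array([2.1, 1.3, 0.8])
        sigma = np.array([0.20, 0.15, 0.10])

        # minimize || W (A s - c) ||^2 subject to s >= 0, with W = diag(1/sigma)
        W = np.diag(1.0 / sigma)
        s, _ = nnls(W @ A, W @ c)
        print("source contributions (ug/m3):", s)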

  12. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  13. Optimizing Distributed Practice: Theoretical Analysis and Practical Implications

    ERIC Educational Resources Information Center

    Cepeda, Nicholas J.; Coburn, Noriko; Rohrer, Doug; Wixted, John T.; Mozer, Michael C.; Pashler, Harold

    2009-01-01

    More than a century of research shows that increasing the gap between study episodes using the same material can enhance retention, yet little is known about how this so-called distributed practice effect unfolds over nontrivial periods. In two three-session laboratory studies, we examined the effects of gap on retention of foreign vocabulary,…

  14. Tomographic Approach in Three-Orthogonal-Basis Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Liang, Wen-Ye; Wen, Hao; Yin, Zhen-Qiang; Chen, Hua; Li, Hong-Wei; Chen, Wei; Han, Zheng-Fu

    2015-09-01

    At present, there is increasing awareness of three-orthogonal-basis quantum key distribution protocols such as the reference-frame-independent (RFI) protocol and the six-state protocol. For secure key rate estimation of these protocols, there are two methods: one is the conventional approach, and the other is the tomographic approach. However, a comparison between these two methods has not been given yet. In this work, with the general model of a rotation channel, we estimate the key rate using the conventional and tomographic methods respectively. Results show that the conventional estimation approach in the RFI protocol is equivalent to the tomographic approach only in the case that one of the three orthogonal bases is always aligned. In other cases, the tomographic approach performs much better than the respective conventional approaches of the RFI protocol and the six-state protocol. Furthermore, based on the experimental data, we illustrate the deep connections between tomography and the conventional RFI approach representations. Supported by the National Basic Research Program of China under Grant Nos. 2011CBA00200 and 2011CB921200 and the National Natural Science Foundation of China under Grant Nos. 60921091, 61475148, and 61201239 and Zhejiang Natural Science Foundation under Grant No. LQ13F050005

  15. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT aeroelastic wing design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories: ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel and provided additional resources to work the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  16. Curricular policy as a collective effects problem: A distributional approach.

    PubMed

    Penner, Andrew M; Domina, Thurston; Penner, Emily K; Conley, AnneMarie

    2015-07-01

    Current educational policies in the United States attempt to boost student achievement and promote equality by intensifying the curriculum and exposing students to more advanced coursework. This paper investigates the relationship between one such effort - California's push to enroll all 8th grade students in Algebra - and the distribution of student achievement. We suggest that this effort is an instance of a "collective effects" problem, where the population-level effects of a policy are different from its effects at the individual level. In such contexts, we argue that it is important to consider broader population effects as well as the difference between "treated" and "untreated" individuals. To do so, we present differences in inverse propensity score weighted distributions to investigate how this curricular policy changed the distribution of student achievement. We find that California's attempt to intensify the curriculum did not raise test scores at the bottom of the distribution, but did lower scores at the top of the distribution. These results highlight the efficacy of inverse propensity score weighting approaches for examining distributional differences, and provide a cautionary tale for curricular intensification efforts and other policies with collective effects. PMID:26004485
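
    The weighting machinery behind such distributional comparisons is brief. A sketch (ours; the propensity scores are assumed to have been estimated elsewhere, e.g. by logistic regression on pre-policy covariates):

        import numpy as np

        def ipw_weights(treated, propensity):
            """Inverse propensity weights: 1/e for treated units, 1/(1-e) for
            untreated, so each group reweights toward the full population."""
            return np.where(treated, 1.0 / propensity, 1.0 / (1.0 - propensity))

        def weighted_quantiles(values, weights, qs=(0.1, 0.5, 0.9)):
            order = np.argsort(values)
            v, w = values[order], weights[order]
            cum = np.cumsum(w) / np.sum(w)
            return [float(v[np.searchsorted(cum, q)]) for q in qs]

        # compare weighted achievement distributions of treated vs. untreated
        treated = np.array([1, 1, 0, 0, 0], dtype=bool)
        e = np.array([0.8, 0.6, 0.5, 0.3, 0.2])
        score = np.array([320.0, 300.0, 290.0, 270.0, 260.0])
        w = ipw_weights(treated, e)
        print(weighted_quantiles(score[treated], w[treated]),
              weighted_quantiles(score[~treated], w[~treated]))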

  17. Optimizing Distributed Energy Resources and building retrofits with the strategic DER-CAM model

    DOE PAGES Beta

    Stadler, M.; Groissböck, M.; Cardoso, G.; Marnay, C.

    2014-08-05

    The pressuring need to reduce the import of fossil fuels as well as the need to dramatically reduce CO2 emissions in Europe motivated the European Commission (EC) to implement several regulations directed to building owners. Most of these regulations focus on increasing the number of energy efficient buildings, both new and retrofitted, since retrofits play an important role in energy efficiency. Overall, this initiative results from the realization that buildings will have a significant impact in fulfilling the 20/20/20-goals of reducing the greenhouse gas emissions by 20%, increasing energy efficiency by 20%, and increasing the share of renewables to 20%, all by 2020. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is an optimization tool used to support DER investment decisions, typically by minimizing total annual costs or CO2 emissions while providing energy services to a given building or microgrid site. This document shows enhancements made to DER-CAM to consider building retrofit measures along with DER investment options. Specifically, building shell improvement options have been added to DER-CAM as alternative or complementary options to investments in other DER such as PV, solar thermal, combined heat and power, or energy storage. The extension of the mathematical formulation required by the new features introduced in DER-CAM is presented and the resulting model is demonstrated at an Austrian Campus building by comparing DER-CAM results with and without building shell improvement options. Strategic investment results are presented and compared to the observed investment decision at the test site. Results obtained considering building shell improvement options suggest an optimal weighted average U value of about 0.53 W/(m2K) for the test site. This result is approximately 25% higher than what is currently observed in the building, suggesting that the retrofits made in 2002 were not optimal. Furthermore, the results obtained with

  18. Optimizing Distributed Energy Resources and building retrofits with the strategic DER-CAM model

    SciTech Connect

    Stadler, M.; Groissböck, M.; Cardoso, G.; Marnay, C.

    2014-08-05

    The pressuring need to reduce the import of fossil fuels as well as the need to dramatically reduce CO2 emissions in Europe motivated the European Commission (EC) to implement several regulations directed to building owners. Most of these regulations focus on increasing the number of energy efficient buildings, both new and retrofitted, since retrofits play an important role in energy efficiency. Overall, this initiative results from the realization that buildings will have a significant impact in fulfilling the 20/20/20-goals of reducing the greenhouse gas emissions by 20%, increasing energy efficiency by 20%, and increasing the share of renewables to 20%, all by 2020. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is an optimization tool used to support DER investment decisions, typically by minimizing total annual costs or CO2 emissions while providing energy services to a given building or microgrid site. This document shows enhancements made to DER-CAM to consider building retrofit measures along with DER investment options. Specifically, building shell improvement options have been added to DER-CAM as alternative or complementary options to investments in other DER such as PV, solar thermal, combined heat and power, or energy storage. The extension of the mathematical formulation required by the new features introduced in DER-CAM is presented and the resulting model is demonstrated at an Austrian Campus building by comparing DER-CAM results with and without building shell improvement options. Strategic investment results are presented and compared to the observed investment decision at the test site. Results obtained considering building shell improvement options suggest an optimal weighted average U value of about 0.53 W/(m2K) for the test site. This result is approximately 25% higher than what is currently observed in the building, suggesting that the retrofits made in 2002 were not optimal. Furthermore

  19. A Rawlsian approach to distribute responsibilities in networks.

    PubMed

    Doorn, Neelke

    2010-06-01

    Due to their non-hierarchical structure, socio-technical networks are prone to the occurrence of the problem of many hands. In the present paper an approach is introduced in which people's opinions on responsibility are empirically traced. The approach is based on the Rawlsian concept of Wide Reflective Equilibrium (WRE) in which people's considered judgments on a case are reflectively weighed against moral principles and background theories, ideally leading to a state of equilibrium. Application of the method to a hypothetical case with an artificially constructed network showed that it is possible to uncover the relevant data to assess a consensus amongst people in terms of their individual WRE. It appeared that the moral background theories people endorse are not predictive for their actual distribution of responsibilities but that they indicate ways of reasoning and justifying outcomes. Two ways of ascribing responsibilities were discerned, corresponding to two requirements of a desirable responsibility distribution: fairness and completeness. Applying the method triggered learning effects, both with regard to conceptual clarification and moral considerations, and in the sense that it led to some convergence of opinions. It is recommended to apply the method to a real engineering case in order to see whether this approach leads to an overlapping consensus on a responsibility distribution which is justifiable to all and in which no responsibilities are left unfulfilled, therewith trying to contribute to the solution of the problem of many hands. PMID:19626463

  20. A Rawlsian Approach to Distribute Responsibilities in Networks

    PubMed Central

    2009-01-01

    Due to their non-hierarchical structure, socio-technical networks are prone to the occurrence of the problem of many hands. In the present paper an approach is introduced in which people’s opinions on responsibility are empirically traced. The approach is based on the Rawlsian concept of Wide Reflective Equilibrium (WRE) in which people’s considered judgments on a case are reflectively weighed against moral principles and background theories, ideally leading to a state of equilibrium. Application of the method to a hypothetical case with an artificially constructed network showed that it is possible to uncover the relevant data to assess a consensus amongst people in terms of their individual WRE. It appeared that the moral background theories people endorse are not predictive for their actual distribution of responsibilities but that they indicate ways of reasoning and justifying outcomes. Two ways of ascribing responsibilities were discerned, corresponding to two requirements of a desirable responsibility distribution: fairness and completeness. Applying the method triggered learning effects, both with regard to conceptual clarification and moral considerations, and in the sense that it led to some convergence of opinions. It is recommended to apply the method to a real engineering case in order to see whether this approach leads to an overlapping consensus on a responsibility distribution which is justifiable to all and in which no responsibilities are left unfulfilled, therewith trying to contribute to the solution of the problem of many hands. PMID:19626463

  1. Sub-Optimal Ensemble Filters and distributed hydrologic modeling: a new challenge in flood forecasting

    NASA Astrophysics Data System (ADS)

    Baroncini, F.; Castelli, F.

    2009-09-01

    Data assimilation techniques based on ensemble filtering are widely regarded as the best approach for solving forecast and calibration problems in geophysical models. Often the implementation of statistically optimal techniques, like the Ensemble Kalman Filter, is unfeasible because of the large number of replicas used in each time step of the model for updating the error covariance matrix. Therefore the sub-optimal approach seems to be a more suitable choice. Various sub-optimal techniques have been tested in atmospheric and oceanographic models, some of them based on the detection of a "null space". Distributed hydrologic models differ from other geo-fluid-dynamics models in some fundamental aspects that make it complex to understand the relative efficiency of the different sub-optimal techniques. Those aspects include threshold processes, preferential trajectories for convection and diffusion, low observability of the main state variables, and high parametric uncertainty. This research study focuses on such topics and explores them through numerical experiments on a continuous hydrologic model, MOBIDIC. This model includes both a water mass balance and a surface energy balance, so it is able to assimilate a wide variety of datasets, from traditional hydrometric "on ground" measurements to land surface temperature retrievals from satellite. The experiments that we present concern a basin of 700 km2 in central Italy, with an hourly dataset over an 8-month period that includes both drought and flood events; in this first set of experiments we worked on a low spatial resolution version of the hydrologic model (3.2 km). A new Kalman Filter based algorithm is presented: this filter tries to address the main challenges of hydrological modeling uncertainty. The proposed filter uses in the forecast step a COFFEE (Complementary Orthogonal Filter For Efficient Ensembles) approach with a propagation of both deterministic and stochastic ensembles to improve robustness and convergence
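
    For readers unfamiliar with the building block being approximated, one stochastic Ensemble Kalman Filter analysis step looks as follows (a textbook sketch, ours; the COFFEE-style filter described above modifies how the ensemble is propagated, which is not shown here):

        import numpy as np

        def enkf_analysis(E, y, H, R, seed=0):
            """E: (n_state, n_members) forecast ensemble; y: observations;
            H: (n_obs, n_state) observation operator; R: obs-error covariance."""
            rng = np.random.default_rng(seed)
            n = E.shape[1]
            A = E - E.mean(axis=1, keepdims=True)        # state anomalies
            HE = H @ E
            HA = HE - HE.mean(axis=1, keepdims=True)     # observed anomalies
            P_yy = HA @ HA.T / (n - 1) + R
            P_xy = A @ HA.T / (n - 1)
            K = P_xy @ np.linalg.inv(P_yy)               # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(    # perturbed observations
                np.zeros(len(y)), R, size=n).T
            return E + K @ (Y - HE)                      # analysis ensemble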

  2. Minimum entropy approach to denoising time-frequency distributions

    NASA Astrophysics Data System (ADS)

    Aviyente, Selin; Williams, William J.

    2001-11-01

    Signals used in time-frequency analysis are usually corrupted by noise. Therefore, denoising the time-frequency representation is a necessity for producing readable time-frequency images. Denoising is defined as the operation of smoothing a noisy signal or image to produce a noise-free representation. Linear smoothing of time-frequency distributions (TFDs) suppresses noise at the expense of considerable smearing of the signal components. For this reason, nonlinear denoising has been preferred. A common example of a nonlinear denoising method is wavelet thresholding. In this paper, we introduce an entropy-based approach to denoising time-frequency distributions. This new approach uses the spectrogram decomposition of time-frequency kernels proposed by Cunningham and Williams. In order to denoise the time-frequency distribution, we combine those spectrograms with the smallest entropy values, thus ensuring that each spectrogram is well concentrated on the time-frequency plane and contains as little noise as possible. Renyi entropy is used as the measure to quantify the complexity of each spectrogram. The threshold for the number of spectrograms to combine is chosen adaptively based on the tradeoff between entropy and variance. The denoised time-frequency distributions for several signals are shown to demonstrate the effectiveness of the method. The improvement in performance is quantitatively evaluated.
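
    The entropy criterion itself is one line. A sketch of the selection step (ours; the adaptive entropy-variance threshold is replaced by a fixed count for brevity), using the third-order Renyi entropy that is customary in time-frequency analysis:

        import numpy as np

        def renyi_entropy(tfd, alpha=3):
            """Order-alpha Renyi entropy (bits) of a TFD normalized to unit mass."""
            p = np.abs(tfd)
            p = p / p.sum()
            return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

        def denoise_by_entropy(spectrograms, n_keep):
            """Combine the n_keep spectrograms with the smallest Renyi entropy,
            i.e. the most concentrated, least noisy components."""
            scores = [renyi_entropy(s) for s in spectrograms]
            keep = np.argsort(scores)[:n_keep]
            return sum(spectrograms[i] for i in keep)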

  3. Optimization of distribution transformer efficiency characteristics. Final report, March 1979

    SciTech Connect

    Not Available

    1980-06-01

    A method for distribution transformer loss evaluation was derived. The total levelized annual cost method was used and was extended to account properly for conditions of energy cost inflation, peak load growth, and transformer changeout during the evaluation period. The loss costs included were the no-load and load power losses, no-load and load reactive losses, and the energy cost of regulation. The demand and energy components of loss costs were treated separately to account correctly for the diversity of load losses and energy cost inflation. The complete distribution transformer loss evaluation equation is shown, with the nomenclature and definitions for the parameters provided. Tasks described are entitled: Establish Loss Evaluation Techniques; Compile System Cost Parameters; Compile Load Parameters and Loading Policies; Develop Transformer Cost/Performance Relationship; Define Characteristics of Multiple Efficiency Transformer Package; Minimize Life Cycle Cost Based on Single Efficiency Characteristic Transformer Design; Minimize Life Cycle Cost Based on Multiple Efficiency Characteristic Transformer Design; and Interpretation.

  4. Co-optimal Distribution of Leaf Nitrogen and Hydraulic Conductance in Plant Canopies

    NASA Astrophysics Data System (ADS)

    Peltoniemi, M.; Medlyn, B. E.; Duursma, R.

    2012-12-01

    Leaf properties vary significantly within plant canopies, due to the strong gradient in light availability through the canopy. Leaves near the canopy top have high nitrogen (N) and phosphorus content per unit leaf area, high leaf mass per area, and high photosynthetic capacity, compared to leaves deeper in the canopy. Variation of leaf properties has been explained by the optimal distribution of resources, particularly nitrogen, throughout the canopy. Studies of the optimal distribution of leaf nitrogen (N) within canopies have shown that, in the absence of other constraints, the optimal distribution of N is proportional to light. This is an important assumption in the big-leaf models of canopy photosynthesis and widely applied in current land-surface models. However, measurements have shown that the gradient of N in real canopies is shallower than the optimal distribution. One thing that has not yet been considered is how the constraints on water supply to leaves influence leaf properties in the canopy. Leaves with high photosynthetic capacity tend to have high stomatal conductance and transpiration rates, which suggests that, for the efficient operation of the canopy, high-light leaves should be serviced by more water. The rate of water transport depends on the hydraulic conductance of the soil-leaf pathway. We extend the work on optimal nitrogen gradients by considering the optimal co-allocation of nitrogen and water supply within plant canopies. We developed a simple "toy" two-leaf canopy model and optimised the distribution of N and hydraulic conductance (K) between the two leaves. We asked whether the hydraulic constraints to water supply can explain shallow N gradients in canopies. We found that the optimal N distribution within plant canopies is proportional to the light distribution only if hydraulic conductance is also optimally distributed. The optimal distribution of K is that where K and N are both proportional to incident light, such that optimal K is
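
    A toy version of this co-allocation problem can be optimized numerically. The functional forms below are illustrative assumptions of ours (saturating assimilation in N, a simple hydraulic supply factor in K), not the authors' model; the point is only to show the structure of the two-leaf optimization.

        import numpy as np
        from scipy.optimize import minimize

        I = np.array([1.0, 0.3])          # relative light on upper/lower leaf
        N_tot, K_tot = 2.0, 2.0           # canopy N and conductance budgets

        def neg_canopy_assimilation(x):
            N = np.array([x[0], N_tot - x[0]])
            K = np.array([x[1], K_tot - x[1]])
            A = I * (1.0 - np.exp(-N))    # assumed light-scaled, N-saturating
            supply = K / (K + 1.0)        # assumed hydraulic limitation factor
            return -float(np.sum(A * supply))

        res = minimize(neg_canopy_assimilation, x0=[1.0, 1.0],
                       bounds=[(0.0, N_tot), (0.0, K_tot)])
        print("N, K allocated to the upper leaf:", res.x)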

  5. Optimization of tomographic reconstruction workflows on geographically distributed resources.

    PubMed

    Bicer, Tekin; Gürsoy, Dogˇa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T

    2016-07-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, the focus is on time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can
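
    The three-stage performance model reduces to a short function. A sketch (ours; all numbers hypothetical) that estimates end-to-end workflow time per resource and picks the fastest:

        def workflow_time(data_gb, bandwidth_gbps, queue_wait_s,
                          n_iterations, secs_per_iteration):
            """Transfer + queue wait + iterative reconstruction compute."""
            transfer_s = data_gb * 8.0 / bandwidth_gbps
            compute_s = n_iterations * secs_per_iteration
            return transfer_s + queue_wait_s + compute_s

        # a fast remote cluster with a long queue vs. a slower local one
        estimates = {
            "remote_hpc": workflow_time(500, 10.0, 1200, 100, 12),
            "local_cluster": workflow_time(500, 1.0, 60, 100, 30),
        }
        best = min(estimates, key=estimates.get)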

  6. Optimizing parametrial aperture design utilizing HDR brachytherapy isodose distribution.

    PubMed

    Chapman, Katherine L; Ohri, Nitin; Showalter, Timothy N; Doyle, Laura A

    2013-03-01

    Treatment of cervical cancer includes a combination of external beam radiation therapy (EBRT) and brachytherapy (BRT). Traditionally, coronal images displaying the dose distribution from a ring and tandem (R&T) implant aid in the construction of parametrial boost fields. This research aimed to evaluate a method of shaping parametrial fields utilizing contours created from the high-dose-rate (HDR) BRT dose distribution. Eleven patients receiving HDR-BRT via R&T were identified. The BRT and EBRT CT scans were sent to FocalSim (v4.62)® and fused based on bony anatomy. The contour of the HDR isodose line was transferred to the EBRT scan. The EBRT scan was sent to CMS-XIO (v4.62)® for planning. This process provides an automated, potentially more accurate method of matching the medial parametrial border to the HDR dose distribution. This allows for a 3D view of dose from HDR-BRT for clinical decision-making, utilizes a paperless process and saves time over the traditional technique. PMID:23634156

  7. Distributed Energy Resources On-Site Optimization for Commercial Buildings with Electric and Thermal Storage Technologies

    SciTech Connect

    Lacommare, Kristina S H; Stadler, Michael; Aki, Hirohisa; Firestone, Ryan; Lai, Judy; Marnay, Chris; Siddiqui, Afzal

    2008-05-15

    The addition of storage technologies such as flow batteries, conventional batteries, and heat storage can improve the economic as well as environmental attractiveness of on-site generation (e.g., PV, fuel cells, reciprocating engines or microturbines operating with or without CHP) and contribute to enhanced demand response. In order to examine the impact of storage technologies on demand response and carbon emissions, a microgrid's distributed energy resources (DER) adoption problem is formulated as a mixed-integer linear program that has the minimization of annual energy costs as its objective function. By implementing this approach in the General Algebraic Modeling System (GAMS), the problem is solved for a given test year at representative customer sites, such as schools and nursing homes, to obtain not only the level of technology investment, but also the optimal hourly operating schedules. This paper focuses on analysis of storage technologies in DER optimization on a building level, with example applications for commercial buildings. Preliminary analysis indicates that storage technologies respond effectively to time-varying electricity prices, i.e., by charging batteries during periods of low electricity prices and discharging them during peak hours. The results also indicate that storage technologies significantly alter the residual load profile, which can contribute to lower carbon emissions depending on the test site, its load profile, and its adopted DER technologies.
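
    Stripped of the investment decisions, the hourly scheduling core of such a model is a small linear program. The sketch below (ours, with made-up prices, loads, and battery parameters; the cited work uses a fuller mixed-integer formulation in GAMS) charges a battery when electricity is cheap and discharges it at the peak:

        import pulp

        prices = [0.08, 0.07, 0.07, 0.12, 0.20, 0.25, 0.18, 0.10]  # $/kWh
        load = [4, 4, 4, 6, 8, 9, 7, 5]                            # kWh/hour
        cap, rate, eff = 10.0, 3.0, 0.9   # battery size, power limit, efficiency

        T = range(len(prices))
        m = pulp.LpProblem("storage_dispatch", pulp.LpMinimize)
        buy = pulp.LpVariable.dicts("buy", T, lowBound=0)
        ch = pulp.LpVariable.dicts("charge", T, lowBound=0, upBound=rate)
        dis = pulp.LpVariable.dicts("discharge", T, lowBound=0, upBound=rate)
        soc = pulp.LpVariable.dicts("soc", T, lowBound=0, upBound=cap)

        m += pulp.lpSum(prices[t] * buy[t] for t in T)   # energy cost objective
        for t in T:
            m += buy[t] + dis[t] == load[t] + ch[t]      # hourly energy balance
            prev = soc[t - 1] if t > 0 else 0.5 * cap    # start half-charged
            m += soc[t] == prev + eff * ch[t] - dis[t]   # state of charge
        m.solve(pulp.PULP_CBC_CMD(msg=False))
        print([round(dis[t].value(), 2) for t in T])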

  8. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    NASA Astrophysics Data System (ADS)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. Identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks, the general one and the aircraft-oriented one, were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improvement of the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.

  9. Optimal exploitation of spatially distributed trophic resources and population stability

    USGS Publications Warehouse

    Basset, A.; Fedele, M.; DeAngelis, D.L.

    2002-01-01

    The relationships between optimal foraging of individuals and population stability are addressed by testing, with a spatially explicit model, the effect of patch departure behaviour on individual energetics and population stability. A factorial experimental design was used to analyse the relevance of the behavioural factor in relation to three factors that are known to affect individual energetics; i.e. resource growth rate (RGR), assimilation efficiency (AE), and body size of individuals. The factorial combination of these factors produced 432 cases, and 1000 replicate simulations were run for each case. Net energy intake rates of the modelled consumers increased with increasing RGR, consumer AE, and consumer body size, as expected. Moreover, through their patch departure behaviour, by selecting the resource level at which they departed from the patch, individuals managed to substantially increase their net energy intake rates. Population stability was also affected by the behavioural factors and by the other factors, but with highly non-linear responses. Whenever resources were limiting for the consumers because of low RGR, large individual body size or low AE, population density at the equilibrium was directly related to the patch departure behaviour; on the other hand, optimal patch departure behaviour, which maximised the net energy intake at the individual level, had a negative influence on population stability whenever resource availability was high for the consumers. The consumer growth rate (r) and numerical dynamics, as well as the spatial and temporal fluctuations of resource density, which were the proximate causes of population stability or instability, were affected by the behavioural factor as strongly or even more strongly than by the other factors considered here. Therefore, patch departure behaviour can act as a feedback control of individual energetics, allowing consumers to optimise a potential trade-off between short-term individual fitness

  10. Method for computing the optimal signal distribution and channel capacity.

    PubMed

    Shapiro, E G; Shapiro, D A; Turitsyn, S K

    2015-06-15

    An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet. PMID:26193496
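
    For reference, the classical Blahut-Arimoto benchmark against which the new method is compared can be written in a dozen lines (our sketch; it assumes every output symbol is reachable):

        import numpy as np

        def blahut_arimoto(P, tol=1e-9, max_iter=1000):
            """Capacity (bits/use) of a discrete memoryless channel,
            P[x, y] = p(y|x), plus the capacity-achieving input law."""
            r = np.full(P.shape[0], 1.0 / P.shape[0])
            for _ in range(max_iter):
                q = r[:, None] * P
                q /= q.sum(axis=0, keepdims=True)        # posterior p(x|y)
                r_new = np.prod(q ** P, axis=1)
                r_new /= r_new.sum()
                done = np.max(np.abs(r_new - r)) < tol
                r = r_new
                if done:
                    break
            py = r @ P
            C = float(np.sum(r[:, None] * P *
                             np.log2(P / py[None, :],
                                     where=P > 0, out=np.zeros_like(P))))
            return C, r

        # binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531
        print(blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))[0])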

  11. Curricular Policy as a Collective Effects Problem: A Distributional Approach

    PubMed Central

    Penner, Andrew M.; Domina, Thurston; Penner, Emily K.; Conley, AnneMarie

    2015-01-01

    Current educational policies in the United States attempt to boost student achievement and promote equality by intensifying the curriculum and exposing students to more advanced coursework. This paper investigates the relationship between one such effort -- California's push to enroll all 8th grade students in Algebra -- and the distribution of student achievement. We suggest that this effort is an instance of a “collective effects” problem, where the population-level effects of a policy are different from its effects at the individual level. In such contexts, we argue that it is important to consider broader population effects as well as the difference between “treated” and “untreated” individuals. To do so, we present differences in inverse propensity score weighted distributions to investigate how this curricular policy changed the distribution of student achievement more broadly. We find that California's attempt to intensify the curriculum did not raise test scores at the bottom of the distribution, but did lower scores at the top of the distribution. These results highlight the efficacy of inverse propensity score weighting approaches for estimating collective effects, and provide a cautionary tale for curricular intensification efforts and other policies with collective effects. PMID:26004485
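
    As a rough illustration of the distributional idea, the sketch below reweights treated outcomes by inverse propensity scores and reads off weighted quantiles rather than a single mean effect. The data, covariate, and propensity model are synthetic stand-ins, not the paper's data (in practice the propensity would be estimated, e.g. by logistic regression):

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=5000)                        # prior achievement (covariate)
        p = 1.0 / (1.0 + np.exp(-x))                     # propensity to enroll (assumed known here)
        treated = rng.random(5000) < p
        score = 0.5 * x + 0.2 * treated + rng.normal(size=5000)

        w = 1.0 / p[treated]                             # inverse propensity weights

        def weighted_quantile(v, weights, q):
            order = np.argsort(v)
            cw = np.cumsum(weights[order]) / weights.sum()
            return np.interp(q, cw, v[order])

        for q in (0.1, 0.5, 0.9):                        # reweighted treated distribution
            print(q, weighted_quantile(score[treated], w, q))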

  12. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
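
    The note's multiresolution gradient-descent scheme is too involved for a short sketch, but the one-dimensional case makes the underlying object concrete: there the L2-optimal map has the closed form T = F_target^{-1} composed with F_source (monotone CDF matching). A small empirical version, with arbitrarily chosen distributions:

        import numpy as np

        src = np.sort(np.random.default_rng(0).normal(0.0, 1.0, 5000))   # samples of mu
        tgt = np.sort(np.random.default_rng(1).gamma(2.0, 1.0, 5000))    # samples of nu

        def transport_map(x):
            u = np.searchsorted(src, x) / len(src)        # empirical F_source(x)
            return np.quantile(tgt, np.clip(u, 0, 1))     # F_target^{-1}(u)

        print(transport_map(np.array([-1.0, 0.0, 1.0])))  # pushes mu forward onto nu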

  13. Optimizing Spillway Capacity With an Estimated Distribution of Floods

    NASA Astrophysics Data System (ADS)

    Resendiz-Carrillo, Daniel; Lave, Lester B.

    1987-11-01

    A model of social cost minimizing spillway capacity for dams is constructed using (1) the estimated distribution of peak flows from historical data, (2) the estimated relationship between spillway capacity and cost, and (3) a characterization of downstream flood damage from dam failure. Net social cost is the sum of construction costs and expected flood damage. This model is applied to data for the Rio Grande River at Embudo, New Mexico. Minimum social cost is attained at a spillway capacity much smaller than that needed to handle a probable maximum flood.
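
    A toy version of the net-social-cost calculation can be sketched directly from the three ingredients listed above; every number below (flow distribution, cost curve, damage figure) is a made-up placeholder, not data from the Rio Grande study:

        import numpy as np
        from scipy import optimize, stats

        flood = stats.gumbel_r(loc=500.0, scale=150.0)    # fitted peak-flow distribution (m3/s)

        def construction_cost(capacity):
            return 2.0e5 + 300.0 * capacity               # assumed spillway cost curve ($)

        def expected_damage(capacity, damage=5.0e6):
            # dam fails (causing fixed downstream damage) when peak flow exceeds capacity
            return damage * flood.sf(capacity)

        def net_social_cost(capacity):
            return construction_cost(capacity) + expected_damage(capacity)

        res = optimize.minimize_scalar(net_social_cost, bounds=(500.0, 3000.0), method="bounded")
        print(f"cost-minimizing capacity ~ {res.x:.0f} m3/s")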

  14. Optimizing spillway capacity with an estimated distribution of floods

    SciTech Connect

    Resendiz-Carrillo, D.; Lave, L.B.

    1987-11-01

    A model of social cost minimizing spillway capacity for dams is constructed using (1) the estimated distribution of peak flows from historical data, (2) the estimated relationship between spillway capacity and cost, and (3) a characterization of downstream flood damage from dam failure. Net social cost is the sum of construction costs and expected flood damage. This model is applied to data for the Rio Grande River at Embudo, New Mexico. Minimum social cost is attained at a spillway capacity much smaller than that needed to handle a probable maximum flood.

  15. The optimization of measurement device independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Gao, Feng; Ma, Hai-Qiang; Jiao, Rong-Zhen

    2016-04-01

    Measurement device independent quantum key distribution (MDI-QKD) is a promising method for realistic quantum communication which could remove all the side-channel attacks arising from imperfections of the devices. In this study, we theoretically analyzed the performance of the MDI-QKD system. The asymptotic key rate as a function of transmission distance is calculated for different values of polarization misalignment, background count rate, and signal intensity. The results may provide important parameters for the practical application of quantum communications.

  16. A novel, optimized approach of voxel division for water vapor tomography

    NASA Astrophysics Data System (ADS)

    Yao, Yibin; Zhao, Qingzhi

    2016-03-01

    Water vapor information with high spatial and temporal resolution can be acquired using the Global Navigation Satellite System (GNSS) water vapor tomography technique. Usually, the targeted tomographic area is discretized into a number of voxels, and the water vapor distribution is reconstructed from a large number of GNSS signals that penetrate the entire tomographic area. Owing to the geographic distribution of receivers and the geometry of the satellite constellation, many voxels located at the bottom and sides of the research area are not crossed by any signal, which undermines the quality of the tomographic result. To alleviate this problem, a novel, optimized approach to voxel division is proposed here which increases the number of voxels crossed by signals. On the vertical axis, a 3D water vapor profile derived from many years of radiosonde data is utilized to identify the maximum height of the tomography space. On the horizontal axis, the total number of voxels crossed by signals is increased, based on the concept of non-uniform symmetrical division of horizontal voxels. In this study, tomographic experiments are implemented using GPS data from the Hong Kong Satellite Positioning Reference Station Network, and the tomographic result is compared with water vapor derived from radiosonde data and the European Centre for Medium-Range Weather Forecasts (ECMWF). The results show that the Integrated Water Vapour (IWV), RMS, and error distribution of the proposed approach are better than those of the traditional method.

  17. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    SciTech Connect

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-17

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of an optimal parametric combination, to achieve desired quality indicators (related to bead geometry) of submerged arc welds of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. The Taguchi method followed by the utility concept has been adopted to evaluate the optimal process condition achieving the multiple objective requirements of the desired weld quality.

  18. On the optimality of individual entangling-probe attacks against BB84 quantum key distribution

    NASA Astrophysics Data System (ADS)

    Herbauts, I. M.; Bettelli, S.; Hübel, H.; Peev, M.

    2008-02-01

    Some MIT researchers [Phys. Rev. A 75, 042327 (2007)] have recently claimed that their implementation of the Slutsky-Brandt attack [Phys. Rev. A 57, 2383 (1998); Phys. Rev. A 71, 042312 (2005)] on the BB84 quantum-key-distribution (QKD) protocol puts the security of this protocol “to the test” by simulating “the most powerful individual-photon attack” [Phys. Rev. A 73, 012315 (2006)]. A related unfortunate news feature by a scientific journal [G. Brumfiel, Quantum cryptography is hacked, News @ Nature (April 2007); Nature 447, 372 (2007)] has spurred some concern in the QKD community and among the general public by misinterpreting the implications of this work. The present article proves the existence of a stronger individual attack on QKD protocols with encrypted error correction, for which tight bounds are shown, and clarifies why the claims of the news feature incorrectly suggest a contradiction with the established “old-style” theory of BB84 individual attacks. The full implementation of a quantum cryptographic protocol includes a reconciliation and a privacy-amplification stage, whose choice alters in general both the maximum extractable secret and the optimal eavesdropping attack. The authors of [Phys. Rev. A 75, 042327 (2007)] are concerned only with the error-free part of the so-called sifted string, and do not consider faulty bits, which, in the version of their protocol, are discarded. When using the provably superior reconciliation approach of encrypted error correction (instead of error discard), the Slutsky-Brandt attack is no longer optimal and does not “threaten” the security bound derived by Lütkenhaus [Phys. Rev. A 59, 3301 (1999)]. It is shown that the method of Slutsky and collaborators [Phys. Rev. A 57, 2383 (1998)] can be adapted to reconciliation with error correction, and that the optimal entangling probe can be explicitly found. Moreover, this attack saturates the Lütkenhaus bound, proving that it is tight (a fact which was not

  19. An Informatics Approach to Demand Response Optimization in Smart Grids

    SciTech Connect

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low-latency response, Cloud platforms for scalable operations, and privacy policies to mitigate information leakage in an information-rich environment. Such an informatics approach is being used in the DoE-sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  20. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

    A method is presented for the in-flight optimization of the lift distribution across the wing for minimum drag of an aircraft in formation flight. The usual elliptical distribution that is optimal for a given wing with a given span is no longer optimal for the trailing wing in a formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum combined induced and profile drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
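
    Flight-test details aside, the controller's core loop is a Newton-Raphson step on a measured drag surrogate. A minimal sketch, with an assumed quadratic drag model standing in for the in-flight measurement:

        import numpy as np

        def drag(u):                      # u = [aileron, flap] trim deflections (deg)
            H = np.array([[2.0, 0.3], [0.3, 1.0]])
            u_star = np.array([3.0, -1.5])           # "unknown" optimal trim (assumed)
            return 100.0 + 0.5 * (u - u_star) @ H @ (u - u_star)

        def finite_diff(f, u, h=1e-3):
            # central-difference gradient and Hessian of the measured objective
            n = len(u)
            g = np.zeros(n)
            H = np.zeros((n, n))
            for i in range(n):
                ei = np.zeros(n); ei[i] = h
                g[i] = (f(u + ei) - f(u - ei)) / (2 * h)
                for j in range(n):
                    ej = np.zeros(n); ej[j] = h
                    H[i, j] = (f(u + ei + ej) - f(u + ei - ej)
                               - f(u - ei + ej) + f(u - ei - ej)) / (4 * h * h)
            return g, H

        u = np.zeros(2)
        for _ in range(5):
            g, H = finite_diff(drag, u)
            u = u - np.linalg.solve(H, g)   # Newton-Raphson step toward minimum drag
        print(u)                            # converges to the optimal trim [3.0, -1.5]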

  1. Experiments with ROPAR, an approach for probabilistic analysis of the optimal solutions' robustness

    NASA Astrophysics Data System (ADS)

    Marquez, Oscar; Solomatine, Dimitri

    2016-04-01

    Robust optimization is defined as the search for solutions and performance results which remain reasonably unchanged when exposed to uncertain conditions such as natural variability in input variables, parameter drifts during operation time, model sensitivities and others [1]. In the present study we follow the approach named ROPAR (multi-objective robust optimization allowing for explicit analysis of robustness; see online publication [2]). Its main idea is: (a) sampling the vectors of uncertain factors; (b) solving the MOO problem for each of them, obtaining multiple Pareto sets; (c) analysing the statistical properties (distributions) of the subsets of these Pareto sets corresponding to different conditions (e.g. based on constraints formulated on the objective function values or other system variables); (d) selecting the robust solutions. The paper presents the results of experiments with two case studies: 1) the benchmark function ZDT1 (with an uncertain factor), often used in algorithm comparisons, and 2) a problem of drainage network rehabilitation that uses the SWMM hydrodynamic model (the rainfall is assumed to be an uncertain factor). This study is partly supported by the FP7 European Project WeSenseIt Citizen Water Observatory (www.http://wesenseit.eu/) and CONACYT (Mexico's National Council of Science and Technology), supporting the PhD study of the first author. References [1] H.G. Beyer and B. Sendhoff. "Robust optimization - A comprehensive survey." Comput. Methods Appl. Mech. Engrg., 2007: 3190-3218. [2] D.P. Solomatine (2012). An approach to multi-objective robust optimization allowing for explicit analysis of robustness (ROPAR). UNESCO-IHE. Online publication. Web: https://www.unesco-ihe.org/sites/default/files/solomatine-ropar.pdf
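
    Steps (a)-(d) can be illustrated compactly on ZDT1. In the sketch below, the uncertain factor delta is assumed to shift ZDT1's g function (an assumption, not the paper's exact formulation); each sample then yields an analytic Pareto front, and the spread of the fronts at a fixed f1 slice is summarized:

        import numpy as np

        rng = np.random.default_rng(0)

        def zdt1_front(delta, n_pts=50):
            # For ZDT1 the Pareto set has x2..xn = 0, so g = 1; delta perturbs g
            f1 = np.linspace(0.0, 1.0, n_pts)
            g = 1.0 + delta
            f2 = g * (1.0 - np.sqrt(f1 / g))
            return np.column_stack([f1, f2])

        fronts = [zdt1_front(d) for d in rng.uniform(0.0, 0.3, size=200)]   # steps (a)+(b)
        stack = np.stack(fronts)                                            # 200 Pareto sets
        # step (c): distribution of f2 at the f1 ~ 0.5 slice; step (d): pick robust points
        f2_at_mid = stack[:, 25, 1]
        print(f"f2 at f1~0.5: mean={f2_at_mid.mean():.3f}, "
              f"95th pct={np.percentile(f2_at_mid, 95):.3f}")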

  2. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    Modern distributed hydrological models allow the representation of different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized here as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built in order to guarantee the conditions of global convergence and suitable for the parallel and multi-start implementation presented here. The third one is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete balance
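
    The same black-box setup is easy to reproduce in miniature: the sketch below calibrates a two-parameter toy runoff model with Nelder-Mead, treating the simulator purely as a cost function. The model and synthetic data are stand-ins for MOBIDIC, not the study's setup:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        rain = rng.gamma(2.0, 2.0, size=200)

        def simulate(params, rain):
            k, c = params                       # recession constant, runoff coefficient
            q = np.zeros_like(rain); s = 0.0
            for t, p in enumerate(rain):
                s += c * p                      # storage gains effective rainfall
                q[t] = k * s                    # outflow proportional to storage
                s -= q[t]
            return q

        obs = simulate([0.3, 0.6], rain) + rng.normal(0, 0.05, size=200)
        cost = lambda th: np.sum((simulate(th, rain) - obs) ** 2)   # black-box cost
        res = minimize(cost, x0=[0.5, 0.5], method="Nelder-Mead")   # derivative-free
        print(res.x)                            # recovers approximately [0.3, 0.6]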

  3. Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.

    PubMed

    Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P

    2013-04-01

    Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on-the-fly by the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes, where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, in comparison to 4 CPU years required for an exhaustive brute-force calculation. PMID:26583559

  4. A Novel Paradigm for Computer-Aided Design: TRIZ-Based Hybridization of Topologically Optimized Density Distributions

    NASA Astrophysics Data System (ADS)

    Cardillo, A.; Cascini, G.; Frillici, F. S.; Rotini, F.

    In a recent project the authors have proposed the adoption of Optimization Systems [1] as a bridging element between Computer-Aided Innovation (CAI) and PLM to identify geometrical contradictions [2], a particular case of the TRIZ physical contradiction [3]. A further development of the research [4] has revealed that the solutions obtained from several topological optimizations can be considered as elementary customized modeling features for a specific design task. The topology overcoming the arising geometrical contradiction can be obtained through a manipulation of the density distributions constituting the conflicting pair. Two strategies of density combination have already been identified as capable of solving geometrical contradictions, and several others are under extended testing. The paper illustrates the most recent results of the ongoing research, mainly related to the extension of the algorithms from 2D to 3D design spaces. The whole approach is clarified by means of two detailed examples, where the proposed technique is compared with classical multi-goal optimization.

  5. Data Collection for Mobile Group Consumption: An Asynchronous Distributed Approach.

    PubMed

    Zhu, Weiping; Chen, Weiran; Hu, Zhejie; Li, Zuoyou; Liang, Yue; Chen, Jiaojiao

    2016-01-01

    Mobile group consumption refers to consumption by a group of people, such as a couple, a family, colleagues or friends, based on mobile communications. It differs from consumption involving only individuals, because of the complex relations among group members. Existing data collection systems for mobile group consumption are centralized, which has the disadvantages of being a performance bottleneck, having a single point of failure and increasing business and security risks. Moreover, these data collection systems are based on a synchronized clock, which is often unrealistic because of hardware constraints, privacy concerns or synchronization cost. In this paper, we propose the first asynchronous distributed approach to collecting data generated by mobile group consumption. We formally built a system model based on asynchronous distributed communication. We then designed a simulation system for the model, for which we propose a three-layer solution framework. After that, we describe how to detect the causality relation of two or three gathering events that happened in the system based on the collected data. Various definitions of causality relations based on asynchronous distributed communication are supported. Extensive simulation results show that the proposed approach is effective for data collection relating to mobile group consumption. PMID:27058544
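
    The abstract does not specify the causality machinery; the standard tool for deciding causality between events gathered from asynchronous nodes is the vector clock and the happened-before relation, sketched here as one plausible reading:

        def happened_before(vc_a, vc_b):
            """True if event A causally precedes event B (vector clocks as equal-length lists)."""
            return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

        def concurrent(vc_a, vc_b):
            # neither event causally precedes the other
            return not happened_before(vc_a, vc_b) and not happened_before(vc_b, vc_a)

        # three processes; clocks recorded when two gathering events were observed
        e1, e2 = [2, 1, 0], [3, 1, 4]
        print(happened_before(e1, e2))      # True: e1 precedes e2
        print(concurrent(e1, [0, 2, 1]))    # True: no causal order either way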

  6. Determination and optimization of spatial samples for distributed measurements.

    SciTech Connect

    Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  7. Selecting radiotherapy dose distributions by means of constrained optimization problems.

    PubMed

    Alfonso, J C L; Buttazzo, G; García-Archilla, B; Herrero, M A; Núñez, L

    2014-05-01

    The main steps in planning radiotherapy consist of selecting, for any patient diagnosed with a solid tumor, (i) a prescribed radiation dose on the tumor, (ii) bounds on the radiation side effects on nearby organs at risk and (iii) a fractionation scheme specifying the number and frequency of therapeutic sessions during treatment. The goal of any radiotherapy treatment is to deliver on the tumor a radiation dose as close as possible to that selected in (i), while at the same time conforming to the constraints prescribed in (ii). To this day, considerable uncertainties remain concerning the best manner in which such issues should be addressed. In particular, the choice of a prescription radiation dose is mostly based on clinical experience accumulated on the particular type of tumor considered, without any direct reference to quantitative radiobiological assessment. Interestingly, mathematical models for the effect of radiation on biological matter have existed for quite some time, and are widely acknowledged by clinicians. However, the difficulty of obtaining accurate in vivo measurements of the radiobiological parameters involved has severely restricted their direct application in current clinical practice. In this work, we first propose a mathematical model to select radiation dose distributions as solutions (minimizers) of suitable variational problems, under the assumption that the key radiobiological parameters for the tumors and organs at risk involved are known. Second, by analyzing the dependence of such solutions on the parameters involved, we then discuss the manner in which the use of those minimizers can improve current decision-making processes to select clinical dosimetries when (as is generally the case) only partial information on model radiosensitivity parameters is available. A comparison of the proposed radiation dose distributions with those actually delivered in a number of clinical cases strongly suggests that solutions of our mathematical model can be

  8. High-power CSI-fed induction motor drive with optimal power distribution based control

    NASA Astrophysics Data System (ADS)

    Kwak, S.-S.

    2011-11-01

    In this article, a current source inverter (CSI) fed induction motor drive with an optimal power distribution control is proposed for high-power applications. The CSI-fed drive is configured with a six-step CSI along with a pulsewidth modulated voltage source inverter (PWM-VSI) and capacitors. Due to the PWM-VSI and the capacitor, sinusoidal motor currents and voltages with high quality as well as natural commutation of the six-step CSI can be obtained. Since this CSI-fed drive can deliver required output power through both the six-step CSI and PWM-VSI, this article shows that the kVA ratings of both the inverters can be reduced by proper real power distribution. The optimal power distribution under load requirements, based on power flow modelling of the CSI-fed drive, is proposed to not only minimise the PWM-VSI rating but also reduce the six-step CSI rating. The dc-link current control of the six-step CSI is developed to realise the optimal power distribution. Furthermore, a vector controlled drive for high-power induction motors is proposed based on the optimal power distribution. Experimental results verify the high-power CSI-fed drive with the optimal power distribution control.

  9. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.

  10. A maximum likelihood approach to jointly estimating seasonal and annual flood frequency distributions

    NASA Astrophysics Data System (ADS)

    Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Blöschl, G.

    2012-04-01

    Flood frequency analysis is often used by practitioners to support the design of river engineering works, flood mitigation procedures and civil protection strategies. It is often carried out at the annual time scale, by fitting observations of annual maximum peak flows. However, in many cases one is also interested in inferring the flood frequency distribution for given intra-annual periods, for instance when one needs to estimate the risk of flood in different seasons. Such information is needed, for instance, when planning the schedule of river engineering works whose building area is in close proximity to the river bed for several months. A key issue in seasonal flood frequency analysis is to ensure the compatibility between intra-annual and annual flood probability distributions. We propose an approach to jointly estimate the parameters of the seasonal and annual probability distributions of floods. The approach is based on the preliminary identification of an optimal number of seasons within the year, which is carried out by analysing the timing of flood flows. Then, parameters of the intra-annual and annual flood distributions are jointly estimated by using (a) an approximate optimisation technique and (b) a formal maximum likelihood approach. The proposed methodology is applied to some case studies for which extended hydrological information is available at annual and seasonal scales.
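
    A minimal version of the joint estimation can be sketched under the simplifying assumption of two independent seasons with Gumbel-distributed maxima, so that compatibility means F_annual(x) = F_season1(x) * F_season2(x). All data below are synthetic, and the formulation is an illustration rather than the paper's exact likelihood:

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(2)
        s1 = stats.gumbel_r(loc=100, scale=30).rvs(60, random_state=rng)   # season-1 maxima
        s2 = stats.gumbel_r(loc=140, scale=40).rvs(60, random_state=rng)   # season-2 maxima

        def neg_loglik(theta):
            l1, sc1, l2, sc2 = theta
            if sc1 <= 0 or sc2 <= 0:
                return np.inf
            return -(stats.gumbel_r.logpdf(s1, l1, sc1).sum()
                     + stats.gumbel_r.logpdf(s2, l2, sc2).sum())

        theta = optimize.minimize(neg_loglik, [90, 25, 130, 35], method="Nelder-Mead").x
        # annual exceedance probability implied by the fitted seasonal distributions
        x = 250.0
        F_annual = (stats.gumbel_r.cdf(x, theta[0], theta[1])
                    * stats.gumbel_r.cdf(x, theta[2], theta[3]))
        print(f"P(annual max > {x}) = {1 - F_annual:.4f}")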

  11. An optimized web-based approach for collaborative stereoscopic medical visualization

    PubMed Central

    Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C

    2013-01-01

    Objective: Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods: Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results: We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion: The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions: Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three

  12. A Distributed Flocking Approach for Information Stream Clustering Analysis

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2006-01-01

    Intelligence analysts are currently overwhelmed with the amount of information streams generated every day. There is a lack of comprehensive tools that can analyze information streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to analyzing static document collections, because they normally require a large amount of computational resources and a long time to obtain accurate results. It is very difficult to cluster dynamically changing text information streams on an individual computer. Our early research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for cluster analysis of dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach for text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.

  13. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm - probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice - conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural network training turns out to be superior to other
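
    Of the methods listed, the first version of Differential Evolution is the simplest to sketch. The following is a minimal classic DE/rand/1/bin on a toy objective standing in for the network training loss; the population size, F and CR values are conventional defaults, not the study's settings:

        import numpy as np

        def de(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    # mutation: three distinct individuals other than i
                    a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                             3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    # binomial crossover with at least one mutant gene
                    cross = rng.random(len(lo)) < CR
                    cross[rng.integers(len(lo))] = True
                    trial = np.where(cross, mutant, pop[i])
                    ft = f(trial)
                    if ft < fit[i]:                 # greedy selection
                        pop[i], fit[i] = trial, ft
            return pop[fit.argmin()], fit.min()

        sphere = lambda x: float(np.sum(x ** 2))
        best, val = de(sphere, np.array([[-5.0, 5.0]] * 4))
        print(best, val)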

  14. Reliability Performance Optimization of Meshed Electrical Distribution System Considering Customer and Energy based Reliability Indices

    NASA Astrophysics Data System (ADS)

    Arya, L. D.; Kela, K. B.

    2013-12-01

    This paper describes a methodology for determining the optimum failure rate and repair time for each component of a meshed distribution system. In this paper the reliability indices for a sample meshed network are optimized. An objective function incorporating customer- and energy-based reliability indices and their target values is formulated. These indices are functions of the failure rate and repair time of each section of the distribution network. Modifying the failure rate and repair time modifies the cost attached to them. Hence the objective function is optimized by modifying the failure rate and repair time of each section of the meshed distribution system, while accounting for a constraint on the allocated budget. The problem has been solved using population-based differential evolution and bare bones particle swarm optimization techniques, and the results have been compared for a sample meshed distribution system.

  15. An effective approach to optimizing the parameters of complex thermal power plants

    NASA Astrophysics Data System (ADS)

    Kler, A. M.; Zharkov, P. V.; Epishkin, N. O.

    2016-03-01

    A new approach has been developed for solving optimization problems over the continuous parameters of thermal power plants. It is based on organizing the optimization so that the system of equations describing the thermal power plant is solved only at the endpoint of the optimization process. Using the example of optimizing the parameters of a coal-fired power unit with ultra-supercritical steam parameters, the efficiency of the proposed approach is demonstrated and compared with the previously used approach, in which the system of equations was solved at each iteration of the optimization process.
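
    The organizing idea, as we read it, resembles an infeasible-path formulation: the plant's describing equations are handed to the optimizer as equality constraints instead of being solved at every iteration, so they hold only at the final point. A toy sketch with a one-equation stand-in model (the model, objective and names are assumptions for illustration):

        import numpy as np
        from scipy.optimize import minimize

        def h(z):             # z = [p, x]: model residual, e.g. a steady-state balance
            p, x = z
            return x - np.tanh(3.0 * p)           # "simulation" equation x = tanh(3p)

        def cost(z):
            p, x = z
            return (x - 0.9) ** 2 + 0.1 * p ** 2  # plant objective over parameter and state

        res = minimize(cost, x0=[0.0, 0.0], method="SLSQP",
                       constraints={"type": "eq", "fun": h})
        print(res.x)          # the model equation holds only at this final point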

  16. A Study on Machine Maintenance Scheduling Using Distributed Cooperative Approach

    NASA Astrophysics Data System (ADS)

    Tsujibe, Akihisa; Kaihara, Toshiya; Fujii, Nobutada; Nonaka, Youichi

    In this study, we propose a distributed cooperative scheduling method and apply it to a machine maintenance scheduling problem in re-entrant production systems. Among distributed cooperative scheduling methods, we focus on the Lagrangian decomposition and coordination (LDC) method, and formulate the machine maintenance scheduling problem with LDC so as to improve computational efficiency by decomposing the original scheduling problem into several sub-problems. The solutions derived by solving the decomposed dual problem are converted into feasible solutions with a heuristic procedure. The proposed approach regards maintenance as a job with starting- and finishing-time constraints, so that the production and maintenance schedules realize proper maintenance operations without losing productivity. We show the effectiveness of the proposed method in several simulation experiments.

  17. A distributed approach to alarm management in chronic kidney disease.

    PubMed

    Estudillo-Valderrama, Miguel A; Talaminos-Barroso, Alejandro; Roa, Laura M; Naranjo-Hernández, David; Reina-Tosina, Javier; Aresté-Fosalba, Nuria; Milán-Martín, José A

    2014-11-01

    This paper presents a feasibility study of using a distributed approach for the management of alarms from chronic kidney disease patients. First, the key issues regarding alarm definition, classification, and prioritization according to available normalization efforts are analyzed for the main scenarios addressed in hemodialysis. Then, the middleware proposed for alarm management is described, which follows the publish/subscribe pattern and supports the Object Management Group data distribution service (DDS) standard. This standard facilitates the real-time monitoring of the exchanged information, as well as the scalability and interoperability of the developed solution with regard to the different stakeholders and resources involved. Finally, the results section shows, through the proof of concept studied, the viability of DDS for the activation of emergency protocols in terms of alarm prioritization and personalization, as well as some remarks about security, privacy, and real-time communication performance. PMID:25014977

  18. TH-C-BRD-10: An Evaluation of Three Robust Optimization Approaches in IMPT Treatment Planning

    SciTech Connect

    Cao, W; Randeniya, S; Mohan, R; Zaghian, M; Kardar, L; Lim, G; Liu, W

    2014-06-15

    Purpose: Various robust optimization approaches have been proposed to ensure the robustness of intensity modulated proton therapy (IMPT) in the face of uncertainty. In this study, we aim to investigate the performance of three classes of robust optimization approaches regarding plan optimality and robustness. Methods: Three robust optimization models were implemented in our in-house IMPT treatment planning system: 1) L2 optimization based on worst-case dose; 2) L2 optimization based on a minmax objective; and 3) L1 optimization with constraints on all uncertain doses. The first model was solved by an L-BFGS algorithm; the second by a gradient projection algorithm; and the third by an interior point method. One nominal scenario and eight maximum uncertainty scenarios (proton range overshoot and undershoot of 3.5%, and setup errors of 5 mm in the x, y and z directions) were considered in optimization. Dosimetric measurements of optimized plans from the three approaches were compared for four prostate cancer patients retrospectively selected at our institution. Results: For the nominal scenario, all three optimization approaches yielded the same coverage of the clinical treatment volume (CTV), and the L2 worst-case approach demonstrated better rectum and bladder sparing than the others. For the uncertainty scenarios, the L1 approach resulted in the most robust CTV coverage against uncertainties, while the plans from the L2 worst-case approach were less robust than the others. In addition, we observed that the number of scanning spots with positive MUs from the L2 approaches was approximately twice that from the L1 approach. This indicates that L1 optimization may lead to more efficient IMPT delivery. Conclusion: Our study indicated that the L1 approach best conserved the target coverage in the face of uncertainty, but its resulting OAR sparing was slightly inferior to the other two approaches.
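
    A toy version of the worst-case-dose idea (model 1, as we read the abstract) is sketched below: nonnegative spot weights are optimized so that the voxel-wise coldest dose across scenarios matches the prescription. The influence matrices are random stand-ins, not clinical data, and the solver choice is illustrative:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n_vox, n_spots, n_scen = 40, 15, 9
        D = rng.uniform(0.0, 1.0, size=(n_scen, n_vox, n_spots))  # per-scenario influence
        d_rx = np.ones(n_vox)                                     # prescribed dose

        def objective(w):
            doses = D @ w                   # (n_scen, n_vox) dose per uncertainty scenario
            worst = doses.min(axis=0)       # worst case = coldest dose per target voxel
            return np.sum((worst - d_rx) ** 2)

        res = minimize(objective, x0=np.full(n_spots, 0.1),
                       bounds=[(0.0, None)] * n_spots, method="L-BFGS-B")
        print(objective(res.x))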

  19. Facility optimization to improve activation rate distributions during IVNAA

    PubMed Central

    Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem

    2013-01-01

    Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator material for attaining uniform activation probability, with a CV value of about 10%, and to change the role of the collimator so as to increase the activation rate within the body. Such uniformity was obtained with a thick paraffin pre-moderator; however, it was not an appropriate choice because it increased the secondary photon flux received by the detectors. Our final calculations indicated that using two paraffin slabs with a thickness of 3 cm as a pre-moderator, in the presence of 2 cm of Bi on the collimator, achieves a satisfactory distribution of the activation rate in the body. PMID:23386375

  20. Evaluation of multi-algorithm optimization approach in multi-objective rainfall-runoff calibration

    NASA Astrophysics Data System (ADS)

    Shafii, M.; de Smedt, F.

    2009-04-01

    Calibration of rainfall-runoff models is one of the issues in which hydrologists have been interested over the past decades. Because of the multi-objective nature of rainfall-runoff calibration, and due to advances in computational power, population-based optimization techniques are becoming increasingly popular in multi-objective calibration schemes. Over recent years, such methods have been shown to be powerful search methods for this purpose, especially when there is a large number of calibration parameters. However, the application of these methods is often criticised on the grounds that it is not possible to develop a single algorithm which is always efficient for different problems. Therefore, more recent efforts have focused on the development of simultaneous multiple optimization algorithms to overcome this drawback. This paper applies one of the most recent population-based multi-algorithm approaches, named AMALGAM, to multi-objective rainfall-runoff calibration of a distributed hydrological model, WetSpa. This algorithm merges the strengths of different optimization algorithms and has thus proven to be more efficient than other methods. In order to evaluate this, comparing the results of this paper with those previously reported using a standard multi-objective evolutionary algorithm will be the next step of this study.

  1. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.

  2. Parameter identification of a distributed runoff model by the optimization software Colleo

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Anai, Hirokazu; Iwami, Yoichi

    2015-04-01

    The introduction of Colleo (Collection of Optimization software) is presented, and case studies of parameter identification for a distributed runoff model are illustrated. In order to calculate river discharge accurately, distributed runoff models have become widely used to take into account the distributions of land use, soil type and rainfall. A feasibility study of parameter optimization should be done in two steps. The first step is to survey which optimization algorithms are suitable for the problems of interest. The second step is to investigate the performance of the specific optimization algorithm. Most previous studies seem to focus on the second step. This study focuses on the first step and complements the previous studies. Many optimization algorithms have been proposed in the computational science field, and a large number of optimization software packages have been developed and released to the public with practically applicable performance and quality. It is well known that using algorithms suited to the problem at hand is important for obtaining good optimization results efficiently. To make such algorithm comparisons readily achievable, optimization software is needed with which the performance of many algorithms can be compared and which can be connected to various simulation software. Colleo was developed to satisfy these needs. Colleo provides a unified user interface to several optimization packages, such as pyOpt, NLopt, inspyred and R, and helps investigate the suitability of optimization algorithms. 74 different implementations of optimization algorithms, including Nelder-Mead, Particle Swarm Optimization and Genetic Algorithms, are available with Colleo. The effectiveness of Colleo was demonstrated with flood events in the Gokase River basin in Japan (1820 km2). From 2002 to 2010, there were 15 flood events in which the discharge exceeded 1000 m3/s. The discharge was calculated with the PWRI distributed hydrological model developed by ICHARM. The target
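
    The "first step" screening idea can be mimicked without Colleo itself by running several algorithms through one uniform interface on the same objective; the sketch below uses SciPy's optimizers on a Rosenbrock stand-in for the calibration cost:

        import numpy as np
        from scipy.optimize import minimize, differential_evolution

        rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

        # one uniform call signature per candidate algorithm
        candidates = {
            "Nelder-Mead": lambda f: minimize(f, [0, 0], method="Nelder-Mead").fun,
            "Powell":      lambda f: minimize(f, [0, 0], method="Powell").fun,
            "DE":          lambda f: differential_evolution(f, [(-2, 2), (-2, 2)], seed=0).fun,
        }
        for name, run in candidates.items():
            print(f"{name:12s} best objective = {run(rosen):.3e}")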

  3. A Hierarchical Approach to Distributed Parameter Estimation in Rainfall-Runoff Modeling

    NASA Astrophysics Data System (ADS)

    Chu, W.; Gao, X.; Sorooshian, S.

    2007-12-01

    Distributed rainfall-runoff models intend to account for the heterogeneous characteristics of rainfall distributions and runoff generation and thereby improve river forecasts. In this study, a distributed river forecast model is built on a hierarchy of sub-basins connected through a river-routing system. These hydrologic units (sub-basins) possess a no-flux boundary and traditionally can be simulated by conceptual models with a limited number of parameters. However, calibration is needed to make such a model perform well. In the case of distributed modeling, the lack of streamflow observations inside a river system poses a challenge for estimating the model parameters at sub-basin scales. A hierarchical approach is proposed as follows: First, the study basin (a parent basin) is modeled in lumped mode and calibrated to obtain the optimized parameters. In the next step, the parent basin is divided into three sub-basins (children basins). The same model (with tripled parameters) is applied to the sub-basins, driven by the rainfall over each sub-basin, and the model parameters for each sub-basin are calibrated using the parent parameters as their prior values. After obtaining the optimal parameters for the sub-basins, the hydrograph at the outlet of each sub-basin can be generated. Finally, by repeating a similar procedure, each sub-basin can be taken as a parent basin to obtain the parameters for its children sub-basins. Applying this method to one of the DMIP-2 test basins, the Illinois River basin south of Siloam Springs, the results show that (1) the streamflow results are improved by using the distributed rainfall and distributed parameters in comparison with the lumped simulation results, and (2) taking the parent basin's parameters as the priors helps to determine reasonable search ranges when optimizing the parameters of children basins and also reduces the chance of arriving at an optimum which is not physically plausible. Applying this method to

  4. A Simultaneous Approach to Optimizing Treatment Assignments with Mastery Scores. Research Report 89-5.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    An approach to simultaneous optimization of assignments of subjects to treatments followed by an end-of-mastery test is presented using the framework of Bayesian decision theory. Focus is on demonstrating how rules for the simultaneous optimization of sequences of decisions can be found. The main advantages of the simultaneous approach, compared…

  5. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    PubMed Central

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, couple tightly, and bind with simulation platforms closely. As a result, they are difficult to be reused across different simulation platforms and applications. To address the problem, this paper first proposed a reusable component model framework. Based on this framework, then our reusable model development approach is elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that the model developed using our approach has good reusability and it is easy to be used in different simulation platforms and applications. PMID:24729751

  6. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  7. An inverse dynamics approach to trajectory optimization for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.

  8. Successive equimarginal approach for optimal design of a pump and treat system

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.

    2007-08-01

    An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.
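
    The equimarginal mechanics can be shown in miniature: keep total pumping fixed and repeatedly shift a small increment from the well with the lowest marginal productivity to the one with the highest, until the marginals equalize. The concave productivity curves below are made-up stand-ins for the groundwater model response:

        import numpy as np

        a = np.array([4.0, 3.0, 2.5])               # assumed per-well effectiveness
        marginal = lambda q: a / np.sqrt(q + 1e-9)  # d/dq of productivity 2*a*sqrt(q)

        q = np.array([10.0, 10.0, 10.0])            # equal start; total fixed at 30
        step = 0.05
        for _ in range(5000):
            m = marginal(q)
            worst, best = m.argmin(), m.argmax()
            if m[best] - m[worst] < 1e-4:           # equimarginal condition reached
                break
            shift = min(step, q[worst])
            q[worst] -= shift                       # take from the least effective well
            q[best] += shift                        # give to the most effective one
        print(q, marginal(q))                       # q_i ~ a_i**2 scaling, equal marginals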

  9. A non linear multiple regression approach for inferring the probability distribution of hydrological model errors

    NASA Astrophysics Data System (ADS)

    Montanari, A.

    2006-12-01

    This contribution introduces a statistically based approach for uncertainty assessment in hydrological modeling, in an optimality context. Indeed, in several real-world applications, the user needs to select the model that is deemed to be the best possible choice according to a given goodness-of-fit criterion. In this case, it is extremely important to assess the model uncertainty, intended as the range around the model output within which the measured hydrological variable is expected to fall with a given probability. This indication allows the user to quantify the risk associated with a decision that is based on the model response. The technique proposed here infers the probability distribution of the hydrological model error through a non-linear multiple regression approach, depending on an arbitrary number of selected conditioning variables. These may include the current and previous model output as well as internal state variables of the model. The purpose is to indirectly relate the model error to the sources of uncertainty, through the conditioning variables. The method can be applied to any model of arbitrary complexity, including distributed approaches. The probability distribution of the model error is derived in the Gaussian space, through a meta-Gaussian approach. The normal quantile transform is applied in order to make the marginal probability distributions of the model error and the conditioning variables Gaussian. Then the above marginal probability distributions are related through the multivariate Gaussian distribution, whose parameters are estimated via multiple regression. Application of the inverse of the normal quantile transform allows the user to derive the confidence limits of the model output for an assigned significance level. The proposed technique is valid under statistical assumptions that are essentially those conditioning the validity of the multiple regression in the Gaussian space. Statistical tests
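
    The Gaussianizing step is the easiest piece to make concrete. The sketch below applies the normal quantile transform to a skewed synthetic error series via plotting-position ranks, the same device used before the multivariate-Gaussian regression step:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        err = rng.gamma(2.0, 1.0, size=500) - 2.0       # skewed synthetic model errors

        def nqt(x):
            ranks = stats.rankdata(x) / (len(x) + 1.0)  # Weibull plotting positions
            return stats.norm.ppf(ranks)                # map to standard-normal quantiles

        z = nqt(err)                                    # approximately Gaussian marginals
        print(stats.skew(err), stats.skew(z))           # skewness shrinks toward 0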

  10. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2015-04-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
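
    For a single distribution, the transmutation of a UH into a pdf reduces to a nonlinear least-squares fit of the pdf ordinates to the observed UH ordinates. A sketch for the two-parameter gamma case, on synthetic ordinates rather than the Lighvan data:

        import numpy as np
        from scipy import optimize, stats

        t = np.arange(1, 25, dtype=float)                      # hours
        uh_obs = (stats.gamma.pdf(t, a=3.0, scale=2.5)
                  + np.random.default_rng(5).normal(0, 0.002, t.size))

        # residuals between the candidate gamma pdf and the observed UH ordinates
        resid = lambda th: stats.gamma.pdf(t, a=th[0], scale=th[1]) - uh_obs
        th, _ = optimize.leastsq(resid, x0=[2.0, 2.0])
        print(th)                                              # recovers ~[3.0, 2.5]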

  11. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  12. Traffic network and distribution of cars: Maximum-entropy approach

    SciTech Connect

    Das, N.C.; Chakrabarti, C.G.; Mazumder, S.K.

    2000-02-01

    An urban transport system plays a vital role in the modeling of the modern cosmopolis. Great emphasis is needed on the proper development of a transport system, particularly the traffic network and flow, to meet possible future demand. There are various mathematical models of traffic network and flow. The role of Shannon entropy in the modeling of traffic network and flow was stressed by Tomlin and Tomlin (1968) and Tomlin (1969). In the present note the authors study the role of the maximum-entropy principle in the solution of an important problem associated with traffic network flow. The maximum-entropy principle initiated by Jaynes is a powerful optimization technique for determining the distribution of a random system when only partial or incomplete information or data are available about the system. This principle has since been broadened and extended and has found wide applications in different fields of science and technology. In the present note the authors show how Jaynes' maximum-entropy principle, slightly modified, can be successfully applied to determining the flow or distribution of cars in different paths of a traffic network when incomplete information is available about the network.
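
    The sketch below illustrates the kind of maximum-entropy calculation the note describes: cars are distributed over alternative paths so that entropy is maximized subject to a known total and an observed mean travel cost. The path costs, mean cost and bracket for the Lagrange multiplier are invented for the example; the authors' modified principle is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

costs = np.array([10.0, 12.0, 15.0, 20.0])   # travel cost per path (assumed)
mean_cost = 13.0                             # observed average cost (assumed)
total_cars = 1000

def mean_under(beta):
    # Gibbs form p_i proportional to exp(-beta c_i) maximizes entropy
    # subject to a fixed mean cost.
    w = np.exp(-beta * costs)
    p = w / w.sum()
    return p @ costs

# Solve <c>(beta) = mean_cost for the Lagrange multiplier beta.
beta = brentq(lambda b: mean_under(b) - mean_cost, -5.0, 5.0)
p = np.exp(-beta * costs)
p /= p.sum()
print("beta =", round(beta, 4))
print("cars per path:", np.round(total_cars * p).astype(int))
```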

  13. RECONSTRUCTING REDSHIFT DISTRIBUTIONS WITH CROSS-CORRELATIONS: TESTS AND AN OPTIMIZED RECIPE

    SciTech Connect

    Matthews, Daniel J.; Newman, Jeffrey A. E-mail: janewman@pitt.ed

    2010-09-20

    Many of the cosmological tests to be performed by planned dark energy experiments will require extremely well-characterized photometric redshift measurements. Current estimates for cosmic shear are that the true mean redshift of the objects in each photo-z bin must be known to better than 0.002(1 + z), and the width of the bin must be known to ~0.003(1 + z) if errors in cosmological measurements are not to be degraded significantly. A conventional approach is to calibrate these photometric redshifts with large sets of spectroscopic redshifts. However, at the depths probed by Stage III surveys (such as DES), let alone Stage IV (LSST, JDEM, and Euclid), existing large redshift samples have all been highly (25%-60%) incomplete, with a strong dependence of success rate on both redshift and galaxy properties. A powerful alternative approach is to exploit the clustering of galaxies to perform photometric redshift calibrations. Measuring the two-point angular cross-correlation between objects in some photometric redshift bin and objects with known spectroscopic redshift, as a function of the spectroscopic z, allows the true redshift distribution of a photometric sample to be reconstructed in detail, even if it includes objects too faint for spectroscopy or if spectroscopic samples are highly incomplete. We test this technique using mock DEEP2 Galaxy Redshift survey light cones constructed from the Millennium Simulation semi-analytic galaxy catalogs. From this realistic test, which incorporates the effects of galaxy bias evolution and cosmic variance, we find that the true redshift distribution of a photometric sample can, in fact, be determined accurately with cross-correlation techniques. We also compare the empirical error in the reconstruction of redshift distributions to previous analytic predictions, finding that additional components must be included in error budgets to match the simulation results. This extra error contribution is small for surveys that

  14. Stochastic Frontier Model Approach for Measuring Stock Market Efficiency with Different Distributions

    PubMed Central

    Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation. PMID:22629352

  15. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    PubMed

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation. PMID:22629352

  16. A modal approach to modeling spatially distributed vibration energy dissipation.

    SciTech Connect

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.

  17. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

    A method is presented for the optimization of the lift distribution across the wing of an aircraft in formation flight. The usual elliptical distribution is no longer optimal for the trailing wing in the formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
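
    A minimal sketch of the Newton-Raphson peak-seeking idea follows: the controller probes the trim setting, estimates the gradient and curvature of measured drag by finite differences, and takes a Newton step toward minimum drag. The quadratic drag bowl, noise level and step sizes are toy assumptions, not the NASA formation-flight simulation.

```python
import numpy as np

np.random.seed(0)

def measured_drag(delta):
    # Toy drag bowl with optimum at delta = 0.35, plus measurement noise.
    return 1.0 + 4.0 * (delta - 0.35) ** 2 + 1e-4 * np.random.randn()

delta, h = 0.0, 0.02          # initial trim setting and probe step
for _ in range(30):
    d_minus = measured_drag(delta - h)
    d_zero = measured_drag(delta)
    d_plus = measured_drag(delta + h)
    grad = (d_plus - d_minus) / (2 * h)            # central difference
    curv = (d_plus - 2 * d_zero + d_minus) / h**2  # second difference
    if curv > 0:
        delta -= grad / curv                       # Newton step
    else:
        delta -= 0.05 * np.sign(grad)              # cautious fallback
print(f"estimated minimum-drag trim: {delta:.3f}")
```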

  18. Optimal investment and scheduling of distributed energy resources with uncertainty in electric vehicles driving schedules

    SciTech Connect

    Cardoso, Goncalo; Stadler, Michael; Bozchalui, Mohammed C.; Sharma, Ratnesh; Marnay, Chris; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-12-06

    The large scale penetration of electric vehicles (EVs) will introduce technical challenges to the distribution grid, but also carries the potential for vehicle-to-grid services. Namely, if available in large enough numbers, EVs can be used as a distributed energy resource (DER) and their presence can influence optimal DER investment and scheduling decisions in microgrids. In this work, a novel EV fleet aggregator model is introduced in a stochastic formulation of DER-CAM [1], an optimization tool used to address DER investment and scheduling problems. This is used to assess the impact of EV interconnections on optimal DER solutions considering uncertainty in EV driving schedules. Optimization results indicate that EVs can have a significant impact on DER investments, particularly if considering short payback periods. Furthermore, results suggest that uncertainty in driving schedules carries little significance to total energy costs, which is corroborated by results obtained using the stochastic formulation of the problem.

  19. Tomographic approach to resolving the distribution of LISA Galactic binaries

    SciTech Connect

    Mohanty, Soumya D.; Nayak, Rajesh K.

    2006-04-15

    The space based gravitational wave detector LISA (Laser Interferometer Space Antenna) is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10{sup -4} to 10{sup -3} Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude--the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population in frequency and sky position can be reconstructed using a tomographic approach. The method is linear and the reconstruction of a delta-function distribution, corresponding to an isolated binary, yields a point spread function (psf). An arbitrary distribution and its reconstruction are related via smoothing with this psf. Exploratory results are reported demonstrating the recovery of binary sources, in the presence of white Gaussian noise.

  20. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.

  1. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  2. Optimal cloning of qubits given by an arbitrary axisymmetric distribution on the Bloch sphere

    SciTech Connect

    Bartkiewicz, Karol; Miranowicz, Adam

    2010-10-15

    We find an optimal quantum cloning machine, which clones qubits of arbitrary symmetrical distribution around the Bloch vector with the highest fidelity. The process is referred to as phase-independent cloning in contrast to the standard phase-covariant cloning for which an input qubit state is a priori better known. We assume that the information about the input state is encoded in an arbitrary axisymmetric distribution (phase function) on the Bloch sphere of the cloned qubits. We find analytical expressions describing the optimal cloning transformation and fidelity of the clones. As an illustration, we analyze cloning of qubit states described by the von Mises-Fisher and Brosseau distributions. Moreover, we show that the optimal phase-independent cloning machine can be implemented by modifying the mirror phase-covariant cloning machine for which quantum circuits are known.

  3. The adaptive approach for storage assignment by mining data of warehouse management system for distribution centres

    NASA Astrophysics Data System (ADS)

    Ming-Huang Chiang, David; Lin, Chia-Ping; Chen, Mu-Chen

    2011-05-01

    Among distribution centre operations, order picking has been reported to be the most labour-intensive activity. Sophisticated storage assignment policies adopted to reduce the travel distance of order picking have been explored in the literature. Unfortunately, previous research has been devoted to locating entire products from scratch. Instead, this study intends to propose an adaptive approach, a Data Mining-based Storage Assignment approach (DMSA), to find the optimal storage assignment for newly delivered products that need to be put away when there is vacant shelf space in a distribution centre. In the DMSA, a new association index (AIX) is developed to evaluate the fitness between the put-away products and the unassigned storage locations by applying association rule mining. With AIX, the storage location assignment problem (SLAP) can be formulated and solved as a binary integer program. To evaluate the performance of the DMSA, a real-world order database of a distribution centre is obtained and used to compare the results from the DMSA with a random assignment approach. It turns out that the DMSA outperforms random assignment as the number of put-away products and the proportion of put-away products with high turnover rates increase.

  4. RePAMO: Recursive Perturbation Approach for Multimodal Optimization

    NASA Astrophysics Data System (ADS)

    Dasgupta, Bhaskar; Divya, Kotha; Mehta, Vivek Kumar; Deb, Kalyanmoy

    2013-09-01

    This article presents a strategy for exploiting classical algorithms in multimodal optimization problems: any suitable local optimization method, in the present case Nelder and Mead's simplex search, is applied recursively over the search domain. The proposed method follows a systematic way of restarting the algorithm. The idea of climbing the hills and sliding down to the neighbouring valleys is utilized. The implementation of the algorithm finds local minima as well as maxima. The concept of perturbing the minimum/maximum in several directions and restarting the algorithm for maxima/minima is introduced. The method performs favourably in comparison to other global optimization methods. The results of this algorithm, named RePAMO, are compared with the GA-clearing and ASMAGO techniques in terms of the number of function evaluations. Based on the results, it has been found that RePAMO outperforms GA-clearing and ASMAGO by a significant margin.
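
    The following sketch captures the restart idea, though not the authors' exact perturbation rules: Nelder-Mead is run from a seed, the located minimum is perturbed along several directions, and the search restarts from each perturbed point, collecting distinct optima. The Rastrigin-like test function, perturbation radius and stopping rule are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                      # multimodal test function (Rastrigin-like)
    x = np.asarray(x)
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x)) + 10 * x.size

minima, seeds = [], [np.array([3.3, -2.8])]
step = 0.6                     # perturbation radius between restarts
while seeds and len(minima) < 8:
    x0 = seeds.pop()
    res = minimize(f, x0, method="Nelder-Mead")
    xm = np.round(res.x, 2)
    if not any(np.allclose(xm, m, atol=0.1) for m in minima):
        minima.append(xm)
        # Perturb the new minimum along +/- coordinate directions
        # and queue each perturbed point as a restart seed.
        for d in np.vstack([np.eye(2), -np.eye(2)]):
            seeds.append(res.x + step * d)

print("distinct local minima found:")
for m in minima:
    print(m, "f =", round(float(f(m)), 3))
```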

  5. An inverse method for computation of structural stiffness distributions of aeroelastically optimized wings

    NASA Technical Reports Server (NTRS)

    Schuster, David M.

    1993-01-01

    An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.

  6. An inverse method for computation of structural stiffness distributions of aeroelastically optimized wings

    NASA Astrophysics Data System (ADS)

    Schuster, David M.

    1993-04-01

    An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.

  7. IPIP: A new approach to inverse planning for HDR brachytherapy by directly optimizing dosimetric indices

    SciTech Connect

    Siauw, Timmy; Cunha, Adam; Atamtuerk, Alper; Hsu, I-Chow; Pouliot, Jean; Goldberg, Ken

    2011-07-15

    Purpose: Many planning methods for high dose rate (HDR) brachytherapy require an iterative approach. A set of computational parameters are hypothesized that will give a dose plan that meets dosimetric criteria. A dose plan is computed using these parameters, and if any dosimetric criteria are not met, the process is iterated until a suitable dose plan is found. In this way, the dose distribution is controlled by abstract parameters. The purpose of this study is to develop a new approach for HDR brachytherapy by directly optimizing the dose distribution based on dosimetric criteria. Methods: The authors developed inverse planning by integer program (IPIP), an optimization model for computing HDR brachytherapy dose plans, and a fast heuristic for it. They used their heuristic to compute dose plans for 20 anonymized prostate cancer image data sets from patients previously treated at their clinic. Dosimetry was evaluated and compared to dosimetric criteria. Results: Dose plans computed from IPIP satisfied all given dosimetric criteria for the target and healthy tissue after a single iteration. The average target coverage was 95%. The average computation time for IPIP was 30.1 s on an Intel(R) Core(TM)2 Duo CPU at 1.67 GHz with 3 GiB RAM. Conclusions: IPIP is an HDR brachytherapy planning system that directly incorporates dosimetric criteria. The authors have demonstrated that IPIP has clinically acceptable performance for the prostate cases and dosimetric criteria used in this study, in both dosimetry and runtime. Further study is required to determine if IPIP performs well for a more general group of patients and dosimetric criteria, including other cancer sites such as GYN.

  8. Metamodeling and the Critic-based approach to multi-level optimization.

    PubMed

    Werbos, Ludmilla; Kozma, Robert; Silva-Lugo, Rodrigo; Pazienza, Giovanni E; Werbos, Paul J

    2012-08-01

    Large-scale networks with hundreds of thousands of variables and constraints are becoming more and more common in logistics, communications, and distribution domains. Traditionally, the utility functions defined on such networks are optimized using some variation of Linear Programming, such as Mixed Integer Programming (MIP). Despite enormous progress both in hardware (multiprocessor systems and specialized processors) and software (Gurobi) we are reaching the limits of what these tools can handle in real time. Modern logistic problems, for example, call for expanding the problem both vertically (from one day up to several days) and horizontally (combining separate solution stages into an integrated model). The complexity of such integrated models calls for alternative methods of solution, such as Approximate Dynamic Programming (ADP), which provide a further increase in the performance necessary for the daily operation. In this paper, we present the theoretical basis and related experiments for solving the multistage decision problems based on the results obtained for shorter periods, as building blocks for the models and the solution, via Critic-Model-Action cycles, where various types of neural networks are combined with traditional MIP models in a unified optimization system. In this system architecture, fast and simple feed-forward networks are trained to reasonably initialize more complicated recurrent networks, which serve as approximators of the value function (Critic). The combination of interrelated neural networks and optimization modules allows for multiple queries for the same system, providing flexibility and optimizing performance for large-scale real-life problems. A MATLAB implementation of our solution procedure for a realistic set of data and constraints shows promising results, compared to the iterative MIP approach. PMID:22386785

  9. Economic consideration of optimal vaccination distribution for epidemic spreads in complex networks

    NASA Astrophysics Data System (ADS)

    Wang, Bing; Suzuki, Hideyuki; Aihara, Kazuyuki

    2013-02-01

    The main concern of epidemiological modeling is to implement an economical vaccine allocation to the population. Here, we investigate the optimal vaccination allocation in complex networks. We find that the optimal vaccine coverage depends not only on the relative cost of treatment to vaccination but also on the vaccine efficacy. Especially with a high cost of treatment, nodes with high degree are prioritized to vaccinate. These results may help us understand factors that may impact the optimal vaccination distribution in the control of epidemic dynamics.
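
    A hedged sketch of degree-prioritized allocation follows: on a synthetic scale-free network, it compares a crude bond-percolation proxy for outbreak size when doses go to the highest-degree nodes versus random nodes. The network model, coverage budget and transmission probability are illustrative, and the percolation proxy is a simplification of the paper's epidemic dynamics.

```python
import random
import networkx as nx

def outbreak_size(G, vaccinated, p=0.3):
    """Crude bond-percolation proxy: nodes reachable from a random seed
    through edges kept with probability p, skipping vaccinated nodes."""
    H = G.subgraph(n for n in G if n not in vaccinated).copy()
    H.remove_edges_from([e for e in list(H.edges) if random.random() > p])
    seed = random.choice(list(H.nodes))
    return len(nx.node_connected_component(H, seed))

random.seed(1)
G = nx.barabasi_albert_graph(2000, 3)
budget = 200                                 # doses available (assumed)

by_degree = set(sorted(G, key=G.degree, reverse=True)[:budget])
at_random = set(random.sample(list(G.nodes), budget))

print("mean outbreak, degree-targeted:",
      sum(outbreak_size(G, by_degree) for _ in range(20)) / 20)
print("mean outbreak, random:",
      sum(outbreak_size(G, at_random) for _ in range(20)) / 20)
```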

  10. Optimal probabilistic cloning of two linearly independent states with arbitrary probability distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Wen; Rui, Pinshu; Zhang, Ziyun; Liao, Yanlin

    2016-02-01

    We investigate the probabilistic quantum cloning (PQC) of two states with arbitrary probability distribution. The optimal success probabilities are worked out for 1 → 2 PQC of the two states. The results show that the upper bound on the success probabilities of PQC in Qiu (J Phys A 35:6931-6937, 2002) cannot be reached in general. With the optimal success probabilities, we design simple forms of 1 → 2 PQC and work out the unitary transformation needed in the PQC processes. The optimal success probabilities for 1 → 2 PQC are also generalized to the M → N PQC case.

  11. Using R for Global Optimization of a Fully-distributed Hydrologic Model at Continental Scale

    NASA Astrophysics Data System (ADS)

    Zambrano-Bigiarini, M.; Zajac, Z.; Salamon, P.

    2013-12-01

    Nowadays hydrologic model simulations are widely used to better understand hydrologic processes and to predict extreme events such as floods and droughts. In particular, the spatially distributed LISFLOOD model is currently used for flood forecasting at Pan-European scale, within the European Flood Awareness System (EFAS). Several model parameters can not be directly measured, and they need to be estimated through calibration, in order to constrain simulated discharges to their observed counterparts. In this work we describe how the free software 'R' has been used as a single environment to pre-process hydro-meteorological data, to carry out global optimization, and to post-process calibration results in Europe. Historical daily discharge records were pre-processed for 4062 stream gauges, with different amount and distribution of data in each one of them. The hydroTSM, raster and sp R packages were used to select ca. 700 stations with an adequate spatio-temporal coverage. Selected stations span a wide range of hydro-climatic characteristics, from arid and ET-dominated watersheds in the Iberian Peninsula to snow-dominated watersheds in Scandinavia. Nine parameters were selected to be calibrated based on previous expert knowledge. Customized R scripts were used to extract observed time series for each catchment and to prepare the input files required to fully set up the calibration thereof. The hydroPSO package was then used to carry out a single-objective global optimization on each selected catchment, by using the Standard Particle Swarm 2011 (SPSO-2011) algorithm. Among the many goodness-of-fit measures available in the hydroGOF package, the Nash-Sutcliffe efficiency was used to drive the optimization. User-defined functions were developed for reading model outputs and passing them to the calibration engine. The long computational time required to finish the calibration at continental scale was partially alleviated by using 4 multi-core machines (with both GNU

  12. Rapid Optimal SPH Particle Distributions in Spherical Geometries for Creating Astrophysical Initial Conditions

    NASA Astrophysics Data System (ADS)

    Raskin, Cody; Owen, J. Michael

    2016-04-01

    Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that when paired can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core-mantle boundaries.
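
    The sketch below shows one well-known way (a golden-angle Fibonacci spiral, which may differ from the authors' two algorithms) to spread points evenly over spherical shells, with shell radii chosen so that each shell carries roughly equal mass under an assumed radial density profile.

```python
import numpy as np

def fibonacci_sphere(n):
    """n nearly uniform unit vectors from the golden-angle spiral."""
    k = np.arange(n) + 0.5
    phi = np.pi * (3.0 - np.sqrt(5.0)) * k   # golden-angle increments
    cos_t = 1.0 - 2.0 * k / n                # uniform in cos(theta)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.c_[sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t]

def sphere_particles(n_shells, per_shell, rho=lambda r: np.ones_like(r), R=1.0):
    # Pick shell radii so each shell encloses equal mass under density rho.
    r = np.linspace(1e-3, R, 2048)
    m = np.cumsum(rho(r) * r**2)             # proportional to enclosed mass
    targets = (np.arange(n_shells) + 0.5) / n_shells * m[-1]
    radii = np.interp(targets, m, r)
    return np.vstack([ri * fibonacci_sphere(per_shell) for ri in radii])

# Illustrative density profile; not from the paper.
pts = sphere_particles(20, 400, rho=lambda r: 1.0 / (1.0 + (r / 0.3) ** 2))
print(pts.shape)  # (8000, 3)
```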

  13. A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation

    SciTech Connect

    Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin

    2016-01-01

    This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.
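
    As a hedged, much-reduced illustration of the MILP formulation style, the sketch below dispatches two DGs and grid imports for a single hour with unit-commitment binaries, using the PuLP modeling library. All data are invented, and the network constraints, reactive power and performance indices of the actual D-OPF model are omitted.

```python
from pulp import (LpProblem, LpVariable, LpMinimize, lpSum, LpBinary,
                  PULP_CBC_CMD)

load = 950.0                                   # kW to serve this hour (assumed)
gens = {"dg1": dict(cmin=100, cmax=400, cost=0.11, fixed=8.0),
        "dg2": dict(cmin=150, cmax=600, cost=0.09, fixed=12.0)}
grid_price = 0.15                              # $/kWh for imports (assumed)

prob = LpProblem("d_opf_hour", LpMinimize)
p = {g: LpVariable(f"p_{g}", 0, d["cmax"]) for g, d in gens.items()}
u = {g: LpVariable(f"u_{g}", cat=LpBinary) for g in gens}
imp = LpVariable("grid_import", 0)

# Objective: fuel plus fixed commitment cost plus purchase cost.
prob += (lpSum(gens[g]["cost"] * p[g] + gens[g]["fixed"] * u[g] for g in gens)
         + grid_price * imp)
prob += lpSum(p.values()) + imp == load        # power balance
for g, d in gens.items():                      # min/max output if committed
    prob += p[g] >= d["cmin"] * u[g]
    prob += p[g] <= d["cmax"] * u[g]

prob.solve(PULP_CBC_CMD(msg=False))
print({v.name: v.value() for v in prob.variables()})
```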

  14. A simulation-optimisation approach for designing water distribution networks under multiple objectives

    NASA Astrophysics Data System (ADS)

    Grundmann, Jens; Pham Van, Tinh; Müller, Ruben; Schütze, Niels

    2014-05-01

    Especially in arid and semi-arid regions, water distribution networks are of major importance for integrated water resources management, in order to convey water over long distances from sources to consumers. However, designing a network optimally is still a challenge which requires an appropriate determination of: (1) pipe/pump/tank characteristics - decision variables; (2) cost/network reliability - objective functions; and (3) a given set of constraints. The objective functions are contradicting: minimising costs decreases network reliability, resulting in a higher risk of network failures. For solving this multi-objective design problem, a simulation-optimisation approach is developed. The approach couples a hydraulic network model (Epanet) with an optimiser, namely the covariance matrix adaptation evolution strategy (CMAES). The simulation-optimisation model is applied to internationally published benchmark cases for single and multi-objective optimisation and the simultaneous optimisation of the above-mentioned decision variables as well as the network layout. Results are encouraging: the proposed model achieves similar or better results, meaning smaller costs and higher network reliability. Subsequently, the new model is applied to the optimal design and operation of a water distribution system to supply the coastal arid region of Al-Batinah (North of Oman) with water for agricultural production.
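
    A minimal sketch of the simulation-optimisation coupling follows, with the `cma` package supplying CMA-ES and a toy head-loss surrogate standing in for the Epanet hydraulic model. The pipe catalogue, costs, lengths and penalty weight are invented; a real study would evaluate each candidate design with the hydraulic solver.

```python
import numpy as np
import cma

DIAMETERS = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.40])       # m (catalogue)
COST_PER_M = np.array([40.0, 55.0, 75.0, 100.0, 130.0, 190.0])   # $/m (assumed)
LENGTHS = np.array([800.0, 600.0, 400.0, 900.0, 500.0])          # m, five pipes

def head_deficit(d):
    # Toy surrogate: friction loss grows like d**-4.87 (Hazen-Williams
    # exponent); the design must keep total loss below 30 m of head.
    loss = np.sum(1e-4 * LENGTHS * d ** -4.87)
    return max(0.0, loss - 30.0)

def objective(x):
    # Map continuous search variables to the nearest catalogue size.
    idx = np.clip(np.round(x), 0, len(DIAMETERS) - 1).astype(int)
    d = DIAMETERS[idx]
    cost = np.sum(COST_PER_M[idx] * LENGTHS)
    return cost + 1e5 * head_deficit(d)      # penalized design cost

es = cma.CMAEvolutionStrategy(2.5 * np.ones(5), 1.5, {"verbose": -9})
es.optimize(objective, iterations=100)
best = np.clip(np.round(es.result.xbest), 0, len(DIAMETERS) - 1).astype(int)
print("selected diameters (m):", DIAMETERS[best])
```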

  15. Innovative Meta-Heuristic Approach Application for Parameter Estimation of Probability Distribution Model

    NASA Astrophysics Data System (ADS)

    Lee, T. S.; Yoon, S.; Jeong, C.

    2012-12-01

    The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and parameter estimation techniques. A number of PDMs have been developed to describe the probability distribution of hydrological variables. For each of the developed PDMs, estimated parameters are provided based on alternative estimation techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear functions of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than the other methods. However, the ML technique is more laborious than the other methods because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. In the meantime, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear and stochastic, dynamic, nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu searches, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search, so that intricate derivative information is unnecessary. Therefore, meta-heuristic approaches have been shown to be a useful strategy for solving optimization problems in hydrology. A number of studies have focused on using meta-heuristic approaches for the parameter estimation of PDMs. Applied meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as harmony search (HS). The performance of the HS is compared to the
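
    The sketch below shows a bare-bones harmony search applied to the kind of task described: maximum-likelihood estimation of a two-parameter gamma PDM without derivatives. The HS control parameters (HMCR, PAR, bandwidth), memory size and synthetic sample are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
data = gamma.rvs(a=2.5, scale=1.8, size=300, random_state=rng)  # synthetic

def neg_loglik(theta):
    a, s = theta
    if a <= 0 or s <= 0:
        return np.inf
    return -np.sum(gamma.logpdf(data, a=a, scale=s))

lo, hi = np.array([0.1, 0.1]), np.array([10.0, 10.0])
hm = rng.uniform(lo, hi, size=(20, 2))               # harmony memory
scores = np.apply_along_axis(neg_loglik, 1, hm)
HMCR, PAR, BW = 0.9, 0.3, 0.05                       # HS control parameters

for _ in range(5000):
    new = np.empty(2)
    for j in range(2):
        if rng.random() < HMCR:                      # pick from memory...
            new[j] = hm[rng.integers(20), j]
            if rng.random() < PAR:                   # ...maybe pitch-adjust
                new[j] += BW * (hi[j] - lo[j]) * rng.uniform(-1, 1)
        else:                                        # ...or draw at random
            new[j] = rng.uniform(lo[j], hi[j])
    worst = np.argmax(scores)
    if (val := neg_loglik(new)) < scores[worst]:     # replace worst harmony
        hm[worst], scores[worst] = new, val

print("HS estimate (shape, scale):", np.round(hm[np.argmin(scores)], 3))
```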

  16. Distributed and/or grid-oriented approach to BTeV data analysis

    SciTech Connect

    Joel N. Butler

    2002-12-23

    The BTeV collaboration will record approximately 2 petabytes of raw data per year. It plans to analyze this data using the distributed resources of the collaboration as well as dedicated resources, primarily residing in the very large BTeV trigger farm, and resources accessible through the developing world-wide data grid. The data analysis system is being designed from the very start with this approach in mind. In particular, we plan a fully disk-based data storage system with multiple copies of the data distributed across the collaboration to provide redundancy and to optimize access. We will also position ourselves to take maximum advantage of shared systems, as well as dedicated systems, at our collaborating institutions.

  17. A majorization-minimization approach to design of power distribution networks

    SciTech Connect

    Johnson, Jason K; Chertkov, Michael

    2010-01-01

    We consider optimization approaches to design cost-effective electrical networks for power distribution. This involves a trade-off between minimizing the power loss due to resistive heating of the lines and minimizing the construction cost (modeled by a linear cost in the number of lines plus a linear cost on the conductance of each line). We begin with a convex optimization method based on the paper 'Minimizing Effective Resistance of a Graph' [Ghosh, Boyd & Saberi]. However, this does not address the Alternating Current (AC) realm and the combinatorial aspect of adding/removing lines of the network. Hence, we consider a non-convex continuation method that imposes a concave cost of the conductance of each line thereby favoring sparser solutions. By varying a parameter of this penalty we extrapolate from the convex problem (with non-sparse solutions) to the combinatorial problem (with sparse solutions). This is used as a heuristic to find good solutions (local minima) of the non-convex problem. To perform the necessary non-convex optimization steps, we use the majorization-minimization algorithm that performs a sequence of convex optimizations obtained by iteratively linearizing the concave part of the objective. A number of examples are presented which suggest that the overall method is a good heuristic for network design. We also consider how to obtain sparse networks that are still robust against failures of lines and/or generators.
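
    A compact sketch of the majorization-minimization step follows: a concave log-type penalty on line conductances is linearized at the current iterate, so each MM step solves a weighted convex problem (here via cvxpy). The quadratic "loss" term is a generic convex stand-in for the paper's effective-resistance objective, and all data are random placeholders.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
m, n = 30, 12                       # candidate lines, load scenarios (assumed)
A = rng.normal(size=(n, m))         # maps line conductances to served load
b = rng.normal(size=n)              # load targets (illustrative)
eps, lam = 1e-3, 0.5                # penalty smoothing and weight

g = np.ones(m)                      # initial conductances
for _ in range(15):
    w = 1.0 / (eps + g)             # gradient of sum(log(eps + g)) at g
    x = cp.Variable(m, nonneg=True)
    loss = cp.sum_squares(A @ x - b)          # convex "power loss" proxy
    penalty = lam * (w @ x)                   # linearized concave penalty
    cp.Problem(cp.Minimize(loss + penalty)).solve()
    g = x.value                               # MM update

print("lines kept:", int(np.sum(g > 1e-4)), "of", m)
```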

  18. A Simulation of Optimal Foraging: The Nuts and Bolts Approach.

    ERIC Educational Resources Information Center

    Thomson, James D.

    1980-01-01

    Presents a mechanical model for an ecology laboratory that introduces the concept of optimal foraging theory. Describes the physical model which includes a board studded with protruding machine bolts that simulate prey, and blindfolded students who simulate either generalist or specialist predator types. Discusses the theoretical model and data…

  19. A new integrated approach to seismic network optimization

    NASA Astrophysics Data System (ADS)

    Tramelli, A.; De Natale, G.; Troise, C.; Orazi, M.

    2012-04-01

    A seismic network is usually deployed to monitor seismicity, to locate earthquakes and to compute source parameters. The network configuration is crucial because of its implications for the quality of the information that can be obtained; it therefore requires a detailed study in order to maximize the information-to-cost ratio. Fundamental to network optimization is a clear definition of the goals to be reached, the experimental constraints, and the physical relationship between data and model. In order to maximize the performance of a particular design, a quantitative measure of such performance must be defined. Once a quality function has been rigorously defined for each individual goal, an optimization criterion that maximizes it can be defined. In particular, for the seismic location problem such a criterion may be based on the minimization of the statistical location errors. A similar error-minimization criterion can equivalently be used for moment tensor determination, double-couple focal mechanism estimation, scalar source parameter determination, etc. We present here suitable algorithms developed and tested for network optimization. As the optimization parameter, we propose the ratio of the largest to the smallest eigenvalue of the information matrix. This ratio is proportional to the ratio between solution and data errors, i.e. it represents the amplification factor which propagates data errors into the solution. The optimization problem tries to define, among a set of M possible sites, which are the N ones (with N
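
    The sketch below illustrates the proposed optimization parameter: for candidate station subsets, it computes the ratio of the largest to the smallest eigenvalue of the information matrix G^T G and greedily adds the site that keeps this error-amplification factor smallest. The sensitivity matrix is a random stand-in for real travel-time derivatives, and the greedy search is one plausible strategy, not necessarily the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
M, N, n_par = 40, 8, 4                # candidate sites, picks, parameters
G_all = rng.normal(size=(M, n_par))   # per-site sensitivity rows (stand-in)

def eig_ratio(rows):
    w = np.linalg.eigvalsh(G_all[rows].T @ G_all[rows])  # ascending order
    return np.inf if w[0] <= 1e-12 else w[-1] / w[0]

chosen = []
while len(chosen) < N:
    candidates = [i for i in range(M) if i not in chosen]
    # Add the site that best conditions the information matrix so far.
    best = min(candidates, key=lambda i: eig_ratio(chosen + [i]))
    chosen.append(best)

print("selected sites:", sorted(chosen))
print("final eigenvalue ratio:", round(eig_ratio(chosen), 2))
```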

  20. Optimal Combination of Distributed Energy System in an Eco-Campusof Japan

    SciTech Connect

    Yang, Yongwen; Gao, Weijun; Zhou, Nan; Marnay, Chris

    2006-06-14

    In this study, referring to the Distributed Energy Resources Customer Adoption Model (DER-CAM) developed by the Ernest Orlando Lawrence Berkeley National Laboratory (LBNL), the E-GAMS program is developed together with a database of energy tariffs, DER (Distributed Energy Resources) technology cost and performance characteristics, and building energy consumption in Japan. E-GAMS is a tool designed to find the optimal combination of installed equipment and an idealized operating schedule to minimize a site's energy bills. In this research, using E-GAMS, we present a tool to select the optimal combination of distributed energy systems for an ecological campus, the Kitakyushu Science and Research Park (KSRP). We discuss the effects of the combination of distributed energy technologies on energy saving, economic efficiency and environmental benefits.

  1. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  2. Optimal Distribution of Biofuel Feedstocks within Marginal Land in the USA

    NASA Astrophysics Data System (ADS)

    Jaiswal, D.

    2015-12-01

    The United States may have 43 to 123 Mha of marginal land available to grow second-generation biofuel feedstocks. A physiological and biophysical model (BioCro) was run using 30 years of climate data (NARR) and SSURGO soil data for the conterminous United States to simulate growth of miscanthus, switchgrass, sugarcane, and short-rotation coppice. Overlay analyses of the regional maps of predicted yields and marginal land suggest a maximum availability of 0.33, 1.15, 1.13, and 1.89 Pg year-1 of biomass from sugarcane, willow, switchgrass, and miscanthus, respectively. Optimal distribution of these four biofuel feedstocks within the marginal land of the USA can provide up to 2 Pg year-1 of biomass for the production of second-generation biofuel without competing for cropland used for food production. This approach can potentially meet a significant fraction of liquid fuel demand in the USA and reduce greenhouse gas emissions while ensuring that current cropland under food production is not used for growing biofuel feedstocks.

  3. Distributed and parallel approach for handle and perform huge datasets

    NASA Astrophysics Data System (ADS)

    Konopko, Joanna

    2015-12-01

    Big Data refers to dynamic, large and disparate volumes of data coming from many different sources (tools, machines, sensors, mobile devices), often uncorrelated with each other. It requires new, innovative and scalable technology to collect, host and analytically process the vast amount of data. A proper architecture for systems that process such huge datasets is needed. In this paper, distributed and parallel system architectures are compared using the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). The paper also analyzes the problem of extracting valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed, which could be used to solve the analyzed problem of storing and processing Big Data.

  4. Distributed parameter approach to the dynamics of complex biological processes

    SciTech Connect

    Lee, T.T.; Wang, F.Y.; Newell, R.B.

    1999-10-01

    Modeling and simulation of a complex biological process for the removal of nutrients (nitrogen and phosphorus) from municipal wastewater are addressed. The model developed in this work employs a distributed-parameter approach to describe the behavior of components within three different bioreaction zones and the behavior of sludge in the anaerobic zone and soluble phosphate in the aerobic zone in two experiments. Good results are achieved despite the apparent plant-model mismatch, such as uncertainties with the behavior of phosphorus-accumulating organisms. Validation of the proposed secondary-settler model shows that it is superior to two state-of-the-art models in terms of the sum of the square relative errors.

  5. A collaborative optimization approach to improve the design and deployment of satellite constellations

    NASA Astrophysics Data System (ADS)

    Budianto, Irene Arianti

    This thesis introduces a systematic, multivariable, multidisciplinary method for the conceptual design of satellite constellations. The system consisted of three separate, but coupled, contributing analyses. The configuration and orbit design module performed coverage analysis for different orbit parameters and constellation patterns. The spacecraft design tool estimated mass, power, and costs for the payload and spacecraft bus that satisfy the resolution and sensitivity requirements. The launch manifest model found the minimum launch cost strategy, to deploy the given constellation system to the specified orbit. Collaborative Optimization (CO) has been previously implemented successfully as a design architecture for large-scale, highly-constrained multidisciplinary optimization problems related to aircraft and space vehicle studies. It is a distributed design architecture that allows its subsystems flexibility with regards to computing platforms and programming environment and, as its name suggests, many opportunities for collaboration. It is thus well suited to a team-oriented design environment, such as found in the constellation design process, and was implemented in this research. Two problems were solved using the CO method related to the design and deployment of a space-based infrared system to provide early missile warning. Successful convergence of these problems proved the feasibility of the CO architecture for solving the satellite constellation design problem. Verification of the results was accomplished by also implementing a large All-at-Once (AAO) optimization. This study further demonstrated several advantages of this approach over the standard practice used for designing satellite constellation systems. The CO method explored the design space more systematically and more extensively, improved subsystem flexibility, and its formulation was more scalable to growth in problem complexity. However, the intensive computational requirement of this method

  6. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    1988-01-01

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  7. Optimization of Comminution Circuit Throughput and Product Size Distribution by Simulation and Control

    SciTech Connect

    S.K. Kawatra; T.C. Eisele; T. Weldum; D. Larsen; R. Mariani; J. Pletka

    2005-07-01

    The goal of this project was to improve energy efficiency of industrial crushing and grinding operations (comminution). Mathematical models of the comminution process were used to study methods for optimizing the product size distribution, so that the amount of excessively fine material produced could be minimized. The goal was to save energy by reducing the amount of material that was ground below the target size, while simultaneously reducing the quantity of materials wasted as 'slimes' that were too fine to be useful. Extensive plant sampling and mathematical modeling of the grinding circuits was carried out to determine how to correct this problem. The approaches taken included (1) Modeling of the circuit to determine process bottlenecks that restrict flowrates in one area while forcing other parts of the circuit to overgrind the material; (2) Modeling of hydrocyclones to determine the mechanisms responsible for retaining fine, high-density particles in the circuit until they are overground, and improving existing models to accurately account for this behavior; and (3) Evaluation of the potential of advanced technologies to improve comminution efficiency and produce sharper product size distributions with less overgrinding. The mathematical models were used to simulate novel circuits for minimizing overgrinding and increasing throughput, and it is estimated that a single plant grinding 15 million tons of ore per year saves up to 82.5 million kWhr/year, or 8.6 x 10^11 BTU/year. Implementation of this technology in the midwestern iron ore industry, which grinds an estimated 150 million tons of ore annually to produce over 50 million tons of iron ore concentrate, would save an estimated 1 x 10^13 BTU/year.

  8. Optimization of a point-focusing, distributed receiver solar thermal electric system

    NASA Technical Reports Server (NTRS)

    Pons, R. L.

    1979-01-01

    This paper presents an approach to optimization of a solar concept which employs solar-to-electric power conversion at the focus of parabolic dish concentrators. The optimization procedure is presented through a series of trade studies, which include the results of optical/thermal analyses and individual subsystem trades. Alternate closed-cycle and open-cycle Brayton engines and organic Rankine engines are considered to show the influence of the optimization process, and various storage techniques are evaluated, including batteries, flywheels, and hybrid-engine operation.

  9. Assessment of grid-friendly collective optimization framework for distributed energy resources

    SciTech Connect

    Pensini, Alessandro; Robinson, Matthew; Heine, Nicholas; Stadler, Michael; Mammoli, Andrea

    2015-11-04

    Distributed energy resources have the potential to provide services to facilities and buildings at lower cost and environmental impact in comparison to traditional electric-grid-only services. The reduced cost could result from a combination of higher system efficiency and exploitation of electricity tariff structures. Traditionally, electricity tariffs are designed to encourage the use of ‘off peak’ power and discourage the use of ‘on peak’ power, although recent developments in renewable energy resources and distributed generation systems (such as their increasing levels of penetration and their increased controllability) are resulting in pressures to adopt tariffs of increasing complexity. Independently of the tariff structure, more or less sophisticated methods exist that allow distributed energy resources to take advantage of such tariffs, ranging from simple pre-planned schedules to Software-as-a-Service schedule optimization tools. However, as the penetration of distributed energy resources increases, there is an increasing chance of a ‘tragedy of the commons’ mechanism taking place, where taking advantage of tariffs for local benefit can ultimately result in degradation of service and higher energy costs for all. In this work, we use a scheduling optimization tool, in combination with a power distribution system simulator, to investigate techniques that could mitigate the deleterious effect of ‘selfish’ optimization, so that the high-penetration use of distributed energy resources to reduce operating costs remains advantageous while the quality of service and overall energy cost to the community is not affected.

  10. Estimation of design sea ice thickness with maximum entropy distribution by particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng

    2016-06-01

    The maximum entropy distribution, which encompasses various recognized theoretical distributions, provides a better curve for estimating the design thickness of sea ice. The method of moments and the empirical curve fitting method are commonly used parameter estimation methods for the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviation introduced by the simplifications made in other methods. We conducted a case study fitting the hindcasted thickness of sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods for the maximum entropy distribution. All methods implemented in this study pass the K-S test at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. As a result, we recommend using the particle swarm optimization method for the maximum entropy distribution for offshore structures mainly influenced by sea ice in winter, but using the empirical curve fitting method to reduce cost in the design of temporary and economic buildings.
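
    As a hedged illustration of PSO-based parameter estimation, the sketch below lets a small swarm minimize the squared deviation between an empirical CDF and a candidate distribution. A Weibull CDF stands in for the maximum entropy distribution, whose exact form the abstract does not reproduce; the swarm coefficients and synthetic sample are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
data = np.sort(rng.weibull(2.2, 500) * 1.6)       # synthetic thickness (m)
ecdf = (np.arange(len(data)) + 0.5) / len(data)   # empirical CDF plotting positions

def sse(theta):
    k, lam = theta
    if k <= 0 or lam <= 0:
        return np.inf
    model = 1 - np.exp(-(data / lam) ** k)        # Weibull CDF stand-in
    return np.sum((model - ecdf) ** 2)

# Standard PSO with inertia plus cognitive and social pulls.
n, iters = 30, 200
x = rng.uniform(0.1, 5.0, (n, 2))
v = np.zeros((n, 2))
pbest = x.copy()
pval = np.array([sse(p) for p in x])
gbest = pbest[np.argmin(pval)]
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([sse(p) for p in x])
    better = vals < pval
    pbest[better], pval[better] = x[better], vals[better]
    gbest = pbest[np.argmin(pval)]

print("PSO estimate (shape, scale):", np.round(gbest, 3))
```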

  11. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    PubMed

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs. PMID:26316105

  12. A Hierarchical Adaptive Approach to Optimal Experimental Design

    PubMed Central

    Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.

    2014-01-01

    Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697

  13. A new approach to optimization-based defibrillation.

    PubMed

    Muzdeka, S; Barbieri, E

    2001-01-01

    The purpose of this paper is to develop a new model for optimal cardiac defibrillation based on the simultaneous minimization of energy consumption and defibrillation time. In order to generate optimal defibrillation waveforms that accomplish this objective, a parameter rho has been introduced into the performance measure to weigh the relative importance of time and energy. All results of this theoretical study have been obtained for the proposed model under the assumption that cardiac tissue can be represented by a simple parallel resistor-capacitor circuit. It is well known from modern control theory that the selection of a numerical value for the weight factor is a matter of the designer's subjective judgment. However, it has been shown that defining a cost function can help in selecting a value for rho. Some results of the mathematical development of the algorithm and of computer simulations are included in the paper. PMID:11347410
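
    The rho-weighted performance measure can be illustrated with a toy scalarization. The candidate waveforms and their (time, energy) pairs below are invented, and the weighted-sum cost is only an assumed form of the time-energy trade-off the abstract describes, not the paper's actual functional.

    ```python
    # Hypothetical candidate waveforms, each achieving defibrillation with a
    # given duration T (ms) and delivered energy E (J); values are illustrative.
    candidates = {"A": (4.0, 18.0), "B": (6.0, 12.0), "C": (9.0, 9.5)}

    def cost(T, E, rho):
        """Weighted performance measure: rho weighs time against energy."""
        return rho * T + (1.0 - rho) * E

    for rho in (0.2, 0.5, 0.8):
        best = min(candidates, key=lambda k: cost(*candidates[k], rho))
        print(f"rho={rho}: preferred waveform {best}")
    ```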

  14. A free boundary approach to shape optimization problems

    PubMed Central

    Bucur, D.; Velichkov, B.

    2015-01-01

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has seen a series of interesting developments in recent years, essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint. PMID:26261362

  15. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-01-01

    The thesis is divided into two parts. In the first part, a global optimization method is developed for the optimization of interface and surface structures. Two prototype systems are chosen for study: Si[001] symmetric tilt grain boundaries and the Ag/Au induced Si(111) surface. It is found that the Genetic Algorithm is very efficient at finding lowest-energy structures in both cases. Not only can existing experimental structures be reproduced, but many new structures can also be predicted using the Genetic Algorithm. It is thus shown that the Genetic Algorithm is an extremely powerful tool for predicting material structures. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the thesis reveal the physical insight behind the phenomena and reproduce the experimental results well.
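
    A minimal sketch of the genetic-algorithm loop used for such structure searches (truncation selection, one-point crossover, Gaussian mutation). The flat-vector encoding and the toy energy function below stand in for a real structural representation and a first-principles or empirical-potential energy evaluation; all rates and sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def energy(x):
        """Toy multi-minimum 'structure energy'; stands in for a real calculator."""
        return np.sum(x**2 - 2.0 * np.cos(3.0 * x))

    def evolve(pop_size=40, genes=6, gens=150, p_mut=0.2):
        pop = rng.uniform(-2, 2, (pop_size, genes))          # candidate structures
        for _ in range(gens):
            fit = np.array([energy(ind) for ind in pop])
            order = np.argsort(fit)
            parents = pop[order[: pop_size // 2]]            # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                i, j = rng.integers(0, len(parents), 2)
                cut = rng.integers(1, genes)                 # one-point crossover
                child = np.concatenate([parents[i][:cut], parents[j][cut:]])
                if rng.random() < p_mut:                     # Gaussian mutation
                    child += rng.normal(0, 0.1, genes)
                children.append(child)
            pop = np.vstack([parents, children])
        fit = np.array([energy(ind) for ind in pop])
        return pop[np.argmin(fit)], fit.min()

    best, e = evolve()
    print("lowest-energy candidate:", np.round(best, 3), "energy:", round(e, 3))
    ```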

  16. Electron energy distribution in a dusty plasma: analytical approach.

    PubMed

    Denysenko, I B; Kersten, H; Azarenkov, N A

    2015-09-01

    Analytical expressions describing the electron energy distribution function (EEDF) in a dusty plasma are obtained from the homogeneous Boltzmann equation for electrons. The expressions are derived neglecting electron-electron collisions, as well as the transformation of high-energy electrons into low-energy electrons in inelastic electron-atom collisions. At large electron energies, the quasiclassical approach for calculation of the EEDF is applied. For moderate energies, we account for inelastic electron-atom collisions in the dust-free case and for both inelastic electron-atom and electron-dust collisions in the dusty plasma case. Using these analytical expressions and the balance equation for dust charging, the electron energy distribution function, the effective electron temperature, the dust charge, and the dust surface potential are obtained for different dust radii and densities, as well as for different electron densities and radio-frequency (rf) field amplitudes and frequencies. The dusty plasma parameters are compared with those calculated numerically by a finite-difference method taking into account electron-electron collisions and the transformation of high-energy electrons in inelastic electron-neutral collisions. It is shown that the analytical expressions can be used for calculation of the EEDF and dusty plasma parameters under typical experimental conditions, in particular in the positive column of a direct-current glow discharge and in the case of an rf plasma maintained by an electric field with frequency f = 13.56 MHz. PMID:26465570

  17. Current Approaches for Improving Intratumoral Accumulation and Distribution of Nanomedicines

    PubMed Central

    Durymanov, Mikhail O; Rosenkranz, Andrey A; Sobolev, Alexander S

    2015-01-01

    The ability of nanoparticles and macromolecules to passively accumulate in solid tumors and enhance therapeutic effects in comparison with conventional anticancer agents has resulted in the development of various multifunctional nanomedicines including liposomes, polymeric micelles, and magnetic nanoparticles. Further modifications of these nanoparticles have improved their characteristics in terms of tumor selectivity, circulation time in blood, enhanced uptake by cancer cells, and sensitivity to tumor microenvironment. These “smart” systems have enabled highly effective delivery of drugs, genes, shRNA, radioisotopes, and other therapeutic molecules. However, the resulting therapeutically relevant local concentrations of anticancer agents are often insufficient to cause tumor regression and complete elimination. Poor perfusion of inner regions of solid tumors as well as vascular barrier, high interstitial fluid pressure, and dense intercellular matrix are the main intratumoral barriers that impair drug delivery and impede uniform distribution of nanomedicines throughout a tumor. Here we review existing methods and approaches for improving tumoral uptake and distribution of nano-scaled therapeutic particles and macromolecules (i.e. nanomedicines). Briefly, these strategies include tuning physicochemical characteristics of nanomedicines, modulating physiological state of tumors with physical impacts or physiologically active agents, and active delivery of nanomedicines using cellular hitchhiking. PMID:26155316

  18. A Distributed Trajectory-Oriented Approach to Managing Traffic Complexity

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Wing, David J.; Vivona, Robert; Garcia-Chico, Jose-Luis

    2007-01-01

    In order to handle the expected increase in air traffic volume, the next-generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. While its architecture becomes more distributed, the goal of the Air Traffic Management (ATM) system remains to achieve objectives such as maintaining safety and efficiency. It is, therefore, critical to design appropriate control elements to ensure that aircraft and ground-based actions achieve these objectives without unduly restricting user-preferred trajectories. This paper presents a trajectory-oriented approach containing two such elements. One is a trajectory flexibility preservation function, by which aircraft plan their trajectories to preserve flexibility to accommodate unforeseen events. The other is a trajectory constraint minimization function, by which ground-based agents, in collaboration with air-based agents, impose just-enough restrictions on trajectories to achieve ATM objectives such as separation assurance and flow management. The underlying hypothesis is that preserving the trajectory flexibility of each individual aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by minimizing constraints without jeopardizing the intended ATM objectives. The paper presents conceptually how the two functions operate in a distributed control architecture that includes self-separation. The paper illustrates the concept through hypothetical scenarios involving conflict resolution and flow management. It presents a functional analysis of the interaction and information flow between the functions. It also presents an analytical framework for defining metrics and developing methods to preserve trajectory flexibility and

  19. Equilibrium Distribution of Subgrid Convection: A Grand Canonic Ensemble Approach

    NASA Astrophysics Data System (ADS)

    Bao, J.; Penland, M. C.

    2011-12-01

    Moist convection on scales smaller than the horizontal grid spacing that is commonly used in operational numerical weather and climate prediction models is turbulent, and therefore its interaction with the environment is stochastic. Traditionally in operational weather and climate prediction models, the effect of unresolved subgrid convection on the prediction of resolved scales is parameterized deterministically as an ensemble mean, and the stochastic fluctuations about this ensemble mean are ignored. It has recently been advocated that the stochastic fluctuations should be properly accounted for in the subgrid parameterization in order to address a persistent issue in operational ensemble prediction: the spread of ensemble members tends to be underestimated. In this study, the probability of having n mutually independent convective plumes with a total cloud-base mass flux M in a given grid box is derived based on the concept of the grand canonical ensemble, which is well known in classical statistical mechanics. The probability distribution functions of the cloud-base mass flux and the number of subgrid convective plumes depend on the average of each of the two quantities. For a large number of such grid boxes in a given area, the concept can be extended to a homogeneous stochastic situation. In this situation, the probability of finding exactly k subgrid convective plumes in one of the grid boxes is given by the binomial distribution, which converges to the Poisson distribution as the number of boxes approaches infinity. The latter result provides an alternative way to derive and interpret the previous theoretical results obtained by Craig and Cohen (2006, JAS, Vol. 63, pp. 1996-2015).
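
    The binomial-to-Poisson convergence can be checked numerically. In the sketch below, N boxes share N times the mean number of plumes, each plume landing in a given box with probability 1/N, so the per-box count is Binomial(M, 1/N); the mean plume count is an illustrative choice.

    ```python
    from math import comb, exp, factorial

    mean_plumes = 3.0  # average number of plumes per grid box (illustrative)

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    def poisson_pmf(k, lam):
        return lam**k * exp(-lam) / factorial(k)

    # With N boxes sharing M = N*mean_plumes plumes, each plume falls in a
    # given box with probability p = 1/N; the per-box count is Binomial(M, p).
    for N in (10, 100, 10000):
        M, p = int(mean_plumes * N), 1.0 / N
        row = [f"{binom_pmf(k, M, p):.4f}" for k in range(6)]
        print(f"N={N:>6}: binomial pmf, k=0..5 ->", row)
    print("Poisson :", [f"{poisson_pmf(k, mean_plumes):.4f}" for k in range(6)])
    ```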

  20. Aircraft optimization by a system approach: Achievements and trends

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.

  1. Lifetime optimization of wireless sensor network by a better nodes positioning and energy distribution

    NASA Astrophysics Data System (ADS)

    Lebreton, J. M.; Murad, N. M.

    2014-10-01

    The purpose of this paper is to propose a method of energy distribution for a Wireless Sensor Network (WSN). Nodes are randomly positioned and the sink is placed at the centre of the surface. Simulations show that relay nodes around the sink are solicited too heavily to convey data, which substantially reduces their lifetime. Several algorithmic solutions are therefore presented to optimize the energy distribution over the nodes, compared with the classical uniform energy distribution. Their performance is discussed in terms of the failure rate of data transmission and network lifetime. Moreover, the total energy distributed over all nodes before deployment is held constant while several non-uniform energy distributions are created. Finally, simulations show that all of these energy distributions greatly improve the WSN lifetime and decrease the failure rate of data transmission.

  2. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, accuracy, efficiency and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, are reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative was incorporated for one of the tedious phases of developing such a methodology, namely, the automatic differentiation of the computer code for the flow analysis in order to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for the shape optimization of a practical configuration with the higher-fidelity simulations (TLNS and dense-grid-based simulations) required substantial computational resources. Therefore, the final improvement reported herein responded to this point by including an alternating-direction-implicit (ADI) system solver as an alternative to the preconditioned biconjugate gradient (PbCG) and other direct solvers.

  3. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    NASA Astrophysics Data System (ADS)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to handle the size of the repository in the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromised solution among the non-dominated optimal solutions of multiobjective optimization problem. In order to see the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.

  4. RF cavity design exploiting a new derivative-free trust region optimization approach

    PubMed Central

    Hassan, Abdel-Karim S.O.; Abdel-Malek, Hany L.; Mohamed, Ahmed S.A.; Abuelfadl, Tamer M.; Elqenawy, Ahmed E.

    2014-01-01

    In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized, instead of the underlying objective function, over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using a few data points, of order O(n), where n is the number of design variables. The proposed approach adopts weighted least-squares fitting for updating the surrogate model instead of the interpolation commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparative analysis against a recent optimization technique. PMID:26644929
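
    A minimal one-dimensional sketch of the core loop: a quadratic surrogate fitted by weighted least squares (weights emphasizing points near the current center), surrogate minimization on the trust region, and a standard acceptance/radius-update test. It uses a closed-form 1-D surrogate minimizer rather than the truncated conjugate gradients of the article, and every constant and the objective are illustrative.

    ```python
    import numpy as np

    def f(x):
        """Black-box objective (no derivatives used); illustrative."""
        return np.sin(3 * x) + 0.5 * x**2

    def fit_quadratic(xs, fs, center, radius):
        """Weighted least-squares fit c0 + c1*x + c2*x^2, favoring near points."""
        w = np.exp(-np.abs(xs - center) / max(radius, 1e-12))
        A = np.vander(xs, 3, increasing=True) * w[:, None]
        c, *_ = np.linalg.lstsq(A, fs * w, rcond=None)
        return c

    def trust_region_min(c, center, radius):
        """Minimize the quadratic surrogate on [center-radius, center+radius]."""
        lo, hi = center - radius, center + radius
        cands = [lo, hi]
        if abs(c[2]) > 1e-12:
            xv = -c[1] / (2 * c[2])          # stationary point of the surrogate
            if lo <= xv <= hi:
                cands.append(xv)
        vals = [c[0] + c[1] * x + c[2] * x**2 for x in cands]
        return cands[int(np.argmin(vals))]

    x, radius = 2.0, 1.0
    xs = np.array([x - radius, x, x + radius])   # O(n) initial sample
    fs = np.array([f(v) for v in xs])

    for _ in range(25):
        c = fit_quadratic(xs, fs, x, radius)
        x_new = trust_region_min(c, x, radius)
        pred = (c[0] + c[1]*x + c[2]*x**2) - (c[0] + c[1]*x_new + c[2]*x_new**2)
        actual = f(x) - f(x_new)
        ratio = actual / pred if pred > 1e-12 else 0.0
        xs, fs = np.append(xs, x_new), np.append(fs, f(x_new))
        if ratio > 0.1:                  # accept step; expand on strong agreement
            x = x_new
            radius *= 2.0 if ratio > 0.75 else 1.0
        else:
            radius *= 0.5                # reject step; shrink the trust region

    print(f"approx. minimizer x = {x:.4f}, f(x) = {f(x):.4f}")
    ```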

  5. RF cavity design exploiting a new derivative-free trust region optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Abdel-Malek, Hany L; Mohamed, Ahmed S A; Abuelfadl, Tamer M; Elqenawy, Ahmed E

    2015-11-01

    In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized, instead of the underlying objective function, over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using a few data points, of order O(n), where n is the number of design variables. The proposed approach adopts weighted least-squares fitting for updating the surrogate model instead of the interpolation commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparative analysis against a recent optimization technique. PMID:26644929

  6. Microcanonical thermostatistics analysis without histograms: Cumulative distribution and Bayesian approaches

    NASA Astrophysics Data System (ADS)

    Alves, Nelson A.; Morero, Lucas D.; Rizzi, Leandro G.

    2015-06-01

    Microcanonical thermostatistics analysis has become an important tool to reveal essential aspects of phase transitions in complex systems. An efficient way to estimate the microcanonical inverse temperature β(E) and the microcanonical entropy S(E) is achieved with the statistical temperature weighted histogram analysis method (ST-WHAM). The strength of this method lies in its flexibility, as it can be used to analyse data produced by algorithms with generalised sampling weights. However, for any sampling weight, ST-WHAM requires the calculation of derivatives of energy histograms H(E), which leads to non-trivial and tedious binning tasks for models with a continuous energy spectrum, such as those for biomolecular and colloidal systems. Here, we discuss two alternative methods that avoid the need for such energy binning and yield continuous estimates for H(E) for evaluating β(E) with ST-WHAM: (i) a series expansion to estimate probability densities from the empirical cumulative distribution function (CDF), and (ii) a Bayesian approach to model this CDF. A comparison with a simple linear regression method is also carried out. The performance of these approaches is evaluated on coarse-grained protein models for folding and peptide aggregation.

  7. A simple distributed sediment delivery approach for rural catchments

    NASA Astrophysics Data System (ADS)

    Reid, Lucas; Scherer, Ulrike

    2014-05-01

    The transfer of sediments from source areas to surface waters is a complex process. In process-based erosion models, sediment input is thus quantified by representing all relevant sub-processes, such as detachment, transport and deposition of sediment particles along the flow path to the river. A successful application of these models requires, however, a large amount of spatially highly resolved data on physical catchment characteristics, which is only available for a few well-examined small catchments. For lack of appropriate models, the empirical Universal Soil Loss Equation (USLE) is widely applied to quantify sediment production in meso- to large-scale basins. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). In these models, the SDR is related to data on morphological characteristics of the catchment, such as average local relief, drainage density, proportion of depressions or soil texture. Some approaches include the relative distance between sediment source areas and the river channels. However, several studies have shown that spatially lumped parameters describing the morphological characteristics are of only limited value in representing the factors that influence sediment transport at the catchment scale. Sediment delivery is controlled by the location of the sediment source areas in the catchment and the morphology along the flow path to the surface water bodies. This complex interaction of spatially varied physiographic characteristics cannot be adequately represented by lumped morphological parameters. The objective of this study is to develop a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in a catchment. We selected a small catchment located in an intensively cultivated loess region in Southwest Germany as the study area for the development of the SDR approach. The
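
    One common flow-path formulation in the literature (e.g., SEDD-type models) computes a cell-level SDR as exp(-β · travel time along the flow path to the channel); the sketch below assumes that form, which is not necessarily the authors' approach, and all cell data and the β coefficient are invented for illustration.

    ```python
    import numpy as np

    # Each source cell: USLE gross erosion (t/yr) and its flow path to the river,
    # given as per-segment travel times (h). All values are illustrative.
    cells = [
        {"erosion": 12.0, "segment_times": [0.2, 0.5, 0.3]},
        {"erosion":  4.5, "segment_times": [0.1]},
        {"erosion":  8.0, "segment_times": [0.4, 0.9, 0.6, 0.2]},
    ]
    beta = 0.8  # catchment-specific routing coefficient (calibrated in practice)

    total_erosion, delivered = 0.0, 0.0
    for cell in cells:
        travel_time = sum(cell["segment_times"])   # time along the flow path
        sdr_cell = np.exp(-beta * travel_time)     # SEDD-type cell-level SDR
        total_erosion += cell["erosion"]
        delivered += cell["erosion"] * sdr_cell

    print(f"catchment SDR = {delivered / total_erosion:.3f}")
    ```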

  8. A work stealing based approach for enabling scalable optimal sequence homology detection

    SciTech Connect

    Daily, Jeffrey A.; Kalyanaraman, Anantharaman; Krishnamoorthy, Sriram; Vishnu, Abhinav

    2015-05-01

    Sequence homology detection is central to a number of bioinformatics applications, including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or "homologous") on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. Here, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for 2.56M sequences on up to 8K cores show parallel efficiencies of ~75-100%, a time-to-solution of 33 s, and a rate of ~2.0M alignments per second.
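
    A toy single-threaded simulation of the work-stealing idea: each worker pops alignment tasks from its own deque and steals from a random non-empty victim when idle. This is only an illustration of the load-balancing principle, not the paper's MPI implementation; the task counts and partitioning are invented.

    ```python
    import random
    from collections import deque

    random.seed(0)
    NUM_WORKERS = 4

    # Unevenly pre-partitioned alignment tasks (task = fake sequence-pair id).
    queues = [deque(range(i * 40, i * 40 + (40 if i == 0 else 5)))
              for i in range(NUM_WORKERS)]
    completed = [0] * NUM_WORKERS

    def step(worker):
        """One scheduling step: run local work, else steal from a random victim."""
        if queues[worker]:
            queues[worker].pop()          # LIFO from own deque (cache-friendly)
            completed[worker] += 1
            return True
        victims = [v for v in range(NUM_WORKERS) if v != worker and queues[v]]
        if victims:
            victim = random.choice(victims)
            queues[worker].append(queues[victim].popleft())  # steal oldest task
            return True
        return False                      # nothing left anywhere: terminate

    while True:
        progress = [step(w) for w in range(NUM_WORKERS)]   # one cycle of workers
        if not any(progress):
            break

    print("tasks completed per worker:", completed)
    ```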

  9. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from terrain properties directly, so there was no need to calibrate model parameters. Unfortunately, however, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that can be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, the improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show
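
    The two PSO improvements mentioned (a linearly decreasing inertia weight and arccosine-scheduled acceleration coefficients) can be sketched as simple iteration schedules. The exact constants and the precise arccosine form used by the authors are not stated in the abstract, so the values and shapes below are assumptions.

    ```python
    import numpy as np

    W_MAX, W_MIN = 0.9, 0.4        # typical inertia-weight bounds (assumed)
    C_START, C_END = 2.5, 0.5      # acceleration-coefficient bounds (assumed)

    def inertia_weight(k, k_max):
        """Linearly decreasing inertia weight over the iterations."""
        return W_MAX - (W_MAX - W_MIN) * k / k_max

    def acceleration_coefficients(k, k_max):
        """Arccosine-shaped schedules: cognitive c1 decays, social c2 grows."""
        frac = np.arccos(2.0 * k / k_max - 1.0) / np.pi   # runs 1 -> 0
        c1 = C_END + (C_START - C_END) * frac
        c2 = C_START + (C_END - C_START) * frac
        return c1, c2

    for k in (0, 50, 100):
        w = inertia_weight(k, 100)
        c1, c2 = acceleration_coefficients(k, 100)
        print(f"iter {k:>3}: w={w:.2f}, c1={c1:.2f}, c2={c2:.2f}")
    ```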

  10. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from terrain properties directly, so there was no need to calibrate model parameters; unfortunately, however, the uncertainties associated with this parameter derivation are very high, which has limited their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting by using the PSO algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of the PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, the improved Particle Swarm Optimization (PSO) algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show that the improved PSO algorithm could be

  11. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply the available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios in which each approach is suitable are identified. Finally, an iterative approach combining the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  12. An optimized encoding method for secure key distribution by swapping quantum entanglement and its extension

    NASA Astrophysics Data System (ADS)

    Gao, Gan

    2015-08-01

    Song [Song D 2004 Phys. Rev. A 69 034301] first proposed two key distribution schemes with the symmetry feature. We find that, in these schemes, the private channels through which Alice and Bob publicly announce the initial Bell state or the measurement result are not needed for discovering keys, and that Song's encoding methods are not optimal. Here, an optimized encoding method is given that improves the efficiencies of Song's schemes by a factor of 7/3. Interestingly, this optimized encoding method can be extended to the key distribution scheme composed of generalized Bell states. Project supported by the National Natural Science Foundation of China (Grant No. 11205115), the Program for Academic Leader Reserve Candidates in Tongling University (Grant No. 2014tlxyxs30), and the 2014-year Program for Excellent Youth Talents in University of Anhui Province, China.

  13. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero-dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
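
    A small Monte Carlo sketch of the design question: how the spacing of added doses affects the spread of the recovered initial dose, taken here as intercept/slope of a linear fit to the additive-dose response. The signal model, noise level, and both candidate designs are invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_dose, slope, sigma = 1.0, 2.0, 0.5   # Gy, signal/Gy, noise SD (assumed)

    def estimated_dose_sd(added_doses, n_rep=5000):
        """SD of the additive-dose estimate over noisy simulated replicates."""
        x = np.asarray(added_doses, float)
        est = np.empty(n_rep)
        for i in range(n_rep):
            y = slope * (true_dose + x) + rng.normal(0, sigma, x.size)
            b, a = np.polyfit(x, y, 1)        # fit y = a + b*x
            est[i] = a / b                    # dose estimate = intercept/slope
        return est.std()

    uniform = [0, 2, 4, 6, 8, 10]             # evenly spaced added doses
    clustered = [0, 0, 0, 0, 0, 10]           # points near zero + one high dose
    print("SD, uniform design  :", round(estimated_dose_sd(uniform), 4))
    print("SD, clustered design:", round(estimated_dose_sd(clustered), 4))
    ```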

  14. A second law approach to exhaust system optimization

    SciTech Connect

    Primus, R.J.

    1984-01-01

    A model has been constructed that applies second law analysis to a Fanno formulation of the exhaust process of a turbocharged diesel engine. The model has been used to quantify available energy destruction at the valve and in the manifold and to study the influence of various system parameters on the relative magnitude of these exhaust system losses. The model formulation and its application to the optimization of the exhaust manifold diameter is discussed. Data are then presented which address the influence of the manifold friction, turbine efficiency, turbine power extraction, valve flow area, compression ratio, speed, load and air-fuel ratio on the available energy destruction in the exhaust system.

  15. LARES: an artificial chemical process approach for optimization.

    PubMed

    Irizarry, Roberto

    2004-01-01

    This article introduces a new global optimization procedure called LARES. LARES is based on the concept of an artificial chemical process (ACP), a new paradigm which is described in this article. The algorithm's performance was studied on a test bed with a wide spectrum of problems, including random multi-modal problem generators, random LSAT problem generators with various degrees of epistasis, and a set of real-valued functions with different degrees of multi-modality, discontinuity and flatness. In all cases studied, LARES performed very well in terms of robustness and efficiency. PMID:15768524

  16. A Residuals Approach to Filtering, Smoothing and Identification for Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1985-01-01

    An approach for state estimation and identification of spatially distributed parameters embedded in static distributed (elliptic) system models is advanced. The method of maximum likelihood is used to find parameter values that maximize a likelihood functional for the system model or, equivalently, that minimize the negative logarithm of this functional. To find the minimum, a Newton-Raphson search is conducted that, from an initial estimate, generates a convergent sequence of parameter estimates. For simplicity, a Gauss-Markov approach is used to approximate the Hessian in terms of products of first derivatives. The gradient and approximate Hessian are computed by first arranging the negative log-likelihood functional into a form based on the square root factorization of the predicted covariance of the measurement process. The resulting data processing approach, referred to here as predicted-data-covariance square-root filtering, makes the gradient and approximate Hessian calculations very simple. A closely related set of state estimates is also produced by the maximum likelihood method: smoothed estimates that are optimal in a conditional-mean sense and filtered estimates that emerge from the predicted-data-covariance square-root filter.

  17. A robust hybrid fuzzy-simulated annealing-intelligent water drops approach for tuning a distribution static compensator nonlinear controller in a distribution system

    NASA Astrophysics Data System (ADS)

    Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza

    2016-06-01

    This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and the intelligent water drops (IWD) algorithm to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that the tuning method based on the fuzzy-SA-IWD combination can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct-current voltage of the capacitor within the DSTATCOM. Furthermore, in the event of a fault, the controller tuned by the fuzzy-SA-IWD method showed better performance than both the conventional controller and the PFL controller without fuzzy-SA-IWD tuning, with regard to both fault duration and clearing times.

  18. High direct drive illumination uniformity achieved by multi-parameter optimization approach: a case study of Shenguang III laser facility.

    PubMed

    Tian, Chao; Chen, Jia; Zhang, Bo; Shan, Lianqiang; Zhou, Weimin; Liu, Dongxiao; Bi, Bi; Zhang, Feng; Wang, Weiwu; Zhang, Baohan; Gu, Yuqiu

    2015-05-01

    The uniformity of the compression driver is of fundamental importance for inertial confinement fusion (ICF). In this paper, the illumination uniformity on a spherical capsule during the initial imprinting phase, directly driven by laser beams, is considered. We aim to explore methods to achieve high direct-drive illumination uniformity on laser facilities designed for indirect-drive ICF. Many parameters affect the irradiation uniformity, such as the Polar Direct Drive displacement, the capsule radius, the laser spot size and the intensity distribution within a laser beam. A novel approach to reducing the root mean square illumination non-uniformity, based on a multi-parameter optimization approach (particle swarm optimization), is proposed, which enables us to obtain a set of optimal parameters over a large parameter space. Finally, this method is applied to improve the direct-drive illumination uniformity provided by the Shenguang III laser facility, and the illumination non-uniformity is reduced from 5.62% to 0.23% for perfectly balanced beams. Moreover, beam errors (power imbalance and pointing error) are taken into account to provide a more practical solution, and the results show that this multi-parameter optimization approach is effective. PMID:25969321

  19. Analytical approach to cross-layer protocol optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes, subject to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originate at the nodes at the periphery of the WSN, are successively transported to other nodes for aggregation based on information-theoretic measures of correlation, and are ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in

  1. A stochastic optimization approach for integrated urban water resource planning.

    PubMed

    Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X

    2013-01-01

    Urban water is facing the challenges of both scarcity and water quality deterioration. Consideration of nonconventional water resources has increasingly become essential in urban water resource planning over the last decade. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructure. Planning of urban water resources thus needs not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under gross future uncertainties for better reliability. This paper develops an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounts for not only water quantity but also water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated by different planning alternatives and how their reliabilities would change. PMID:23552255

  2. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Craig, J. I.; Fulton, R. E.; Mistree, F.

    1996-01-01

    The successful development of a capable and economically viable high speed civil transport (HSCT) is perhaps one of the most challenging tasks in aeronautics for the next two decades. At its heart, it is fundamentally the design of a complex engineered system that has significant societal, environmental and political impacts. As such, it presents a formidable challenge to all areas of aeronautics, and it is therefore a particularly appropriate subject for research in multidisciplinary design and optimization (MDO). In fact, it is starkly clear that without the availability of powerful and versatile multidisciplinary design, analysis and optimization methods, the design, construction and operation of an HSCT simply cannot be achieved. The present research project is focused on the development and evaluation of MDO methods that, while broader and more general in scope, are particularly appropriate to the HSCT design problem. The research aims not only to develop the basic methods but also to apply them to relevant examples from the NASA HSCT R&D effort. The research involves a three-year effort aimed first at the description of the HSCT MDO problem, next at the formulation of the problem, and finally at a solution to a significant portion of the problem.

  3. Discovery and Optimization of Materials Using Evolutionary Approaches.

    PubMed

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate the identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problem of large materials spaces and the types of evolutionary algorithms employed to identify or optimize materials; describes how materials can be represented mathematically as genomes, along with the fitness landscapes and mutation operators commonly employed in materials evolution; and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials-based industries. PMID:27171499

  4. Characterizing and Optimizing Photocathode Laser Distributions for Ultra-low Emittance Electron Beam Operations

    SciTech Connect

    Zhou, F.; Bohler, D.; Ding, Y.; Gilevich, S.; Huang, Z.; Loos, H.; Ratner, D.; Vetter, S.

    2015-12-07

    The photocathode RF gun is widely used to generate high-brightness electron beams for many different applications. We found that the drive laser distributions in such RF guns play important roles in minimizing the electron beam emittance. Characterizing the laser distributions with measurable parameters, and optimizing the beam emittance versus the laser distribution parameters in both the spatial and temporal directions, are highly desirable for high-brightness electron beam operation. In this paper, we report systematic measurements and simulations of the emittance dependence on the measurable parameters representing the spatial and temporal laser distributions at the photocathode RF gun systems of the Linac Coherent Light Source. The tolerable parameter ranges for the photocathode drive laser distributions in both directions are presented for ultra-low emittance beam operations.

  5. Decentralized commanding and supervision: the distributed projective virtual reality approach

    NASA Astrophysics Data System (ADS)

    Rossmann, Juergen

    2000-10-01

    As part of the cooperation between the University of Southern California (USC) and the Institute of Robotics Research (IRF) of the University of Dortmund, experiments regarding the control of robots over long distances by means of virtual reality based man-machine interfaces have been successfully carried out. In this paper, the newly developed virtual reality system that is being used for the control of a multi-robot system for space applications, as well as for the control and supervision of industrial robotics and automation applications, is presented. The general aim of the development was to provide the framework for Projective Virtual Reality, which allows users to project their actions in the virtual world into the real world, primarily by means of robots but also by other means of automation. The framework is based on a new approach which builds on the task deduction capabilities of a newly developed virtual reality system and a task planning component. The advantage of this new approach is that robots which work at great distances from the control station can be controlled as easily and intuitively as robots that work right next to the control station. Robot control technology now provides the user in the virtual world with a prolonged arm into the physical environment, thus paving the way for a new quality of user-friendly man-machine interfaces for automation applications. Lately, this work has been enhanced by a new structure that allows the virtual reality application to be distributed over multiple computers. With this new step, it is now possible for multiple users to work together in the same virtual room, although they may physically be thousands of miles apart. They only need an Internet or ISDN connection to share this new experience. Last but not least, the distribution technology has been further developed to not just allow users to cooperate but to be able to run the virtual world on many synchronized PCs so that a panorama projection or even a cave can

  6. An Optimization-Based Approach to Injector Element Design

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)

    2000-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, ΔP_f, oxidizer pressure drop, ΔP_o, combustor length, L_comb, and full-cone swirl angle, θ (for the swirl element), or impingement half-angle, α (for the impinging element), at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q_w, injector heat flux, Q_inj, relative combustor weight, W_rel, and relative injector cost, C_rel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent-variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after the addition of each variable, and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues
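
    The composite response surface rests on desirability functions. Below is a generic Derringer-Suich-style sketch in which per-response desirabilities are combined by a geometric mean; the response values, limits, and weights are invented, and this illustrates the general technique rather than method i itself.

    ```python
    import numpy as np

    def desirability_smaller_is_better(y, y_best, y_worst, weight=1.0):
        """One-sided Derringer-Suich desirability for a response to minimize."""
        d = (y_worst - y) / (y_worst - y_best)
        return np.clip(d, 0.0, 1.0) ** weight

    # Invented responses for one candidate injector design (units illustrative).
    responses = {
        "Q_w":   (30.0, 10.0, 60.0, 1.0),  # wall heat flux: value, best, worst, wt
        "Q_inj": (12.0,  5.0, 25.0, 1.0),
        "W_rel": ( 1.1,  0.9,  1.5, 2.0),  # weight emphasized over the others
        "C_rel": ( 1.0,  0.8,  1.4, 1.0),
    }

    ds = [desirability_smaller_is_better(y, best, worst, wt)
          for y, best, worst, wt in responses.values()]
    composite = float(np.prod(ds) ** (1.0 / len(ds)))   # geometric mean
    print(f"composite desirability D = {composite:.3f}")
    ```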

  7. Piece-wise mixed integer programming for optimal sizing of surge control devices in water distribution systems

    NASA Astrophysics Data System (ADS)

    Skulovich, Olya; Bent, Russell; Judi, David; Perelman, Lina Sela; Ostfeld, Avi

    2015-06-01

    Despite their potentially catastrophic impact, transients are often ignored or treated ad hoc when designing water distribution systems. To address this problem, we introduce a new piece-wise function fitting model that is integrated with mixed integer programming to optimally place and size surge tanks for transient control. The key features of the algorithm are a model-driven discretization of the search space, a linear approximation of the nonsmooth system response surface to transients, and a mixed integer linear programming optimization. Results indicate that high-quality solutions can be obtained within a reasonable number of function evaluations and demonstrate the computational effectiveness of the approach through two case studies. The work investigates one type of surge control device (the closed surge tank) for a specified set of transient events. The performance of the algorithm relies on the assumption that there exists a smooth relationship between the objective function and tank size. Results indicate the potential of the approach for optimal surge control design in water systems.
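
    A minimal sketch of the binary-selection flavor of such a formulation: pick one candidate tank size to minimize cost subject to a linearized surge constraint. It uses the PuLP modeling library; all coefficients are invented, and this is a toy stand-in for the authors' full piecewise model.

    ```python
    import pulp

    # Candidate surge-tank volumes (m^3), costs ($), and linearized peak surge
    # head (m) each achieves at the critical node; all values illustrative.
    sizes = [5.0, 10.0, 20.0, 40.0]
    costs = [20e3, 32e3, 55e3, 90e3]
    surge = [48.0, 41.0, 35.0, 31.0]
    H_MAX = 38.0                     # allowable surge head (m), assumed

    prob = pulp.LpProblem("surge_tank_sizing", pulp.LpMinimize)
    y = [pulp.LpVariable(f"pick_{i}", cat="Binary") for i in range(len(sizes))]

    prob += pulp.lpSum(c * yi for c, yi in zip(costs, y))           # min cost
    prob += pulp.lpSum(y) == 1                                       # one size
    prob += pulp.lpSum(s * yi for s, yi in zip(surge, y)) <= H_MAX   # surge cap

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = next(i for i in range(len(sizes)) if pulp.value(y[i]) > 0.5)
    print(f"chosen tank: {sizes[chosen]} m^3 at ${costs[chosen]:,.0f}")
    ```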

  8. Particle Swarm Optimization Approach in a Consignment Inventory System

    NASA Astrophysics Data System (ADS)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment Inventory (CI) is a kind of inventory which is in the possession of the customer but is still owned by the supplier. This creates a condition of shared risk, whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse which stores one type of product and supplies it at the same wholesale price to multiple retailers, who then sell the product in independent markets at retail prices. Our main aim is to design a CI system which generates minimum costs for the two parties. A Particle Swarm Optimization (PSO) algorithm is developed to calculate the proper values of the decision variables. Finally, a sensitivity analysis is performed to examine the effects of each parameter on the decision variables, and the PSO's performance is compared with that of a genetic algorithm.

  9. A simple approach to metal hydride alloy optimization

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.; Miller, C.; Landel, R. F.

    1976-01-01

    Certain metals and related alloys can combine with hydrogen in a reversible fashion, so that on being heated they release a portion of the gas. Such materials may find application in the large-scale storage of hydrogen. Metals and alloys which show high dissociation pressures at low temperatures and low endothermic heats of dissociation, and are therefore desirable for hydrogen storage, give values of the Hildebrand-Scott solubility parameter between 100 and 118 hildebrands (Ref. 1), close to that of dissociated hydrogen. All of the less practical storage systems give much lower values of the solubility parameter. By using the Hildebrand solubility parameter as a criterion, and applying the mixing rule to combinations of known alloys and solid solutions, correlations can be made to optimize alloy compositions and maximize hydrogen storage capacity.

  10. Optimizing Geographic Allotment of Photovoltaic Capacity in a Distributed Generation Setting: Preprint

    SciTech Connect

    Urquhart, B.; Sengupta, M.; Keller, J.

    2012-09-01

    A multi-objective optimization was performed to allocate 2 MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability in the form of ramp rates was minimized. This resulted in an optimal solution set which provides a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study which decreased extreme ramp rates by over 50% while decreasing annual energy generation by only 3% relative to the maximum-generation allocation. To quantify the selected allotment mix, a metric was developed, called the ramp ratio, which compares the ramping magnitude when all capacity is allotted to a single location with the aggregate ramping magnitude in a distributed scenario. The ramp ratio quantifies simultaneously how much smoothing a distributed scenario would experience over single-site allotment and how much a single site is being under-utilized in its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
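
    A small sketch of the ramp-ratio idea on synthetic data. The abstract does not give the exact formula, so the sketch assumes ramp ratio = (largest ramp with all capacity at one site) / (largest aggregate ramp in the distributed scenario); the site time series and allotment weights are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(0, 600)  # time steps

    # Synthetic normalized PV output at four sites (clouds partly uncorrelated).
    common = 0.7 + 0.2 * np.sin(t / 90.0)
    sites = np.clip(common + 0.15 * rng.standard_normal((4, t.size)), 0, 1)

    capacity = 2.0                             # MW total
    weights = np.array([0.4, 0.3, 0.2, 0.1])   # distributed allotment (assumed)

    single = capacity * sites[0]               # all 2 MW at one site
    distributed = capacity * weights @ sites   # geographically spread

    def max_ramp(p):
        """Largest magnitude one-step ramp of a power time series (MW/step)."""
        return np.max(np.abs(np.diff(p)))

    ramp_ratio = max_ramp(single) / max_ramp(distributed)
    print(f"ramp ratio = {ramp_ratio:.2f}  (>1 means distribution smooths ramps)")
    ```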

  11. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
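
    As a reference for the optimizer itself (separate from the FEM coupling), a minimal firefly loop with the standard attractiveness move beta0*exp(-gamma*r^2) is sketched below on a generic cost surface; the cost function and all coefficients are placeholders rather than the paper's settings.

        import numpy as np

        def cost(x):                        # placeholder for the FEM-based objective
            return np.sum(x ** 2)

        rng = np.random.default_rng(2)
        n, dim, iters = 20, 2, 100
        beta0, gamma, alpha = 1.0, 1.0, 0.2
        x = rng.uniform(-5, 5, (n, dim))
        f = np.array([cost(p) for p in x])

        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:         # move firefly i toward brighter j
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                        f[i] = cost(x[i])

        print("best:", x[f.argmin()], "cost:", f.min())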

  12. Optimal Diagnostic Approaches for Patients with Suspected Small Bowel Disease

    PubMed Central

    Kim, Jae Hyun; Moon, Won

    2016-01-01

    While the domain of gastrointestinal endoscopy has made great strides over the last several decades, endoscopic assessment of the small bowel continues to be challenging. Recently, with the development of new technology including video capsule endoscopy, device-assisted enteroscopy, and computed tomography/magnetic resonance enterography, a more thorough investigation of the small bowel is possible. In this article, we review the systematic approach for patients with suspected small bowel disease based on these advanced endoscopic and imaging systems. PMID:27334413

  13. Reviving oscillation with optimal spatial period of frequency distribution in coupled oscillators

    NASA Astrophysics Data System (ADS)

    Deng, Tongfa; Liu, Weiqing; Zhu, Yun; Xiao, Jinghua; Kurths, Jürgen

    2016-09-01

    The spatial distribution of a system's frequencies has a significant influence on the critical coupling strengths for amplitude death (AD) in coupled oscillators. We find that the left and right critical coupling strengths for AD have quite different relations to the increasing spatial period m of the frequency distribution in coupled oscillators. The left one has a negative linear relationship with m on log-log axes for small initial frequency mismatches, while it remains constant for large initial frequency mismatches. The right one varies quadratically with the spatial period m of the frequency distribution on log-log axes. There is an optimal spatial period m0 of the frequency distribution at which the coupled system has a minimal critical strength for the transition from the AD regime to reviving oscillation. Moreover, the optimal spatial period m0 of the frequency distribution is found to be related to the system size √N. Numerical examples are explored to reveal the underlying mechanisms of the effects of the spatial frequency distribution on AD.

  14. Towards an Optimal Multi-Method Paleointensity Approach

    NASA Astrophysics Data System (ADS)

    de Groot, L. V.; Biggin, A. J.; Langereis, C. G.; Dekkers, M. J.

    2014-12-01

    Our recently proposed 'multi-method paleointensity approach' consists of at least IZZI-Thellier, MSP-DSC and pseudo-Thellier experiments, complemented with Microwave Thellier experiments for key flows or ages. All results are scrutinized by strict selection criteria to accept only the most reliable paleointensities. This approach yielded reliable estimates of the paleofield for ~70% of all cooling units sampled on Hawaii - an exceptionally high number for a paleointensity study on lavas. Furthermore, the credibility of the obtained results is greatly enhanced if multiple methods mutually agree within their experimental uncertainties. To further assess the success rate of this new approach, we applied it to two collections of (sub-)recent lavas from Tenerife and Gran Canaria (20 cooling units), and Terceira (Azores, 18 cooling units). Although the mineralogy and rock-magnetic properties of many of these flows seemed less favorable for paleointensity techniques compared to the Hawaiian samples, again the multi-method paleointensity approach yielded reliable estimates for 60-70% of all cooling units. One of the methods, the newly calibrated pseudo-Thellier method, proved to be an important element of our new paleointensity approach, yielding reliable estimates for ~50% of the Hawaiian lavas sampled. Its applicability to other volcanic edifices, however, remained questionable. The results from the Canarian and Azorean volcanic edifices provide further constraints on this method's potential. For lavas that are rock-magnetically (i.e. in susceptibility-vs-temperature behavior) akin to Hawaiian lavas, the same selection criterion and calibration formula yielded successful results, testifying to the veracity of this new paleointensity method. Besides methodological advances, our new record for the Canary Islands also has geomagnetic implications. It reveals a dramatic increase in the intensity of the Earth's magnetic field from ~1250 to ~720 BC, reaching a maximum VADM of ~125 ZAm

  15. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    1988-01-01

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  17. A new approach to the Pontryagin maximum principle for nonlinear fractional optimal control problems

    NASA Astrophysics Data System (ADS)

    Ali, Hegagi M.; Pereira, Fernando Lobo; Gama, Sílvio M. A.

    2016-09-01

    In this paper, we discuss a new general formulation of fractional optimal control problems whose performance index is in fractional integral form and whose dynamics are given by a set of fractional differential equations in the Caputo sense. We use a new approach to prove necessary conditions of optimality in the form of a Pontryagin maximum principle for fractional nonlinear optimal control problems. Moreover, a new method based on a generalization of the Mittag-Leffler function is used to solve this class of fractional optimal control problems. A simple example is provided to illustrate the effectiveness of our main result.
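
    For readers who want to experiment numerically, the two-parameter Mittag-Leffler function underlying such solution methods, E_{a,b}(z) = sum_{k>=0} z^k / Gamma(a*k + b), can be evaluated for moderate arguments by direct series summation; the truncation tolerance below is an arbitrary choice.

        from math import exp, gamma

        def mittag_leffler(z, a, b=1.0, tol=1e-12, kmax=200):
            """Truncated series for E_{a,b}(z); adequate for moderate |z| only."""
            s = 0.0
            for k in range(kmax):
                term = z ** k / gamma(a * k + b)
                s += term
                if abs(term) < tol:
                    break
            return s

        # Sanity check: E_{1,1}(z) reduces to exp(z).
        print(mittag_leffler(1.0, 1.0), exp(1.0))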

  18. Optimizing scheduling problem using an estimation of distribution algorithm and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Qun, Jiang; Yang, Ou; Dong, Shi-Du

    2007-12-01

    This paper presents a methodology for using heuristic search methods to optimize a scheduling problem. Specifically, an Estimation of Distribution Algorithm (EDA), namely Population-Based Incremental Learning (PBIL), and a Genetic Algorithm (GA) have been applied to finding effective arrangements of university curriculum schedules. To our knowledge, EDAs have been applied to fewer real-world problems compared to GAs, and the goal of the present paper is to expand the application domain of this technique. The experimental results indicate a good applicability of PBIL to the scheduling problem.
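
    PBIL maintains a probability vector over binary genes rather than an explicit recombining population; the compact sketch below illustrates the update rule on a generic bit-string objective (a OneMax toy function stands in for the timetable fitness, and the learning rate and clipping bounds are assumptions).

        import numpy as np

        def fitness(bits):                  # placeholder for timetable quality
            return bits.sum()               # OneMax: maximize the number of 1s

        rng = np.random.default_rng(3)
        n_bits, pop, lr, iters = 32, 50, 0.1, 200
        p = np.full(n_bits, 0.5)            # PBIL probability vector

        for _ in range(iters):
            samples = (rng.random((pop, n_bits)) < p).astype(int)
            best = samples[np.argmax([fitness(s) for s in samples])]
            p = (1 - lr) * p + lr * best    # shift the vector toward the best sample
            p = np.clip(p, 0.02, 0.98)      # keep some exploration alive

        print("learned vector rounds to:", (p > 0.5).astype(int))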

  19. Fault location of underground distribution network based on RBF network optimized by improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhao, Min

    2013-03-01

    To solve the difficult problem of locating single-phase ground faults in coal mine underground distribution networks, a fault location method is presented that uses an RBF network optimized by an improved PSO algorithm, based on the mapping between the wavelet packet transform modulus maxima of the transient zero-sequence current in specific frequency bands of the faulted line and the fault point position. Simulation results for different transition resistances and fault distances show that the RBF network optimized by the improved PSO algorithm can obtain accurate and reliable fault location results, and that its fault location performance is better than that of a traditional RBF network.

  20. Target point correction optimized based on the dose distribution of each fraction in daily IGRT

    NASA Astrophysics Data System (ADS)

    Stoll, Markus; Giske, Kristina; Stoiber, Eva M.; Schwarz, Michael; Bendl, Rolf

    2014-03-01

    Purpose: To use daily re-calculated dose distributions for optimization of target point corrections (TPCs) in image guided radiation therapy (IGRT). This aims to adapt fractionated intensity modulated radiation therapy (IMRT) to changes in the dose distribution induced by anatomical changes. Methods: Daily control images from an in-room on-rail spiral CT scanner of three head-and-neck cancer patients were analyzed. The dose distribution was re-calculated on each control CT after an initial TPC, found by a rigid image registration method. The clinical target volumes (CTVs) were transformed from the planning CT to the rigidly aligned control CTs using a deformable image registration method. If at least 95% of each transformed CTV was covered by the initially planned D95 value, the TPC was considered acceptable. Otherwise, the TPC was iteratively altered to maximize the dose coverage of the CTVs. Results: In 14 (out of 59) fractions the criterion was already fulfilled after the initial TPC. In 10 fractions the TPC could be optimized to fulfill the coverage criterion. In 31 fractions the coverage could be increased but the criterion was not fulfilled. In another 4 fractions the coverage could not be increased by the TPC optimization. Conclusions: The dose coverage criterion allows selection of patients who would benefit from replanning. Using the criterion to include daily re-calculated dose distributions in the TPC reduces the replanning rate in the three analysed patients from 76% to 59% compared to the rigid image registration TPC.
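
    The 95% coverage test is straightforward to state in code; the sketch below checks it on a voxelized dose grid, assuming the recalculated dose array and the deformably transformed CTV mask are already available (the grid and mask here are synthetic).

        import numpy as np

        def coverage_ok(dose, ctv_mask, d95, frac=0.95):
            """True if at least `frac` of CTV voxels receive the planned D95 dose."""
            return (dose[ctv_mask] >= d95).mean() >= frac

        # Synthetic illustration: a 3D dose grid and a spherical CTV mask.
        rng = np.random.default_rng(4)
        dose = rng.normal(2.0, 0.1, (40, 40, 40))   # Gy per fraction, made up
        zyx = np.indices(dose.shape) - 20
        ctv = (zyx ** 2).sum(axis=0) < 8 ** 2
        print(coverage_ok(dose, ctv, d95=1.9))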

  1. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way. PMID:26497359

  2. Optimization Model of Water Distribution System Using Heuristic Algorithm to Sketch Vulnerability Map

    NASA Astrophysics Data System (ADS)

    Hsieh, Y.-C.; Tung, C.-P.

    2012-04-01

    With the development of society, more and more people move to cities, which results in a heavy increase in clean water demand. The study area, Kaohsiung city, is the second largest city in Taiwan, with many industrial activities, and its rainfall is extremely concentrated in the wet season. The ratio of rainfall between the wet and dry seasons is 9:1, and thus supplying water for agriculture, industry and a growing population becomes more and more difficult as the city develops further, especially when droughts happen. To solve the above problem, a robust pipe network with a well-designed water supply distribution system becomes much more important. The purpose of this research is to find the optimal solution for the water distribution system, which decides the amount of water for all nodes in the network and results in the smallest number of people affected by water shortage when the network faces droughts of different degrees. In this research, EPANET2 is used to simulate water distribution for different drought conditions. In addition, the EPANET2 simulation and GIS population information are combined to calculate how many people are affected at each node in the network; the total number of people affected in the whole city can then be computed for each water distribution alternative. Finally, a heuristic algorithm is applied to find the optimal solution for different degrees of drought. Furthermore, by comparing the optimal solutions, a water supply vulnerability map can be drawn for Kaohsiung city, which reveals the weaker parts of Kaohsiung that should be strengthened first for future extreme climate. Keywords: Water Supply Distribution System, Heuristic Algorithm, EPANET2, Vulnerability Map, and Optimization Model

  3. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.

  4. Optimization or Simulation? Comparison of approaches to reservoir operation on the Senegal River

    NASA Astrophysics Data System (ADS)

    Raso, Luciano; Bader, Jean-Claude; Pouget, Jean-Christophe; Malaterre, Pierre-Olivier

    2015-04-01

    The design of reservoir operation rules traditionally follows two approaches: optimization and simulation. In simulation, the analyst hypothesizes operation rules and selects them by what-if analysis based on the effects of model simulations on different objective indicators. In optimization, the analyst selects operational objective indicators, obtaining operation rules as an output. Optimization rules guarantee optimality, but they often require further model simplification and can be hard to communicate. Selecting the most appropriate approach depends on the system under analysis and on the analyst's expertise and objectives. We present the advantages and disadvantages of both approaches, and we test them on the design of the Manantali reservoir operation rule, on the Senegal River, West Africa. We compare their performance in attaining the system objectives. Objective indicators are defined a priori in order to quantify the system performance. Results from this application are not universally generalizable to the entire class of such systems, but they allow us to draw conclusions on this system and to give further information on the application of the two approaches.

  5. The multidisciplinary design optimization of a distributed propulsion blended-wing-body aircraft

    NASA Astrophysics Data System (ADS)

    Ko, Yan-Yee Andy

    The purpose of this study is to examine the multidisciplinary design optimization (MDO) of a distributed propulsion blended-wing-body (BWB) aircraft. The BWB is a hybrid shape resembling a flying wing, placing the payload in the inboard sections of the wing. The distributed propulsion concept involves replacing a small number of large engines with many smaller engines. The distributed propulsion concept considered here ducts part of the engine exhaust to exit out along the trailing edge of the wing. The distributed propulsion concept affects almost every aspect of the BWB design. Methods to model these effects and integrate them into an MDO framework were developed. The most important effect modeled is the impact on the propulsive efficiency. There has been conjecture that there will be an increase in propulsive efficiency when there is blowing out of the trailing edge of a wing. A mathematical formulation was derived to explain this. The formulation showed that the jet 'fills in' the wake behind the body, improving the overall aerodynamic/propulsion system and resulting in an increased propulsive efficiency. The distributed propulsion concept also replaces the conventional elevons with a vectored thrust system for longitudinal control. An extension of Spence's Jet Flap theory was developed to estimate the effects of this vectored thrust system on the aircraft longitudinal control. It was found to provide a reasonable estimate of the control capability of the aircraft. An MDO framework was developed, integrating all the distributed propulsion effects modeled. Using a gradient-based optimization algorithm, the distributed propulsion BWB aircraft was optimized and compared with a similarly optimized conventional BWB design. Both designs are for an 800-passenger, Mach 0.85 cruise, 7000 nmi mission. The MDO results found that the distributed propulsion BWB aircraft has a 4% takeoff gross weight and a 2% fuel weight. Both designs have similar planform shapes

  6. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    PubMed

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction, with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy successfully found the lowest-energy operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system to minimize energy use incurred cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on the water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization. PMID:26461069

  7. The 15-meter antenna performance optimization using an interdisciplinary approach

    NASA Technical Reports Server (NTRS)

    Grantham, William L.; Schroeder, Lyle C.; Bailey, Marion C.; Campbell, Thomas G.

    1988-01-01

    A 15-meter diameter deployable antenna has been built and is being used as an experimental test system with which to develop interdisciplinary controls, structures, and electromagnetics technology for large space antennas. The program objective is to study interdisciplinary issues important in optimizing large space antenna performance for a variety of potential users. The 15-meter antenna utilizes a hoop column structural concept with a gold-plated molybdenum mesh reflector. One feature of the design is the use of adjustable control cables to improve the paraboloid reflector shape. Manual adjustment of the cords after initial deployment improved surface smoothness relative to the build accuracy from 0.140 in. RMS to 0.070 in. Preliminary structural dynamics tests and near-field electromagnetic tests were made. The antenna is now being modified for further testing. Modifications include addition of a precise motorized control cord adjustment system to make the reflector surface smoother and an adaptive feed for electronic compensation of reflector surface distortions. Although the previous test results show good agreement between calculated and measured values, additional work is needed to study modelling limits for each discipline, evaluate the potential of adaptive feed compensation, and study closed-loop control performance in a dynamic environment.

  8. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
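
    The estimation rule described above, choosing the parameters that make the smallest compliance margin as large as possible, is a max-min program; assuming each requirement is reduced to a scalar margin function of the parameters, it can be prototyped by minimizing the negative worst-case margin with a derivative-free method (the margin functions below are invented for illustration).

        import numpy as np
        from scipy.optimize import minimize

        def margins(theta):
            # Hypothetical compliance margins: g_i(theta) >= 0 means "requirement met".
            return np.array([1.0 - abs(theta[0] - 0.5),
                             1.0 - (theta[1] - 0.2) ** 2,
                             0.8 - abs(theta[0] * theta[1])])

        worst = lambda theta: -np.min(margins(theta))   # maximize the smallest margin
        res = minimize(worst, x0=np.zeros(2), method="Nelder-Mead")
        print("estimate:", res.x, "smallest margin:", -res.fun)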

  9. Optimization Approaches for Designing a Novel 4-Bit Reversible Comparator

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-gui; Zhang, Man-qun; Wu, Qian; Li, Yan-cheng

    2013-02-01

    Reversible logic is a rapidly developing research field that has received much attention in recent years for computing with minimal energy consumption. This paper constructs a new 4×4 reversible gate, called the ZRQ gate, to build a quantum adder and subtractor. Meanwhile, a novel 1-bit reversible comparator using the proposed ZRQC module, built on the ZRQ gate, is proposed with the minimum number of reversible gates and minimum quantum cost. In addition, this paper presents a novel 4-bit reversible comparator based on the 1-bit reversible comparator. One of the vital considerations in optimizing reversible logic is to design reversible logic circuits with the minimum number of parameters. The proposed reversible comparators are superior in terms of the number of reversible gates, input constants, garbage outputs, unit delays and quantum costs compared with existing circuits. Finally, MATLAB simulation software is used to test and verify the correctness of the proposed 4-bit reversible comparator.

  10. Dynamic Range Size Analysis of Territorial Animals: An Optimality Approach.

    PubMed

    Tao, Yun; Börger, Luca; Hastings, Alan

    2016-10-01

    Home range sizes of territorial animals are often observed to vary periodically in response to seasonal changes in foraging opportunities. Here we develop the first mechanistic model focused on the temporal dynamics of home range expansion and contraction in territorial animals. We demonstrate how simple movement principles can lead to a rich suite of range size dynamics, by balancing foraging activity with defensive requirements and incorporating optimal behavioral rules into mechanistic home range analysis. Our heuristic model predicts three general temporal patterns that have been observed in empirical studies across multiple taxa. First, a positive correlation between age and territory quality promotes shrinking home ranges over an individual's lifetime, with maximal range size variability shortly before the adult stage. Second, poor sensory information, low population density, and large resource heterogeneity may all independently facilitate range size instability. Finally, aggregation behavior toward forage-rich areas helps produce divergent home range responses between individuals from different age classes. This model has broad applications for addressing important unknowns in animal space use, with potential applications also in conservation and health management strategies. PMID:27622879

  11. Integrated Data-Archive and Distributed Hydrological Modelling System for Optimized Dam Operation

    NASA Astrophysics Data System (ADS)

    Shibuo, Yoshihiro; Jaranilla-Sanchez, Patricia Ann; Koike, Toshio

    2013-04-01

    In 2012, typhoon Bopha, which passed through the southern part of the Philippines, devastated the nation, leaving hundreds dead and causing significant destruction across the country. Indeed, deadly cyclone-related events occur almost every year in the region. Such extremes are expected to increase both in frequency and magnitude around Southeast Asia during the course of global climate change. Our ability to confront such hazardous events is limited by the best available engineering infrastructure and the performance of weather prediction. An example of a countermeasure strategy is early release of reservoir water (lowering the dam water level) during the flood season to protect the downstream region from an impending flood. However, over-release of reservoir water affects the regional economy adversely by losing water resources that still have value for power generation and agricultural and industrial water use. Furthermore, accurate precipitation forecasting is itself a difficult task, due to the chaotic nature of the atmosphere yielding uncertainty in model prediction over time. Under these circumstances we present a novel approach to optimize the contradicting objectives of preventing flood damage via a priori dam release while sustaining sufficient water supply during predicted storm events. By evaluating the forecast performance of Meso-Scale Model Grid Point Value against observed rainfall, uncertainty in model prediction is probabilistically taken into account, and it is then applied to the next GPV issuance for generating ensemble rainfalls. The ensemble rainfalls drive the coupled land-surface and distributed-hydrological model to derive the ensemble flood forecast. Together with dam status information taken into account, our integrated system estimates the most desirable a priori dam release through the shuffled complex evolution algorithm. The strength of the optimization system is further magnified by the online link to the Data Integration and

  12. A quality by design approach to optimization of emulsions for electrospinning using factorial and D-optimal designs.

    PubMed

    Badawi, Mariam A; El-Khordagui, Labiba K

    2014-07-16

    Emulsion electrospinning is a multifactorial process used to generate nanofibers loaded with hydrophilic drugs or macromolecules for diverse biomedical applications. Emulsion electrospinnability is greatly impacted by the emulsion pharmaceutical attributes. The aim of this study was to apply a quality by design (QbD) approach based on design of experiments as a risk-based proactive approach to achieve predictable critical quality attributes (CQAs) in w/o emulsions for electrospinning. Polycaprolactone (PCL)-thickened w/o emulsions containing doxycycline HCl were formulated using a Span 60/sodium lauryl sulfate (SLS) emulsifier blend. The identified emulsion CQAs (stability, viscosity and conductivity) were linked with electrospinnability using a 3³ factorial design to optimize emulsion composition for phase stability and a D-optimal design to optimize stable emulsions for viscosity and conductivity after shifting the design space. The three independent variables, emulsifier blend composition, organic:aqueous phase ratio and polymer concentration, had a significant effect (p<0.05) on emulsion CQAs, the emulsifier blend composition exerting prominent main and interaction effects. Scanning electron microscopy (SEM) of emulsion-electrospun NFs and desirability functions allowed modeling of emulsion CQAs to predict electrospinnable formulations. A QbD approach successfully built quality into electrospinnable emulsions, allowing development of hydrophilic drug-loaded nanofibers with desired morphological characteristics. PMID:24704153
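
    A 3³ full factorial design enumerates all 27 combinations of the three independent variables at three levels; generating the run table is a short exercise with itertools (the factor names follow the abstract, while the coded levels are placeholders for the actual settings).

        from itertools import product

        factors = {
            "emulsifier_blend": [-1, 0, 1],   # coded levels; actual ratios omitted
            "phase_ratio":      [-1, 0, 1],
            "polymer_conc":     [-1, 0, 1],
        }
        runs = list(product(*factors.values()))
        print(len(runs))    # 27 runs for a 3^3 design
        print(runs[:3])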

  13. Precision and the approach to optimality in quantum annealing processors

    NASA Astrophysics Data System (ADS)

    Johnson, Mark W.

    The last few years have seen both a significant technological advance towards the practical application of, and a growing scientific interest in the underlying behaviour of, quantum annealing (QA) algorithms. A series of commercially available QA processors, most recently the D-Wave 2X™ 1000-qubit processor, have provided a valuable platform for empirical study of QA at a non-trivial scale. From this it has become clear that misspecification of Hamiltonian parameters is an important performance consideration, both for the goal of studying the underlying physics of QA and for that of building a practical and useful QA processor. The empirical study of the physics of QA requires a way to look beyond Hamiltonian misspecification. Recently, a solver metric called 'time-to-target' was proposed as a way to compare quantum annealing processors to classical heuristic algorithms. This approach puts emphasis on analyzing a solver's short-time approach to the ground state. In this presentation I will review the processor technology, based on superconducting flux qubits, and some of the known sources of error in Hamiltonian specification. I will then discuss recent advances in reducing Hamiltonian specification error, as well as review the time-to-target metric and empirical results analyzed in this way.

  14. Optimization Approaches for Designing Quantum Reversible Arithmetic Logic Unit

    NASA Astrophysics Data System (ADS)

    Haghparast, Majid; Bolhassani, Ali

    2016-03-01

    Reversible logic has emerged in recent years as a promising alternative for applications in low-power design and quantum computation, due to its ability to reduce power dissipation, an important research area in low-power VLSI and ULSI design. Many important contributions have been made in the literature towards reversible implementations of arithmetic and logical structures; however, there have not been many efforts directed towards efficient approaches for designing reversible Arithmetic Logic Units (ALUs). In this study, three efficient approaches are presented and their implementations in the design of reversible ALUs are demonstrated. Three new designs of a reversible one-digit arithmetic logic unit for quantum arithmetic are presented in this article. This paper provides an explicit construction of a reversible ALU effecting basic arithmetic operations with respect to the minimization of cost metrics. Architectures are proposed in which each block is realized using elementary quantum logic gates. Then, reversible implementations of the proposed designs are analyzed and evaluated. The results demonstrate that the proposed designs are cost-effective compared with their existing counterparts. All the scales are in the nanometric area.

  15. Physiological approach to optimal stereographic game programming: a technical guide

    NASA Astrophysics Data System (ADS)

    Martens, William L.; McRuer, Robert; Childs, C. Timothy; Viirree, Erik

    1996-04-01

    With the advent of mass distribution of consumer VR games comes an imperative to set health and safety standards for the hardware and software used to deliver stereographic content. This is particularly important for game developers who intend to present this stereographic content via head-mounted display (HMD). The visual discomfort that is commonly reported by users of HMD-based VR games presumably could be kept to a minimum if game developers were provided with standards for the display of stereographic imagery. In this paper, we draw upon both results of research in binocular vision and practical methods from clinical optometry to develop some technical guidelines for programming stereographic games that have the end user's comfort and safety in mind. This paper provides general strategies for user-centered implementation of 3D virtual worlds, as well as pictorial examples demonstrating a natural means for rendering stereographic imagery more comfortable to view in games employing a first-person perspective.

  16. Academic Departmental Management: An Application of an Interactive Multicriterion Optimization Approach.

    ERIC Educational Resources Information Center

    Geoffrion, A. M.; And Others

    This paper presents the conceptual development and application of a new interactive approach for multicriterion optimization to the aggregate operating problem of an academic department. This approach provides a mechanism for assisting an administrator in determining resource allocation decisions and only requires local trade-off and preference…

  17. A simple reliability-based topology optimization approach for continuum structures using a topology description function

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wen, Guilin; Zhi Zuo, Hao; Qing, Qixiang

    2016-07-01

    The structural configuration obtained by deterministic topology optimization may represent a low reliability level and lead to a high failure rate. Therefore, it is necessary to take reliability into account for topology optimization. By integrating reliability analysis into topology optimization problems, a simple reliability-based topology optimization (RBTO) methodology for continuum structures is investigated in this article. The two-layer nesting involved in RBTO, which is time consuming, is decoupled by the use of a particular optimization procedure. A topology description function approach (TOTDF) and a first order reliability method are employed for topology optimization and reliability calculation, respectively. The problem of the non-smoothness inherent in TOTDF is dealt with using two different smoothed Heaviside functions and the corresponding topologies are compared. Numerical examples demonstrate the validity and efficiency of the proposed improved method. In-depth discussions are also presented on the influence of different structural reliability indices on the final layout.
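
    The abstract does not give the two smoothed Heaviside functions explicitly; two standard regularizations with bandwidth eps, one polynomial and one tanh-based, are sketched below for comparison (these specific forms are common choices, assumed here rather than taken from the paper).

        import numpy as np

        def heaviside_poly(x, eps):
            """C^2 quintic smoothing on [-eps, eps], a common TDF regularization."""
            t = x / eps
            h = 0.5 + (15 * t - 10 * t**3 + 3 * t**5) / 16
            return np.where(x < -eps, 0.0, np.where(x > eps, 1.0, h))

        def heaviside_tanh(x, eps):
            return 0.5 * (1.0 + np.tanh(x / eps))

        x = np.linspace(-1.0, 1.0, 9)
        print(heaviside_poly(x, 0.5))
        print(heaviside_tanh(x, 0.5))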

  18. An analytical approach for gain optimization in multimode fiber Raman amplifiers.

    PubMed

    Zhou, Junhe

    2014-09-01

    In this paper, an analytical approach is proposed to minimize the mode-dependent gain as well as the wavelength-dependent gain of multimode fiber Raman amplifiers (MFRAs). It is shown that the optimal power integrals at the corresponding modes and wavelengths can be obtained by the non-negative least squares method (NNLSM). The corresponding input pump powers can be calculated afterwards using the shooting method. It is demonstrated that if the power overlap integrals are not wavelength dependent, the optimization can be further simplified by decomposing the optimization problem into two sub-optimization problems, i.e., the optimization of the gain ripple with respect to the modes and with respect to the wavelengths. The optimization results closely match the ones in recent publications. PMID:25321517
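
    The core step, solving for non-negative pump-power weights, maps directly onto SciPy's NNLS routine; the toy instance below uses a random coupling matrix as a stand-in for the mode/wavelength overlap structure of a real amplifier.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(5)
        A = rng.random((6, 4))          # toy gain matrix: 6 targets, 4 pump weights
        b = np.full(6, 1.0)             # flat target gain across modes and wavelengths
        powers, residual = nnls(A, b)   # non-negative least squares fit
        print(powers, residual)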

  19. Fractional System Identification: An Approach Using Continuous Order-Distributions

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Lorenzo, Carl F.

    1999-01-01

    This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.

  20. Access Control for Agent-based Computing: A Distributed Approach.

    ERIC Educational Resources Information Center

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  1. Optimizing hereditary angioedema management through tailored treatment approaches.

    PubMed

    Nasr, Iman H; Manson, Ania L; Al Wahshi, Humaid A; Longhurst, Hilary J

    2016-01-01

    Hereditary angioedema (HAE) is a rare but serious and potentially life threatening autosomal dominant condition caused by low or dysfunctional C1 esterase inhibitor (C1-INH) or uncontrolled contact pathway activation. Symptoms are characterized by spontaneous, recurrent attacks of subcutaneous or submucosal swellings typically involving the face, tongue, larynx, extremities, genitalia or bowel. The prevalence of HAE is estimated to be 1:50,000 without known racial differences. It causes psychological stress as well as significant socioeconomic burden. Early treatment and prevention of attacks are associated with better patient outcome and lower socioeconomic burden. New treatments and a better evidence base for management are emerging which, together with a move from hospital-centered to patient-centered care, will enable individualized, tailored treatment approaches. PMID:26496459

  2. Optimal Capacity and Location Assessment of Natural Gas Fired Distributed Generation in Residential Areas

    NASA Astrophysics Data System (ADS)

    Khalil, Sarah My

    With the ever-increasing use of natural gas to generate electricity, natural gas fired microturbines are being installed in residential areas to generate electricity locally. This research work discusses a generalized methodology for assessing the optimal capacity and locations for installing natural gas fired microturbines in a residential distribution network. The overall objective is to place microturbines so as to minimize the power loss occurring in the electrical distribution network, in such a way that the electric feeder does not need any upgrading. The IEEE 123 Node Test Feeder is selected as the test bed for validating the developed methodology. Three-phase unbalanced electric power flow is run in OpenDSS through a COM server, and the gas distribution network is analyzed using GASWorkS. A continual sensitivity analysis methodology is developed to select multiple DG locations, and an annual simulation is run to minimize annual average losses. The proposed placement of microturbines must be feasible in the gas distribution network and should not require gas pipeline reinforcement. The corresponding gas distribution network is developed in the GASWorkS software, and nodal pressures of the gas system are checked for various cases to investigate whether the existing gas distribution network can accommodate the penetration of the selected microturbines. The results indicate the optimal locations suitable for placing microturbines and the capacity that can be accommodated by the system, based on the consideration of overall minimum annual average losses as well as the guarantee of nodal pressure provided by the gas distribution network. The proposed method is generalized and can be used for any IEEE test feeder or an actual residential distribution network.

  3. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    SciTech Connect

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming; Elizondo, Marcelo A.

    2013-04-03

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
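
    A toy receding-horizon dispatch in the spirit of the strategy above can be prototyped with cvxpy; the horizon, cost weights, limits, and wind forecast below are all invented, and in closed loop only the first step of each solved horizon would be applied before re-solving.

        import numpy as np
        import cvxpy as cp

        T = 12                                      # look-ahead steps, 1 h each (toy)
        load = np.full(T, 5.0)                      # MW, assumed constant demand
        wind = 3.0 + np.sin(np.linspace(0, 3, T))   # MW, synthetic forecast

        g = cp.Variable(T, nonneg=True)             # diesel generator output
        soc = cp.Variable(T + 1)                    # storage state of charge, MWh
        ch = cp.Variable(T)                         # storage power (+charge/-discharge)

        cons = [soc[0] == 2.0, soc >= 0, soc <= 4.0, cp.abs(ch) <= 1.0]
        for t in range(T):
            cons += [soc[t + 1] == soc[t] + ch[t],       # lossless storage (toy)
                     g[t] + wind[t] - ch[t] == load[t]]  # power balance
        fuel = cp.sum(0.5 * g + 0.1 * cp.square(g))      # assumed convex fuel cost
        cp.Problem(cp.Minimize(fuel), cons).solve()
        print(g.value.round(2))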

  5. Model Predictive Optimal Control of a Time-Delay Distributed-Parameter Systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents an optimal control method for a class of distributed-parameter systems governed by first order, quasilinear hyperbolic partial differential equations that arise in many physical systems. Such systems are characterized by time delays since information is transported from one state to another by wave propagation. A general closed-loop hyperbolic transport model is controlled by a boundary control embedded in a periodic boundary condition. The boundary control is subject to a nonlinear differential equation constraint that models actuator dynamics of the system. The hyperbolic equation is thus coupled with the ordinary differential equation via the boundary condition. Optimality of this coupled system is investigated using variational principles to seek an adjoint formulation of the optimal control problem. The results are then applied to implement a model predictive control design for a wind tunnel to eliminate a transport delay effect that causes a poor Mach number regulation.

  6. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  7. Design of 3-dimensional complex airplane configurations with specified pressure distribution via optimization

    NASA Technical Reports Server (NTRS)

    Kubrynski, Krzysztof

    1991-01-01

    A subcritical panel method applied to flow analysis and aerodynamic design of complex aircraft configurations is presented. The analysis method is based on linearized, compressible, subsonic flow equations and indirect Dirichlet boundary conditions. Quadratic dipole and linear source distributions on flat panels are applied. In the case of aerodynamic design, the geometry which minimizes the differences between the design and actual pressure distributions is found iteratively, using a numerical optimization technique. Geometry modifications are modeled by the surface transpiration concept. Constraints with respect to the resulting geometry can be specified. A number of complex 3-dimensional design examples are presented. The software has been adapted to personal computers, and as a result an unexpectedly low cost of computation is obtained.

  8. Optimizations on supply and distribution of dissolved oxygen in constructed wetlands: A review.

    PubMed

    Liu, Huaqing; Hu, Zhen; Zhang, Jian; Ngo, Huu Hao; Guo, Wenshan; Liang, Shuang; Fan, Jinlin; Lu, Shaoyong; Wu, Haiming

    2016-08-01

    Dissolved oxygen (DO) is one of the most important factors influencing pollutant removal in constructed wetlands (CWs). However, problems of insufficient oxygen supply and inappropriate oxygen distribution commonly exist in traditional CWs. Detailed analyses of DO supply and distribution characteristics in different types of CWs are introduced. It can be concluded that atmospheric reaeration (AR) serves as the most promising point for oxygen intensification. The paper summarizes possible optimizations of DO in CWs to improve their decontamination performance. Process optimizations (tidal flow, drop aeration, artificial aeration, hybrid systems) and parameter optimizations (plant, substrate and operating conditions) are discussed in detail. Since economic and technical shortcomings are still cited in current studies, the review concludes with future prospects for oxygen research in CWs. PMID:27177713

  9. Optimal Allocation of Distributed Generation Minimizing Loss and Voltage Sag Problem-Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Biswas, S.; Goswami, S. K.

    2010-10-01

    In the present paper an attempt has been made to place distributed generation at an optimal location so as to improve technical as well as economic performance. Among technical issues, the sag performance and the loss have been considered. A genetic algorithm has been used as the optimization technique in this problem. For sag analysis, the impact of 3-phase symmetrical short circuit faults is considered. The total load disturbed during the faults is considered as an indicator of sag performance. The solution algorithm is demonstrated on a 34-bus radial distribution system with some lateral branches. For simplicity, only one DG of predefined capacity is considered. MATLAB has been used as the programming environment.

  10. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    NASA Astrophysics Data System (ADS)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of a reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio compared with JPEG-LS.
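
    The "simple arithmetic operations" that replace stored code tables can be illustrated with a Golomb-Rice coder: residuals from a two-sided geometric (discretized Laplacian) source are zigzag-mapped to non-negative integers and coded with a power-of-two Golomb parameter. The sketch below fixes the parameter by hand, whereas a real coder would estimate it per context.

        def zigzag(e):
            """Map signed residual to non-negative int: 0,-1,1,-2,2 -> 0,1,2,3,4."""
            return 2 * e if e >= 0 else -2 * e - 1

        def rice_encode(n, k):
            """Golomb-Rice code, parameter 2**k: unary quotient, then k remainder bits."""
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")

        residuals = [0, -1, 3, -2, 1]
        k = 1                              # fixed parameter, an assumption
        print("".join(rice_encode(zigzag(e), k) for e in residuals))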

  11. Optimizing the bandwidth and noise performance of distributed multi-pump Raman amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xueming; Li, Yanhe

    2004-02-01

    Based on a hybrid genetic algorithm (HGA), the signal bandwidth of distributed multi-pump Raman amplifiers is optimized, and the corresponding noise figure is obtained. The results show that: (1) the optimal signal bandwidth Δλ decreases with increasing span length L, e.g., Δλ is 79.6 nm for L=50 km and 41.5 nm for L=100 km under our simulated conditions; (2) the relationship between Δλ and L is approximately linear; (3) the equivalent noise figure can be negative and increases as L is extended; (4) there are one or several global maxima of the signal bandwidth under given conditions; (5) to realize a fixed Δλ, several candidate designs can be obtained by means of the HGA, which has important applications in the practical design of distributed multi-pump Raman amplifiers.

  12. Optimal Capacitor Placement in Radial Distribution Feeders Using Fuzzy-Differential Evolution for Dynamic Load Condition

    NASA Astrophysics Data System (ADS)

    Kannan, S. M.; Renuga, P.; Kalyani, S.; Muthukumaran, E.

    2015-12-01

    This paper proposes new methods to select the optimal values of fixed and switched shunt capacitors in radial distribution feeders for varying load conditions, so as to maximize annual savings and minimize energy loss while taking the capacitor cost into account. The identification of the weak buses, where the capacitors should be placed, is decided by a set of rules given by a fuzzy expert system. Then the sizing of the fixed and switched capacitors is modeled using differential evolution (DE) and particle swarm optimization (PSO). A case study with an existing 15-bus rural distribution feeder is presented to illustrate the applicability of the algorithm. Simulation results show better cost savings than a previous capacitor placement algorithm.
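
    Once the fuzzy stage has shortlisted weak buses, the sizing stage is a bounded continuous search; SciPy's built-in differential evolution can prototype it, with a stand-in objective combining an assumed loss term and a linear capacitor cost (the coefficients and bounds are illustrative only).

        import numpy as np
        from scipy.optimize import differential_evolution

        def annual_cost(kvar):
            # Placeholder objective: a convex 'loss' term plus linear capacitor cost.
            loss = 1e-3 * np.sum((kvar - np.array([300.0, 150.0])) ** 2)
            capex = 0.5 * np.sum(kvar)
            return loss + capex

        bounds = [(0, 600), (0, 600)]     # kvar limits at two fuzzy-selected buses
        res = differential_evolution(annual_cost, bounds, seed=6, tol=1e-8)
        print(res.x.round(1), round(res.fun, 2))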

  13. Identifying the optimal spatially and temporally invariant root distribution for a semiarid environment

    NASA Astrophysics Data System (ADS)

    Sivandran, Gajan; Bras, Rafael L.

    2012-12-01

    In semiarid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Vegetation roots have strong control over this partitioning and, assuming a static root profile, predetermine the manner in which this partitioning is undertaken. A coupled, dynamic vegetation and hydrologic model, tRIBS + VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two spatially and temporally invariant rooting schemes: uniform, a one-parameter model, and logistic, a two-parameter model. The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semiarid Walnut Gulch Experimental Watershed (WGEW) in Arizona. A series of simulations was undertaken to explore the parameter space of both rooting schemes, and the optimal root distribution, defined as the root distribution with the maximum mean transpiration over a 100-yr period, was identified. This optimal root profile was determined for five generic soil textures and two plant-functional types (PFTs) to illustrate the role of soil texture in the partitioning of moisture at the land surface. The simulation results illustrate the strong control soil texture has on the partitioning of rainfall and consequently on the depth of the optimal rooting profile. High-conductivity soils resulted in the deepest optimal rooting profile, with land surface moisture fluxes dominated by transpiration. As we move toward the lower-conductivity end of the soil spectrum, a shallowing of the optimal rooting profile is observed and evaporation gradually becomes the dominant flux from the land surface. This study offers a methodology through which local plant, soil, and climate can be
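
    The two-parameter logistic profile is mentioned only by name; one common form, assumed here, specifies the fraction of roots deeper than depth z as R(z) = 1 / (1 + (z / z50)^c), where z50 is the depth above which half the roots lie and c controls the sharpness of the decay.

        import numpy as np

        def root_fraction_deeper(z, z50=0.3, c=2.0):
            """Assumed logistic profile: fraction of root mass deeper than z (m)."""
            return 1.0 / (1.0 + (z / z50) ** c)

        # Root mass fraction held in each soil layer between the given interfaces.
        z = np.linspace(0.0, 2.0, 5)
        frac = root_fraction_deeper(z[:-1]) - root_fraction_deeper(z[1:])
        print(frac)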

  14. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    2016-01-01

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach, a procedure based on two components: (a) a maintenance model is used to determine the optimal time to perform PM operations at each site, and (b) a mixed integer program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods, and dump trucks) in order to perform PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained cost savings of more than 50% in 90% of the sites. PMID:27438233

  15. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  16. Collimator angle influence on dose distribution optimization for vertebral metastases using volumetric modulated arc therapy

    SciTech Connect

    Mancosu, Pietro; Cozzi, Luca; Fogliata, Antonella; Lattuada, Paola; Reggiori, Giacomo; Cantone, Marie Claire; Navarria, Pierina; Scorsetti, Marta

    2010-08-15

    Purpose: The cylindrical symmetry of vertebrae favors the use of volumetric modulated arc therapy in generating a dose 'hole' at the center of the vertebrae, limiting the dose to the spinal cord. The authors have evaluated whether the collimator angle is a significant parameter for dose distribution optimization in vertebral metastases. Methods: Three patients with one to three vertebrae involved were considered. Twenty-one differently optimized plans (9 single-arc and 12 double-arc plans) were generated, testing various collimator angle positions. The clinical target volume (CTV) was defined as the whole vertebra, excluding the spinal cord canal. The planning target volume (PTV) was defined as CTV+5 mm. The dose prescription was 5 x 4 Gy, with normalization to the PTV mean dose. The dose to 1 cm³ of spinal cord was limited to 11.5 Gy. Results: The best plans in terms of target coverage and spinal cord sparing were achieved with two arcs and collimator angles of 80 deg. for Arc1 and 280 deg. for Arc2 in all the cases considered (i.e., leaf travel parallel to the primary orientation of the spinal cord). If one arc is used, only 80 deg. reached the objectives. Conclusions: This study demonstrated the role of collimator rotation in vertebral metastasis irradiation, with leaf travel parallel to the primary orientation of the spinal cord performing better than other solutions. An optimal choice of collimator angle thus increases the optimization freedom to shape a desired dose distribution.

  17. Fixed structure compensator design using a constrained hybrid evolutionary optimization approach.

    PubMed

    Ghosh, Subhojit; Samanta, Susovon

    2014-07-01

    This paper presents an efficient technique for designing a fixed-order compensator for the current mode control architecture of DC-DC converters. The compensator design is formulated as an optimization problem that seeks to attain a set of frequency domain specifications. The highly nonlinear nature of the optimization problem demands an initial-parameterization-independent global search technique. In this regard, the optimization problem is solved using a hybrid evolutionary optimization approach, chosen for its simple structure, faster execution time and greater probability of achieving the global solution. The proposed algorithm combines a population-based search approach, i.e. Particle Swarm Optimization (PSO), with a local search-based method. The op-amp dynamics have been incorporated during the design process. Considering the limitations of a fixed structure compensator in achieving loop bandwidth above a certain threshold, the proposed approach also determines the op-amp bandwidth that makes the desired loop bandwidth achievable. The effectiveness of the proposed approach in meeting the desired frequency domain specifications is experimentally tested on a peak current mode control DC-DC buck converter. PMID:24768082
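
    A minimal sketch of such a hybrid scheme, with a PSO global sweep followed by a Nelder-Mead local refinement (Python; the toy objective stands in for the actual frequency-domain compensator specifications):

        import numpy as np
        from scipy.optimize import minimize

        def hybrid_pso_nm(f, bounds, n_particles=20, iters=60, seed=1):
            # global PSO sweep followed by a Nelder-Mead polish of the best point
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, float).T
            x = rng.uniform(lo, hi, (n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[pval.argmin()]
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                improved = val < pval
                pbest[improved], pval[improved] = x[improved], val[improved]
                gbest = pbest[pval.argmin()]
            return minimize(f, gbest, method="Nelder-Mead").x

        # Toy stand-in for the frequency-domain design objective.
        f = lambda p: (p[0] - 2.0) ** 2 + 10.0 * (p[1] + 1.0) ** 2
        print(hybrid_pso_nm(f, bounds=[(-5, 5), (-5, 5)]))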

  18. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the differential equations involved in the generated nonlinear programming (NLP) problem, limits its wide application to the engineering optimization of industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve optimization efficiency for industrial dynamic processes; it employs costate gradient formulae and a fast approximate scheme to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are presented as illustrations. The results show that the proposed fast approach saves at least 90% of the computation time compared with the traditional CVP method, demonstrating the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes. PMID:26117286
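
    The CVP idea itself can be sketched as follows (Python/SciPy; the second-order toy plant, the cost and the segment count are illustrative assumptions, and no fast approximate ODE scheme is attempted):

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        # The control u(t) is piecewise constant on N segments of [0, T]; the
        # NLP decision vector holds the N segment values.
        N, T = 10, 1.0

        def terminal_state(u):
            rhs = lambda t, x: [x[1], -x[0] + u[min(int(t / (T / N)), N - 1)]]
            return solve_ivp(rhs, (0.0, T), [1.0, 0.0], max_step=T / N).y[:, -1]

        def cost(u):
            xT = terminal_state(u)            # drive the state to the origin
            return float(xT @ xT + 1e-2 * np.sum(u ** 2))

        res = minimize(cost, np.zeros(N), method="SLSQP", bounds=[(-2.0, 2.0)] * N)
        print(res.x.round(3), cost(res.x))

    Note that each cost evaluation re-integrates the dynamics, which is precisely the repeated computation that fast-CVP aims to reduce.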

  19. A Resource Constrained Distributed Constraint Optimization Method using Resource Constraint Free Pseudo-tree

    NASA Astrophysics Data System (ADS)

    Matsui, Toshihiro; Silaghi, Marius C.; Hirayama, Katsutoshi; Yokoo, Makoto; Matsuo, Hiroshi

    Cooperative problem solving with shared resources is important in practical multi-agent systems. Resource constraints are necessary for handling practical problems such as distributed task scheduling with limited resource availability. As a fundamental formalism for multi-agent cooperation, the Distributed Constraint Optimization Problem (DCOP) has been investigated. With DCOPs, the agent states and the relationships between agents are formalized into a constraint optimization problem. However, in the original DCOP framework, constraints on resources that are consumed by teams of agents are not well supported. A framework called the Resource Constrained Distributed Constraint Optimization Problem (RCDCOP) has recently been proposed. In RCDCOPs, a limit on resource usage is represented as an n-ary constraint. Previous research addressing RCDCOPs employs a pseudo-tree based solver; the pseudo-tree is an important graph structure for constraint networks and implies a partial ordering of the variables. However, n-ary constrained variables, which are placed on a single path of the pseudo-tree, decrease the efficiency of the solver. We propose another method using (i) a pseudo-tree that is generated ignoring resource constraints and (ii) virtual variables representing the usage of resources. Because the virtual variables increase the search space, (iii) we apply a set of upper/lower bounds inferred from the resource constraints to improve the pruning efficiency of the search. The efficiency of the proposed method is evaluated experimentally.
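
    Step (i), building a pseudo-tree while ignoring resource constraints, can be sketched with a plain depth-first search over the constraint graph (Python; the four-variable graph is illustrative, and virtual variables and bound inference are omitted):

        # DFS pseudo-tree over a constraint graph, ignoring resource constraints
        # (step (i)); remaining graph edges become back edges of the tree.
        def pseudo_tree(graph, root):
            parent, order, seen = {root: None}, [], set()
            def dfs(v):
                seen.add(v)
                order.append(v)
                for w in sorted(graph[v]):
                    if w not in seen:
                        parent[w] = v
                        dfs(w)
            dfs(root)
            return parent, order

        graph = {"x1": {"x2", "x3"}, "x2": {"x1", "x4"},
                 "x3": {"x1", "x4"}, "x4": {"x2", "x3"}}
        print(pseudo_tree(graph, "x1"))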

  20. Modelling and approaching pragmatic interoperability of distributed geoscience data

    NASA Astrophysics Data System (ADS)

    Ma, Xiaogang

    2010-05-01

    , intention, procedure, consequence, etc.) of local pragmatic contexts and are thus context-dependent. Eliminating these elements will inevitably lead to information loss in semantic mediation between local ontologies. Correspondingly, the understanding and effect of exchanged data in a new context may differ from those in its original context. Another problem is the dilemma of how to find a balance between flexibility and standardization of local ontologies, because ontologies are not fixed but continuously evolving. It is commonly recognized that we cannot use a unified ontology to replace all local ontologies, because they are context-dependent and need flexibility. However, without the coordination of standards, freely developed local ontologies and databases will require an enormous amount of mediation work between them. Finding a balance between standardization and flexibility for evolving ontologies, in a practical sense, requires negotiations (i.e. conversations, agreements and collaborations) between different local pragmatic contexts. The purpose of this work is to set up a computer-friendly model representing local pragmatic contexts (i.e. geodata sources), and to propose a practical semantic negotiation procedure for approaching pragmatic interoperability between local pragmatic contexts. Information agents, objective facts and subjective dimensions are reviewed as elements of a conceptual model for representing pragmatic contexts. The author uses them to outline a practical semantic negotiation procedure for approaching pragmatic interoperability of distributed geodata. The proposed conceptual model and semantic negotiation procedure were encoded with Description Logic, and then applied to analyze and manipulate semantic negotiations between different local ontologies within the National Mineral Resources Assessment (NMRA) project of China, which involves multi-source and multi-subject geodata sharing.

  1. Calculation of a double reactive azeotrope using stochastic optimization approaches

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2013-02-01

    A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The reactive azeotrope calculation is modeled as a nonlinear algebraic system comprising phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of calculating reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system of industrial interest with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a bidimensional subdomain, the identification of reactive azeotropes. A strategy for calculating multiple roots of nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
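
    A minimal sketch of the Luus-Jaakola adaptive random search applied to a residual system with multiple roots (Python; the residual is illustrative, not the MTBE equilibrium system):

        import numpy as np

        def luus_jaakola(f, lo, hi, iters=200, n_samples=25, shrink=0.95, seed=0):
            # adaptive random search: sample a box around the incumbent,
            # keep improvements, and contract the box every outer iteration
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x, r = (lo + hi) / 2.0, (hi - lo) / 2.0
            fx = f(x)
            for _ in range(iters):
                cand = np.clip(x + r * rng.uniform(-1, 1, (n_samples, len(x))),
                               lo, hi)
                vals = np.array([f(c) for c in cand])
                if vals.min() < fx:
                    x, fx = cand[vals.argmin()], vals.min()
                r *= shrink
            return x, fx

        # Stand-in residual system with multiple roots: minimize ||F(z)||^2,
        # restarting from different boxes to locate each root.
        F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 1.0, z[0] - z[1] ** 2])
        print(luus_jaakola(lambda z: float(F(z) @ F(z)), [-2, -2], [2, 2]))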

  2. Optimization of hydrological parameters of a distributed runoff model based on multiple flood events

    NASA Astrophysics Data System (ADS)

    Miyamoto, Mamoru; Matsumoto, Kazuhiro; Tsuda, Morimasa; Yamakage, Yuzuru; Iwami, Yoichi; Anai, Hirokazu

    2015-04-01

    The error sources in flood forecasting with a runoff model commonly include the input data, the model structure, and the parameter settings. This study focused on a calibration procedure to minimize errors due to parameter settings. Although many studies have addressed hydrological parameter optimization, they mostly concern individual optimization cases applying a specific optimization technique to a specific flood. Consequently, it is difficult to determine the most appropriate parameter set for forecasting future floods, because optimized parameter sets vary by flood type. This study therefore aimed to develop a comprehensive method for optimizing the hydrological parameters of a distributed runoff model for future flood forecasting. A distributed runoff model, PWRI-DHM, was applied to the Gokase River basin (1,820 km2) in Japan. The model, with gridded two-layer tanks covering the entire target river basin, includes hydrological parameters such as hydraulic conductivity, surface roughness and runoff coefficient, which are set according to land-use and soil-type distributions. Global data sets, e.g., Global Map and the DSMW (Digital Soil Map of the World), were employed as input data for elevation, land use and soil type. Thirteen optimization algorithms, such as GA, PSO and DEA, were carefully selected from seventy-four open-source algorithms available for public use. These algorithms were used with three error assessment functions to calibrate the parameters of the model to each of fifteen past floods within a predetermined search range. Fifteen optimized parameter sets corresponding to the fifteen past floods were determined by selecting the best sets from the calibration results in terms of reproduction accuracy. This process helped eliminate bias due to the type of optimization algorithm. Although the calibration results for each parameter were widely distributed across the search range, statistical significance was found in comparisons between the optimized parameters
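
    The per-event calibration loop can be sketched as follows (Python/SciPy; the convolution 'model', the synthetic flood events and the single NSE-based error function are illustrative stand-ins for PWRI-DHM and the thirteen algorithms):

        import numpy as np
        from scipy.optimize import differential_evolution

        # Three synthetic flood events; the 'truth' uses kernel (0.5, 0.3, 0.2).
        events = [dict(rain=np.random.RandomState(s).gamma(2.0, 2.0, 60))
                  for s in range(3)]
        for e in events:
            e["obs"] = np.convolve(e["rain"], [0.5, 0.3, 0.2])[:60]

        def simulate(params, rain):
            return np.convolve(rain, params)[:60]

        def neg_nse(params, event):           # error assessment function (-NSE)
            sim, obs = simulate(params, event["rain"]), event["obs"]
            return np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2) - 1.0

        for i, e in enumerate(events):        # one optimized set per flood event
            res = differential_evolution(neg_nse, [(0, 1)] * 3, args=(e,), seed=1)
            print(f"flood {i}: params={res.x.round(3)}, NSE={-res.fun:.3f}")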

  3. Optimal management of stationary lithium-ion battery system in electricity distribution grids

    NASA Astrophysics Data System (ADS)

    Purvins, Arturs; Sumner, Mark

    2013-11-01

    The present article proposes an optimal battery system management model in distribution grids for stationary applications. The main purpose of the management model is to maximise the utilisation of distributed renewable energy resources in distribution grids, preventing situations of reverse power flow in the distribution transformer. Secondly, battery management ensures efficient battery utilisation: charging at off-peak prices and discharging at peak prices when possible. This gives the battery system a shorter payback time. Management of the system requires predictions of residual distribution grid demand (i.e. demand minus renewable energy generation) and electricity price curves (e.g. for 24 h in advance). Results of a hypothetical study in Great Britain in 2020 show that the battery can contribute significantly to storing renewable energy surplus in distribution grids while being highly utilised. In a distribution grid with 25 households and an installed 8.9 kW wind turbine, a battery system with rated power of 8.9 kW and battery capacity of 100 kWh can store 7 MWh of 8 MWh wind energy surplus annually. Annual battery utilisation reaches 235 cycles in per unit values, where one unit is a full charge-depleting cycle depth of a new battery (80% of 100 kWh).
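
    A rule-based sketch of such a management policy (Python; the price thresholds, ratings and sign conventions are illustrative assumptions, and the article's prediction-driven 24 h optimization is reduced to simple rules):

        def dispatch(residual_demand_kw, price, soc_kwh, cap_kwh=100.0,
                     p_rate_kw=8.9, dt_h=1.0, cheap=0.05, dear=0.15):
            # +p charges, -p discharges; surplus absorption has priority over
            # price arbitrage, to prevent reverse flow at the transformer
            schedule = []
            for d, pr in zip(residual_demand_kw, price):
                if d < 0:                      # renewable surplus -> charge
                    p = min(p_rate_kw, -d, (cap_kwh - soc_kwh) / dt_h)
                elif pr >= dear:               # peak price -> discharge
                    p = -min(p_rate_kw, d, soc_kwh / dt_h)
                elif pr <= cheap:              # off-peak price -> charge
                    p = min(p_rate_kw, (cap_kwh - soc_kwh) / dt_h)
                else:
                    p = 0.0
                soc_kwh += p * dt_h
                schedule.append(round(p, 2))
            return schedule

        print(dispatch([-6.0, -2.0, 3.0, 7.0], [0.04, 0.06, 0.09, 0.20],
                       soc_kwh=20.0))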

  4. A methodological integrated approach to optimize a hydrogeological engineering work

    NASA Astrophysics Data System (ADS)

    Loperte, A.; Satriani, A.; Bavusi, M.; Cerverizzo, G.

    2012-04-01

    The geoelectrical survey applied to hydraulic engineering is well known in the literature. However, despite a large number of successful applications, the use of geophysics is still often not considered, for several reasons: poor knowledge of its potential performance, difficulties in practical implementation, and cost limitations. In this work, an integrated study of non-invasive (geoelectrical) and direct surveys is described, aimed at identifying a subsoil foundation on which a watertight concrete structure could be set up to protect the purifier of Senise, a small town in the Basilicata region (Southern Italy). The purifier, which serves several villages, is located in a particularly dangerous hydrogeological position, as it is very close to the Sinni river, which has been dammed for many years by the Monte Cotugno dam. During the rainiest periods, the river could flood the purifier, causing the drainage of waste waters into the Monte Cotugno artificial lake. The purifier is located in Pliocene-Calabrian clay and clay-marly formations covered by a layer, about 10 m thick, of alluvial gravelly-sandy materials carried by the Sinni river. Electrical resistivity tomography acquired with the Wenner-Schlumberger array proved meaningful for identifying the depth of the impermeable clays with high accuracy. In particular, the geoelectrical acquisition, oriented along the long side of the purifier, was carried out using a multielectrode system with 48 electrodes spaced 2 m apart, leading to an achievable investigation depth of about 15 m. The subsequent direct surveys confirmed this depth, so that it was possible to position the concrete foundation structure precisely to protect the purifier. It is worth noting that this methodological approach allowed remarkable economic savings, as it made it possible to correct the wrong information regarding the depth of the impermeable clays previously

  5. Analysis and optimization of a solar thermal power generation and desalination system using a novel approach

    NASA Astrophysics Data System (ADS)

    Torres, Leovigildo

    Using a novel approach for a Photovoltaic-Thermal (PV-T) panel system, analytical and optimization analyses were performed for electricity generation as well as desalinated water production. The PV-T panel was designed with a channel beneath it in which seawater is housed at a constant pressure of 2.89 psia and an ambient temperature of 520°R. The surface of the PV panel was modeled as a high-absorption black chrome surface. The irradiation flux on the surface and the heat addition to the saltwater were calculated hourly between 9:00 am and 6:00 pm. At steady state conditions, the saturation temperature at the PV tank-channel outlet was limited to 600°R, and the evaporation rate was calculated to be 2.53 lbm/hr-ft2. The desorbed air then passed through a turbine, generating electrical power at 0.84 Btu/hr and condensing into desalinated water at the outlet. Optimization was performed for maximum capacity yield based on an available temperature distribution of 600°R to 1050°R at the PV tank-channel outlet. This gave an energy generation range for the turbine of 0.84 Btu/hr to 3.84 Btu/hr, while the desalinated water production ranged from 2.53 lbm/hr-ft2 to 10.65 lbm/hr-ft2. System efficiency was found to be between 7.5% and 24.3%, and water production efficiency between 40% and 43%.

  6. A graph-based ant colony optimization approach for process planning.

    PubMed

    Wang, JinFeng; Fan, XiaoLiang; Wan, Shuting

    2014-01-01

    The complex process planning problem is modeled as a constrained combinatorial optimization problem in this paper. An ant colony optimization (ACO) approach has been developed to solve the process planning problem by simultaneously considering activities such as sequencing operations, selecting manufacturing resources, and determining setup plans, in order to achieve the optimal process plan. A weighted directed graph is constructed to describe the operations, the precedence constraints between operations, and the possible paths between operation nodes. A representation of the process plan is described based on the weighted directed graph. The ant colony traverses the necessary nodes on the graph to achieve the optimal solution, with the objective of minimizing total production costs (TPC). Two case studies have been carried out to study the influence of various ACO parameters on system performance. Extensive comparative experiments have been conducted to demonstrate the feasibility and efficiency of the proposed approach. PMID:24995355
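
    A compact sketch of the ACO mechanics on a precedence-constrained sequencing instance (Python; the four-operation instance, the cost function and the pheromone constants are illustrative):

        import random

        ops = ["A", "B", "C", "D"]
        prec = {("A", "C"), ("B", "D")}              # A before C, B before D
        cost = lambda s: sum(abs(ord(u) - ord(v)) for u, v in zip(s, s[1:]))

        tau = {(i, j): 1.0 for i in ops for j in ops if i != j}   # pheromone
        best, best_c = None, float("inf")
        for _ in range(100):                         # colony iterations
            for _ in range(10):                      # ants per iteration
                seq = []
                while len(seq) < len(ops):
                    feas = [o for o in ops if o not in seq and
                            all(p in seq for p, q in prec if q == o)]
                    w = [tau[(seq[-1], o)] if seq else 1.0 for o in feas]
                    seq.append(random.choices(feas, weights=w)[0])
                c = cost(seq)
                if c < best_c:
                    best, best_c = seq, c
                for edge in zip(seq, seq[1:]):       # deposit on visited edges
                    tau[edge] += 1.0 / c
            tau = {e: 0.9 * t for e, t in tau.items()}   # evaporation
        print(best, best_c)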

  7. Medium optimization of protease production by Brevibacterium linens DSM 20158, using statistical approach

    PubMed Central

    Shabbiri, Khadija; Adnan, Ahmad; Jamil, Sania; Ahmad, Waqar; Noor, Bushra; Rafique, H.M.

    2012-01-01

    Various cultivation parameters were optimized for the production of extracellular protease by Brevibacterium linens DSM 20158 grown under solid state fermentation conditions using a statistical approach. The cultivation variables were screened by the Plackett–Burman design, and four significant variables (soybean meal, wheat bran, (NH4)2SO4 and inoculum size) were further optimized via central composite design (CCD) using a response surface methodology approach. Using the optimal factors (soybean meal 12.0 g, wheat bran 8.50 g, (NH4)2SO4 0.45 g and inoculum size 3.50%), the rate of protease production was found to be twofold higher in the optimized medium compared with the unoptimized reference medium. PMID:24031928

  8. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of the illumination intensity is proposed in order to improve image quality, compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop the iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivities and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, and hence of the reliability of machine vision-based quality inspection. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant quality control problem in the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.

  9. Monte Carlo verification of IMRT dose distributions from a commercial treatment planning optimization system

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Pawlicki, T.; Jiang, S. B.; Li, J. S.; Deng, J.; Mok, E.; Kapur, A.; Xing, L.; Ma, L.; Boyer, A. L.

    2000-09-01

    The purpose of this work was to use Monte Carlo simulations to verify the accuracy of the dose distributions from a commercial treatment planning optimization system (Corvus, Nomos Corp., Sewickley, PA) for intensity-modulated radiotherapy (IMRT). A Monte Carlo treatment planning system has been implemented clinically to improve and verify the accuracy of radiotherapy dose calculations. Further modifications to the system were made to compute the dose in a patient for multiple fixed-gantry IMRT fields. The dose distributions in the experimental phantoms and in the patients were calculated and used to verify the optimized treatment plans generated by the Corvus system. The Monte Carlo calculated IMRT dose distributions agreed with the measurements to within 2% of the maximum dose for all the beam energies and field sizes for both the homogeneous and heterogeneous phantoms. The dose distributions predicted by the Corvus system, which employs a finite-size pencil beam (FSPB) algorithm, agreed with the Monte Carlo simulations and measurements to within 4% in a cylindrical water phantom with various hypothetical target shapes. Discrepancies of more than 5% (relative to the prescribed target dose) in the target region and over 20% in the critical structures were found in some IMRT patient calculations. The FSPB algorithm as implemented in the Corvus system is adequate for homogeneous phantoms (such as prostate) but may result in significant under- or over-estimation of the dose in some cases involving heterogeneities such as the air-tissue, lung-tissue and tissue-bone interfaces.

  10. A Synergistic Approach of Desirability Functions and Metaheuristic Strategy to Solve Multiple Response Optimization Problems

    NASA Astrophysics Data System (ADS)

    Bera, Sasadhar; Mukherjee, Indrajit

    2010-10-01

    Ensuring the quality of a product is rarely based on observations of a single quality characteristic; it is generally based on observations of a family of properties, the so-called 'multiple responses'. These multiple responses are often interacting and are measured in a variety of units. Due to the presence of interactions, the overall optimal conditions for all the responses rarely coincide with the isolated optimal conditions of the individual responses. Conventional optimization techniques, such as design of experiments and linear and nonlinear programming, are generally recommended for single-response optimization problems. Applying any of these techniques to a multiple response optimization problem may lead to unnecessary simplification of the real problem, with several restrictive model assumptions. In addition, engineering judgement or subjective decision making may play an important role in applying some of these conventional techniques. In this context, a synergistic approach combining desirability functions and a metaheuristic technique is a viable alternative for handling multiple response optimization problems. Metaheuristics such as simulated annealing (SA) and particle swarm optimization (PSO) have shown immense success in solving various discrete and continuous single-response optimization problems. Motivated by those successful applications, this chapter assesses the potential of a Nelder-Mead simplex-based SA (SIMSA) and PSO to resolve varied multiple response optimization problems. The computational results clearly indicate the superiority of PSO over SIMSA for the selected problems.
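
    A minimal sketch of the desirability-function composition, maximized here with Nelder-Mead as a stand-in for SIMSA/PSO (Python; the response models and desirability bounds are illustrative assumptions):

        import numpy as np
        from scipy.optimize import minimize

        # Assumed response models for two interacting quality characteristics.
        y1 = lambda x: 80 - (x[0] - 1) ** 2 - 2 * (x[1] - 2) ** 2   # larger-is-better
        y2 = lambda x: 5 + x[0] * x[1]                              # target-is-best

        d1 = lambda y: float(np.clip((y - 60.0) / (80.0 - 60.0), 0, 1))
        d2 = lambda y: float(np.clip(1.0 - abs(y - 7.0) / 3.0, 0, 1))

        # Overall desirability = geometric mean of the individual desirabilities.
        neg_overall = lambda x: -np.sqrt(d1(y1(x)) * d2(y2(x)))
        res = minimize(neg_overall, x0=[0.0, 0.0], method="Nelder-Mead")
        print(res.x.round(3), "overall desirability:", round(-res.fun, 3))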

  11. Optimal Flight for Ground Noise Reduction in Helicopter’s Landing Approach

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi; Ishii, Hirokazu; Uchida, Junichi; Gomi, Hiromi; Matayoshi, Naoki; Okuno, Yoshinori

    This study aims to obtain optimal flight paths for a helicopter that reduce ground noise during its landing approach using an optimization technique, and to conduct flight tests to confirm the effectiveness of the optimal solutions. Past experiments by JAXA (Japan Aerospace Exploration Agency) show that helicopter noise varies significantly according to flight conditions, depending especially on the flight path angle. We therefore build a simple noise model of the helicopter, in which the level of the noise generated from a point sound source is a function only of the flight path angle. Using the equations of motion for flight in a vertical plane, we define optimal control problems that minimize noise levels measured at points on the ground surface, and obtain optimal controls for specified initial altitudes, flight constraints, and wind conditions. The obtained optimal flights avoid the flight path angles that generate large noise and decrease the flight time, in contrast to the conventional flight. Finally, we verify the validity of the optimal flight patterns by flight experiments. The actual flights following the optimal ones also result in noise reduction, which shows the effectiveness of the optimization.

  12. The distribution of all French communes: A composite parametric approach

    NASA Astrophysics Data System (ADS)

    Calderín-Ojeda, Enrique

    2016-05-01

    The distribution of the size of all French settlements (communes) from 1962 to 2012 is examined by means of a three-parameter composite Lognormal-Pareto distribution. This model is based on a Lognormal density up to an unknown threshold value and a Pareto density thereafter. Recent findings have shown that the untruncated settlement size data is in excellent agreement with the Lognormal distribution in the lower and central parts of the empirical distribution, but it follows a power law in the upper tail. For that reason, this probabilistic family, that nests both models, seems appropriate to describe urban agglomeration in France. The outcomes of this paper reveal that for the early periods (1962-1975) the upper quartile of the commune size data adheres closely to a power law distribution, whereas for later periods (2006-2012) most of the city size dynamics is explained by a Lognormal model.
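
    One common way to write such a composite density, with the mixing weight fixed by continuity at the threshold, is sketched below (Python/SciPy; the parameter values are illustrative, and the paper's exact parameterization may differ):

        import numpy as np
        from scipy.stats import lognorm

        def composite_pdf(x, mu, sigma, theta, alpha):
            """Lognormal body (renormalized below theta) spliced to a Pareto
            tail, with the mixing weight r fixed by continuity at theta."""
            x = np.asarray(x, float)
            cdf_theta = lognorm.cdf(theta, s=sigma, scale=np.exp(mu))
            body = lognorm.pdf(x, s=sigma, scale=np.exp(mu)) / cdf_theta
            tail = alpha * theta ** alpha / x ** (alpha + 1)
            f_ln = lognorm.pdf(theta, s=sigma, scale=np.exp(mu)) / cdf_theta
            f_pa = alpha / theta
            r = f_pa / (f_ln + f_pa)          # weight on the lognormal piece
            return np.where(x <= theta, r * body, (1 - r) * tail)

        x = np.logspace(0, 5, 6)              # settlement sizes from 1 to 100,000
        print(composite_pdf(x, mu=6.0, sigma=1.2, theta=5000.0, alpha=2.0))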

  13. Minimization of Blast Furnace Fuel Rate by Optimizing Burden and Gas Distribution

    SciTech Connect

    Dr. Chenn Zhou

    2012-08-15

    The goal of this research is to improve the competitive edge of steel mills by using advanced CFD technology to optimize the gas and burden distributions inside a blast furnace and achieve the best gas utilization. A state-of-the-art 3-D CFD model has been developed for simulating the gas distribution inside a blast furnace for given burden conditions, burden distribution and blast parameters. The comprehensive 3-D CFD model has been validated by plant measurement data from an actual blast furnace, and validation of the sub-models has also been achieved. A user-friendly software package named Blast Furnace Shaft Simulator (BFSS) has been developed to simulate the blast furnace shaft process. The research provides significant benefits to the steel industry: high productivity, low energy consumption, and an improved environment.

  14. A 'cheap' optimal control approach to estimate muscle forces in musculoskeletal systems.

    PubMed

    Menegaldo, Luciano Luporini; de Toledo Fleury, Agenor; Weber, Hans Ingo

    2006-01-01

    This paper presents a new method to estimate muscle forces in musculoskeletal systems, based on the inverse dynamics of a multi-body system combined with optimal control. The redundant actuator problem is solved by minimizing a time-integral cost function augmented with a torque-tracking error function, and muscle dynamics is taken into account through differential constraints. The method is compared with a previously implemented human posture control problem, solved using a forward dynamics optimal control approach, and with classical static optimization using two different objective functions. The new method provides muscle force patterns very similar to those of the forward dynamics solution, but the computational cost is much smaller and the numerical robustness is increased. The results suggest that this method is more accurate than static optimization for muscle force prediction, and can be used as a numerically 'cheap' alternative to forward dynamics optimal control in some applications. PMID:16033695
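
    The classical static-optimization baseline referred to here can be sketched in a few lines (Python/SciPy; the moment arms, muscle strengths and squared-activation objective are illustrative assumptions):

        import numpy as np
        from scipy.optimize import minimize

        r = np.array([0.05, 0.03, 0.02])           # muscle moment arms, m
        F_max = np.array([1000.0, 600.0, 400.0])   # max isometric forces, N
        tau = 40.0                                 # joint torque to reproduce, N*m

        # Distribute the torque among redundant muscles by minimizing the
        # sum of squared activations, subject to the torque equality.
        res = minimize(lambda F: np.sum((F / F_max) ** 2),
                       x0=F_max / 2,
                       bounds=[(0.0, f) for f in F_max],
                       constraints=[{"type": "eq", "fun": lambda F: r @ F - tau}],
                       method="SLSQP")
        print(res.x.round(1), "N; torque:", round(float(r @ res.x), 2), "N*m")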

  15. An Efficient Approach to Obtain Optimal Load Factors for Structural Design

    PubMed Central

    Bojórquez, Juan; Ruiz, Sonia E

    2014-01-01

    An efficient optimization approach is described for calibrating the load factors used in structural design. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for the design of structures in accordance with the new version of the Mexico City Building Code (RCDF). For this aim, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values, with the Monte Carlo simulation technique. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232
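
    A minimal Monte Carlo calibration sketch in the spirit of this procedure (Python; the limit state, variable statistics, factor grid and target index are illustrative assumptions):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        def beta_for(gamma_d, gamma_l, n=200_000):
            # reliability index for a resistance designed from factored loads
            D = rng.normal(1.0, 0.10, n)                 # dead load (normalized)
            L = rng.lognormal(np.log(0.5), 0.25, n)      # live load
            R = 1.1 * (gamma_d * 1.0 + gamma_l * 0.5) * rng.normal(1.0, 0.12, n)
            pf = np.mean(R - D - L < 0.0)                # failure probability
            return norm.ppf(1.0 - pf) if 0 < pf < 1 else np.inf

        target = 3.5
        grid = [(gd, gl) for gd in (1.2, 1.3, 1.4) for gl in (1.4, 1.5, 1.6, 1.7)]
        best = min(grid, key=lambda g: abs(beta_for(*g) - target))
        print("load factors (dead, live) closest to target:", best)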

  16. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operators' and planners' experience, and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, including the correction of given violations. The method was demonstrated on the large-scale power system discussed in the paper.

  17. Time-optimal three-axis reorientation of asymmetric rigid spacecraft via homotopic approach

    NASA Astrophysics Data System (ADS)

    Li, Jing

    2016-05-01

    This paper investigates the time-optimal rest-to-rest three-axis reorientation of asymmetric rigid spacecraft. First, time-optimal solutions for inertially symmetric rigid spacecraft (ISRS) three-axis reorientation are briefly reviewed. By utilizing the initial costates and reorientation time of the ISRS time-optimal solution, a homotopic approach is introduced to solve the asymmetric rigid spacecraft time-optimal three-axis reorientation problem. Its main merit is that the homotopic approach can start automatically and reliably, which facilitates the real-time generation of open-loop time-optimal solutions for attitude slewing maneuvers. Finally, numerical examples are given to illustrate the performance of the proposed method. For principal axis reorientation, numerical results and analytical derivations show that multiple time-optimal solutions exist, and the relations between them are given. For the generic reorientation problem, though a mathematically rigorous proof is not available to date, numerical results also indicate the existence of multiple time-optimal solutions.

  18. A biarc-based shape optimization approach to reduce stress concentration effects

    NASA Astrophysics Data System (ADS)

    Meng, Liang; Zhang, Wei-Hong; Zhu, Ji-Hong; Xia, Liang

    2014-06-01

    To avoid stress concentration, the shape boundary must be properly designed via shape optimization. Traditional shape optimization approaches eliminate the stress concentration effect by using free-form curves to represent the design boundaries, without taking machinability into consideration. In most numerical control (NC) machines, linear and circular interpolations are used to generate the tool path. Non-circular curves, such as non-uniform rational B-splines (NURBS), need other, more advanced interpolation functions to formulate the tool path. Forming a circular tool path by approximating the optimal free-curve boundary with arcs or biarcs is another option. However, both approaches come at the cost of a sharp expansion of the program code and, consequently, long machining times. Motivated by the success of recent research on biarcs, a reliable shape optimization approach is proposed in this work to directly optimize the shape boundaries with biarcs, while the efficiency and precision of the traditional method are preserved. Finally, the approach is validated by several illustrative examples.

  1. An approach to distributed execution of Ada programs

    NASA Technical Reports Server (NTRS)

    Volz, R. A.; Krishnan, P.; Theriault, R.

    1987-01-01

    Intelligent control of the Space Station will require the coordinated execution of computer programs across a substantial number of computing elements. It will be important to develop large subsets of these programs in the form of a single program which executes in a distributed fashion across a number of processors. A translation strategy for distributed execution of Ada programs in which library packages and subprograms may be distributed is described. A preliminary version of the translator is operational. Simple data objects (no records or arrays as yet), subprograms, and static tasks may be referenced remotely.

  2. Improving flash flood forecasting with distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yangbo

    2016-04-01

    In China, a flash flood is usually understood as a flood occurring in small and medium-sized watersheds with a drainage area below 200 km2, mainly induced by heavy rain and occurring where hydrological observations are lacking. Flash floods are widely observed in China and are the floods causing the most casualties nowadays. Due to hydrological data scarcity, lumped hydrological models are difficult to employ for flash flood forecasting, as they require large amounts of observed hydrological data to calibrate model parameters. Physically based distributed hydrological models discretize the terrain of the whole watershed into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and derive model parameters from the terrain properties, and thus have the potential to be used in flash flood forecasting and to improve flash flood prediction capability. In this study, the Liuxihe Model, a physically based distributed hydrological model proposed mainly for watershed flood forecasting, is employed to simulate flash floods in the Ganzhou area in southeast China, and models have been set up in five watersheds. Model parameters have been derived from terrain properties including the DEM, soil type and land use type, but the results show that the flood simulation uncertainty is high, which may be caused by parameter uncertainty; some form of uncertainty control is needed before the model can be used in real-time flash flood forecasting. Considering that many small and medium-sized watersheds in China have now set up hydrological observation networks, so that a few flood events can be collected, these data may be used for model parameter optimization. For this reason, an automatic model parameter optimization algorithm using Particle Swarm Optimization (PSO) was developed to optimize the model parameters, and it has been found that model parameters optimized with even a single observed flood event could largely reduce the flood simulation uncertainty.

  3. A new optomechanical structural optimization approach: coupling FEA and raytracing sensitivity matrices

    NASA Astrophysics Data System (ADS)

    Riva, M.

    2012-09-01

    The design of astronomical instruments is growing in size and complexity with ELT-class telescopes. The availability of new structural materials, such as composites, calls for more robust and reliable numerical design tools. This paper presents a new opto-mechanical optimization approach developed from a previously developed integrated design framework. The idea is to reduce the number of iterations in a multi-variable structural optimization by taking advantage of the embedded sensitivity routines that are available both in FEA software and in raytracing software. This approach reduces the number of iterations required, mainly in cases with a large number of structural design variables.

  4. Electric power scheduling - A distributed problem-solving approach

    NASA Technical Reports Server (NTRS)

    Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.

    1990-01-01

    Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity. The value-driven free-market economic model is such a tool.

  5. W± bosons production in the quantum statistical parton distributions approach

    NASA Astrophysics Data System (ADS)

    Bourrely, Claude; Buccella, Franco; Soffer, Jacques

    2013-10-01

    We consider W± gauge boson production in connection with recent results from BNL-RHIC and FNAL-Tevatron and with interesting predictions from the statistical parton distributions. These concern relevant aspects of the structure of the nucleon sea and the high-x region of the valence quark distributions. We also give predictions in view of future proton-neutron collision experiments at BNL-RHIC.

  6. Study and optimization of gas flow and temperature distribution in a Czochralski configuration

    NASA Astrophysics Data System (ADS)

    Fang, H. S.; Jin, Z. L.; Huang, X. M.

    2012-12-01

    The Czochralski (Cz) method has virtually dominated the production of bulk single crystals, offering high productivity. Since Cz-grown crystals are cylindrical, an axisymmetric hot-zone arrangement is required for ideally high-quality crystal growth. However, due to three-dimensional effects, the flow pattern and temperature field are inevitably non-axisymmetric. The grown crystal suffers from many defects, among which macro-cracks and micro-dislocations are mainly related to inhomogeneous temperature distribution during the growth and cooling processes. The task of this paper is to investigate the gas flow and temperature distribution in a Cz configuration, and to optimize the furnace design to reduce the three-dimensional effects. The general design is found to be unfavorable for obtaining the desired temperature conditions. Several different furnace designs, modified at the top part of the side insulation, are proposed for a comparative analysis. The optimized design is chosen for further study, and the results demonstrate its effectiveness in suppressing three-dimensional effects, achieving a relatively axisymmetric flow pattern and temperature distribution for the possible minimization of thermal-stress-related crystal defects.

  7. Parallel multi-join query optimization algorithm for distributed sensor network in the internet of things

    NASA Astrophysics Data System (ADS)

    Zheng, Yan

    2015-03-01

    The Internet of Things (IoT), which focuses on providing users with information exchange and intelligent control, has attracted considerable attention from researchers all over the world since the beginning of this century. The IoT consists of a large number of sensor nodes and data processing units, and its most important characteristics are energy constraints, efficient communication and high redundancy. As the number of sensor nodes increases, communication efficiency and available communication bandwidth become bottlenecks. Much research is based on cases in which the number of joins is small; however, this is not adequate for the growing number of multi-join queries across the whole Internet of Things. To improve the communication efficiency between parallel units in the distributed sensor network, this paper proposes a parallel query optimization algorithm based on a distribution-attribute cost graph. The storage information relations and the network communication cost are considered in this algorithm, and an optimized information exchange rule is established. The experimental results show that the algorithm performs well and effectively uses the resources of each node in the distributed sensor network, so that the execution efficiency of multi-join queries across different nodes can be improved.

  8. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with minimal cost in terms of hardware and software.

  9. Geometry Design Optimization of Functionally Graded Scaffolds for Bone Tissue Engineering: A Mechanobiological Approach

    PubMed Central

    Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe

    2016-01-01

    Functionally Graded Scaffolds (FGSs) are porous biomaterials in which porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines a parametric finite element model of a FGS, a computational mechano-regulation model and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that maximizes the amount of bone formation. We tested different porosity distribution laws, loading conditions and scaffold Young's modulus values. For each combination of these variables, the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, was determined that allows the highest amounts of bone to be generated. The results show that the loading conditions affect the optimal porosity distribution significantly. For pure compression loading, the pore dimensions were found to be almost constant throughout the entire scaffold, and using a FGS allows the formation of amounts of bone only slightly larger than those obtainable with a homogeneous porosity scaffold. For pure shear loading, instead, FGSs significantly increase bone formation compared with homogeneous porosity scaffolds. Although experimental data are still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing geometry

  10. A method to optimize sampling locations for measuring indoor air distributions

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan

    2015-02-01

    Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected and interpolated to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, the labor costs and the accuracy of the field interpolation. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method yields a 32.6% smaller interpolation error than the grid sampling method. We derived the function relating the interpolation error to the sampling size (the number of sampling points). According to this function, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
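
    The comparison can be sketched on a synthetic field (Python/SciPy; the field, the sample budget and the probabilistic stand-in for the gradient-based selection are illustrative, and linear interpolation replaces Kriging for brevity):

        import numpy as np
        from scipy.interpolate import griddata

        # Synthetic 2-D 'air temperature' field on a 60 x 60 grid.
        x, y = np.meshgrid(np.linspace(0, 1, 60), np.linspace(0, 1, 60))
        field = np.exp(-30 * ((x - 0.3) ** 2 + (y - 0.6) ** 2)) + 0.3 * x
        pts = np.c_[x.ravel(), y.ravel()]

        def interp_error(idx):
            z = griddata(pts[idx], field.ravel()[idx], pts,
                         method="linear", fill_value=field.mean())
            return np.mean(np.abs(z - field.ravel()))

        # Grid method: a uniform 9 x 9 subgrid of sensor locations.
        ii = np.linspace(0, 59, 9).astype(int)
        grid_idx = (ii[:, None] * 60 + ii[None, :]).ravel()

        # Gradient-based stand-in: sample preferentially where the local
        # gradient magnitude is large.
        gy, gx = np.gradient(field)
        w = np.hypot(gx, gy).ravel()
        rng = np.random.default_rng(0)
        grad_idx = rng.choice(pts.shape[0], grid_idx.size, replace=False,
                              p=w / w.sum())

        print("grid error:", interp_error(grid_idx))
        print("gradient-based error:", interp_error(grad_idx))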

  11. Optimal reconstruction of historical water supply to a distribution system: A. Methodology.

    PubMed

    Aral, M M; Guan, J; Maslia, M L; Sautner, J B; Gillig, R E; Reyes, J J; Williams, R C

    2004-09-01

    The New Jersey Department of Health and Senior Services (NJDHSS), with support from the Agency for Toxic Substances and Disease Registry (ATSDR), conducted an epidemiological study of childhood leukaemia and nervous system cancers that occurred in the period 1979 through 1996 in Dover Township, Ocean County, New Jersey. The epidemiological study explored a wide variety of possible risk factors, including environmental exposures. ATSDR and NJDHSS determined that completed human exposure pathways to groundwater contaminants occurred in the past through private and community water supplies (i.e. the water distribution system serving the area). To investigate this exposure, a model of the water distribution system was developed and calibrated through an extensive field investigation. The components of this water distribution system, such as the number of pipes, number of tanks, and number of supply wells in the network, changed significantly over a 35-year period (1962-1996), the time frame established for the epidemiological study. Data on the historical management of this system were limited. Thus, it was necessary to investigate alternative ways to reconstruct the operation of the system and to test the sensitivity of the system to various alternative operations. Manual reconstruction of the historical water supply to the system for this sensitivity analysis was time-consuming and labour intensive, given the complexity of the system and the time constraints imposed on the study. To address these issues, the problem was formulated as an optimization problem, in which it was assumed that the water distribution system was operated in an optimum manner at all times to satisfy the constraints on the system. The solution to the optimization problem provided the historical water supply strategy in a consistent manner for each month of the study period. The non-uniqueness of the selected historical water supply strategy was addressed by the formulation of a second

  12. A Direct Approach for Minimum Fuel Maneuvers of Distributed Spacecraft in Multiple Flight Regimes

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P; Cooley, D. S.; Guzman, Jose J.

    2004-01-01

    In this work we present a method to solve the impulsive minimum-fuel maneuver problem for a distributed set of spacecraft. We develop the method assuming a fully nonlinear dynamics model and parameterize the problem so that the method is applicable to any flight regime. Furthermore, the approach is not limited by the inter-spacecraft separation distances and is applicable to small formations as well as constellations. We assume that the desired relative motion is driven by mission requirements and has been determined a priori. The goal of this work is to develop a technique to achieve the desired relative motion in a minimum-fuel manner. To permit applicability to multiple flight regimes, we have chosen to parameterize the cost function in terms of the maneuver times, expressed in a useful time system, and the maneuver locations, expressed in their Cartesian vector representations. We also include the initial reference orbit as an independent variable, to solve for the optimal injection orbit that minimizes and equalizes the fuel expenditure of distributed sets of spacecraft with large inter-spacecraft separations. In this work we derive the derivatives of the cost and constraints with respect to all of the independent variables.

  13. Assessing Impact of Large-Scale Distributed Residential HVAC Control Optimization on Electricity Grid Operation and Renewable Energy Integration

    NASA Astrophysics Data System (ADS)

    Corbin, Charles D.

    Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.
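
    A minimal sketch of the kind of linear MPC subproblem involved, for one house over one day (Python/SciPy; the 1R1C thermal model, comfort band and peak-demand objective are illustrative assumptions, not the author's formulation):

        import numpy as np
        from scipy.optimize import linprog

        N, a, b = 24, 0.9, 0.4            # horizon (h), thermal decay, degC per kW step
        T0, lo, hi = 20.0, 19.0, 23.0     # initial temperature and comfort band, degC
        T_out = 5.0 + 3.0 * np.sin(np.linspace(0, 2 * np.pi, N))

        # T[k+1] = a*T[k] + (1-a)*T_out[k] + b*P[k]  =>  T = h + G @ P
        G = np.array([[b * a ** (i - j - 1) if j < i else 0.0 for j in range(N)]
                      for i in range(1, N + 1)])
        h = np.array([a ** (k + 1) * T0
                      + sum(a ** (k - j) * (1 - a) * T_out[j] for j in range(k + 1))
                      for k in range(N)])

        # Decision vector [P_0..P_{N-1}, M]; minimize peak M plus a tiny energy term.
        c = np.r_[1e-3 * np.ones(N), 1.0]
        A_ub = np.block([[-G, np.zeros((N, 1))],          # T >= lo
                         [G, np.zeros((N, 1))],           # T <= hi
                         [np.eye(N), -np.ones((N, 1))]])  # P_k <= M
        b_ub = np.r_[h - lo, hi - h, np.zeros(N)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, 5.0)] * N + [(0, None)])
        print("peak kW:", round(res.x[-1], 2))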

  14. Supervisor Localization: A Top-Down Approach to Distributed Control of Discrete-Event Systems

    NASA Astrophysics Data System (ADS)

    Cai, K.; Wonham, W. M.

    2009-03-01

    A purely distributed control paradigm is proposed for discrete-event systems (DES). In contrast to control by one or more external supervisors, distributed control aims to design built-in strategies for individual agents. First, a distributed optimal nonblocking control problem is formulated. To solve it, a top-down localization procedure is developed which systematically decomposes an external supervisor into local controllers while preserving optimality and nonblockingness. An efficient localization algorithm is provided to carry out the computation, and an automated guided vehicle (AGV) example is presented for illustration. Finally, the 'easiest' and 'hardest' boundary cases of localization are discussed.

  15. A fractal approach to dynamic inference and distribution analysis

    PubMed Central

    van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.

    2013-01-01

    Event distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to it. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics, and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic is presented by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples; these mixtures are described by a so-called cocktail model of response times derived from human cognitive performance. The overarching goals of this article are twofold: first, to offer readers an introduction to this theoretical perspective, and second, to provide an overview of the related statistical methods. PMID:23372552
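
    A minimal sketch of the cocktail idea, drawing pseudo response times from a lognormal body mixed with an inverse power-law tail; the mixing weight and distribution parameters are illustrative assumptions, not fitted values:

      # Mixture sampler: lognormal with probability `weight`, Pareto otherwise.
      import numpy as np

      rng = np.random.default_rng(42)

      def cocktail_sample(n, weight=0.7, mu=-0.5, sigma=0.4, alpha=2.5, x_min=1.0):
          use_lognormal = rng.random(n) < weight
          pareto = x_min * rng.random(n) ** (-1.0 / (alpha - 1.0))  # inverse-CDF Pareto draw
          return np.where(use_lognormal, rng.lognormal(mu, sigma, n), pareto)

      rt = cocktail_sample(10_000)
      print(rt.mean(), np.quantile(rt, [0.5, 0.99]))  # heavy tail: 99th percentile >> median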

  16. A Complex Network Approach to Distributional Semantic Models

    PubMed Central

    Utsumi, Akira

    2015-01-01

    A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties of the word association network. Nevertheless, there have been very few attempts to apply network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely the small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows a truncated power law, which is also observed in association networks. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of the network behaviors generated by distributional semantic models. PMID:26295940
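
    A sketch of the proposed growth mechanism, mixing preferential and uniform random attachment as each new node joins; the sizes and the mixing probability are illustrative assumptions (requires the networkx package):

      import random
      import networkx as nx

      def grow(n_nodes=2000, m=5, p_pref=0.7, seed=1):
          rng = random.Random(seed)
          g = nx.complete_graph(m + 1)
          pool = [v for e in g.edges for v in e]        # degree-proportional pool
          for new in range(m + 1, n_nodes):
              targets = set()
              while len(targets) < m:
                  if rng.random() < p_pref:
                      targets.add(rng.choice(pool))     # preferential attachment
                  else:
                      targets.add(rng.randrange(new))   # uniform random attachment
              for t in targets:
                  g.add_edge(new, t)
                  pool += [new, t]                      # high-degree nodes recur in the pool
          return g

      g = grow()
      degrees = sorted((d for _, d in g.degree()), reverse=True)
      print(degrees[:10], nx.average_clustering(g))     # hub degrees and clustering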

  17. Electric power scheduling: A distributed problem-solving approach

    NASA Technical Reports Server (NTRS)

    Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.

    1990-01-01

    Space Station Freedom's power system, along with the spacecraft's other subsystems, must carefully conserve its resources while striving to maximize overall Station productivity. Because of Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool that preserves this distributed structure, allows each subsystem the latitude to satisfy its own constraints, and preserves individual value systems while maintaining Station-wide integrity. The value-driven free-market economic model is such a tool.
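
    A hedged sketch of the value-driven market mechanism: each subsystem maps a posted power price to its own demand, and a coordinator raises the price until total demand fits the available supply. The subsystem names, values, and demand curves are toy assumptions:

      def demand(value, price, max_kw):
          """A subsystem buys power only while its marginal value exceeds the posted price."""
          return max_kw if value > price else 0.0

      subsystems = [("life_support", 10.0, 3.0), ("comms", 6.0, 1.0),
                    ("payload_a", 4.0, 2.0), ("payload_b", 2.5, 2.0)]
      supply_kw = 5.0

      price = 0.0
      while sum(demand(v, price, kw) for _, v, kw in subsystems) > supply_kw:
          price += 0.1                       # raise the price until demand clears
      print(round(price, 1), [(n, demand(v, price, kw)) for n, v, kw in subsystems])

    Low-value loads drop out first as the price rises, so each subsystem keeps its own value system while the Station-wide power constraint is respected.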

  18. Efficient use of hybrid Genetic Algorithms in the gain optimization of distributed Raman amplifiers.

    PubMed

    Neto, B; Teixeira, A L J; Wada, N; André, P S

    2007-12-24

    In this paper, we propose an efficient and accurate method that combines a Genetic Algorithm (GA) with the Nelder-Mead method to optimize the gain of distributed Raman amplifiers. Using the two methods together combines the advantages of both: the convergence of the GA and the high accuracy of the Nelder-Mead method. To enhance the convergence of the GA, several features were examined and correlated with fitting errors. It is also shown that, when the right moment to switch between the methods is chosen, the computation time can be reduced by a factor of two. PMID:19551045
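
    A compact sketch of the hybrid strategy, assuming the Rastrigin test function as a stand-in for the Raman gain-flatness objective: a coarse genetic search narrows in on a good basin, then Nelder-Mead polishes the best individual:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)

      def rastrigin(x):
          return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

      dim, pop_size, n_gen = 4, 40, 60
      pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
      for _ in range(n_gen):
          fit = np.array([rastrigin(x) for x in pop])
          i, j = rng.integers(0, pop_size, (2, pop_size))
          winners = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])  # tournament selection
          mates = winners[rng.permutation(pop_size)]
          alpha = rng.random((pop_size, 1))
          children = alpha * winners + (1 - alpha) * mates                # blend crossover
          children += rng.normal(0.0, 0.3, (pop_size, dim))               # Gaussian mutation
          children[0] = pop[fit.argmin()]                                 # elitism
          pop = children

      best = pop[np.argmin([rastrigin(x) for x in pop])]
      polished = minimize(rastrigin, best, method="Nelder-Mead")          # local refinement
      print(rastrigin(best), polished.fun)

    Switching earlier trades accuracy for speed; the abstract reports that a well-chosen switch point roughly halves the computation time.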

  19. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE

    PubMed Central

    Xie, Xianchao; Kou, S. C.; Brown, Lawrence

    2015-01-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with a quadratic variance function. We first introduce a class of semiparametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with that of existing shrinkage estimators. We also apply the method to real data and obtain encouraging results. PMID:27041778
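
    A minimal sketch of unbiased-risk-tuned shrinkage for heteroscedastic normal means, one member of the quadratic-variance family; treating the shrinkage target as fixed and tuning a single hyperparameter on a grid are simplifying assumptions of this sketch, not the paper's full estimator class:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 200
      theta = rng.normal(0.0, 1.0, n)            # true means
      V = rng.uniform(0.1, 2.0, n)               # known, unequal variances
      X = rng.normal(theta, np.sqrt(V))          # one observation per mean

      mu = X.mean()                              # shrinkage target (held fixed here)

      def ure(lam):
          """Stein's unbiased risk estimate for theta_hat = mu + lam/(lam+V)*(X-mu)."""
          w = V / (lam + V)                      # amount of shrinkage toward mu
          return np.sum(V - 2 * V * w + w**2 * (X - mu) ** 2)

      grid = np.logspace(-3, 2, 200)
      lam = grid[np.argmin([ure(g) for g in grid])]
      theta_hat = mu + (lam / (lam + V)) * (X - mu)
      print(np.mean((X - theta) ** 2), np.mean((theta_hat - theta) ** 2))  # MSE: raw vs shrunk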

  20. Transverse momentum dependent distribution functions in a covariant parton model approach with quark orbital motion

    SciTech Connect

    Efremov, A. V.; Teryaev, O. V.; Schweitzer, P.; Zavada, P.

    2009-07-01

    Transverse momentum dependent parton distribution functions (TMDs) of the nucleon are studied in a covariant model that describes the intrinsic motion of partons in terms of a covariant momentum distribution. The consistency of the approach is demonstrated, and model relations among TMDs are studied. As a by-product, it is shown how the approach allows one to formulate the nonrelativistic limit.
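
    Schematically, and up to normalization conventions that vary in the literature, models of this type generate a TMD from a rotationally symmetric momentum distribution G through an on-shell light-cone constraint (the form below is a hedged sketch, not the authors' exact expression):

      f(x, \mathbf{p}_T) \;\propto\; \int \mathrm{d}^3 p \; G(p^0)\,
          \delta\!\left(x - \frac{p^0 + p^3}{M}\right)\,
          \delta^{2}\!\left(\mathbf{p}_T - \mathbf{p}_\perp\right),
      \qquad p^0 = \sqrt{\mathbf{p}^2 + m^2},

    where M is the nucleon mass, m the quark mass, and the 3-axis points along the hard probe; the nonrelativistic limit then corresponds to quark momenta small compared with m.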