Sample records for cost minimization problem

  1. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
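
A quadratic wiring cost makes the one-dimensional layout problem solvable in closed form: minimizing sum_ij W_ij (x_i - x_j)^2 subject to zero-mean, unit-norm positions is solved by the second eigenvector (Fiedler vector) of the graph Laplacian. The sketch below is our illustration of that standard result, not the paper's code; the toy connectivity matrix is an assumption.

```python
import numpy as np

def optimal_layout_1d(W):
    """1-D neuron positions minimizing sum_ij W_ij (x_i - x_j)^2
    subject to sum(x) = 0 and sum(x^2) = 1 (fixing translation/scale)."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian of the connectivity
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return vecs[:, 1]                   # Fiedler vector = exact minimizer

# Toy connectivity: a chain of 4 neurons
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
x = optimal_layout_1d(W)
# For a chain, the optimal positions are monotonically ordered (up to sign)
assert np.all(np.diff(x) > 0) or np.all(np.diff(x) < 0)
```

The constraints are what make the quadratic form non-degenerate: without them the zero layout would be trivially optimal.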

  2. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

One of the best-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover, we prove that it belongs to the class C^1.

  3. L∞ Variational Problems with Running Costs and Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aronsson, G., E-mail: gunnar.aronsson@liu.se; Barron, E. N., E-mail: enbarron@math.luc.edu

    2012-02-15

Various approaches are used to derive the Aronsson-Euler equations for L∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  4. Minimizing communication cost among distributed controllers in software defined networks

    NASA Astrophysics Data System (ADS)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by making them programmable. The fundamental idea behind this architecture is to reduce network complexity by decoupling the control plane and data plane of the network devices and centralizing the control plane. Recently, controllers have been distributed to eliminate the single point of failure and to increase scalability and flexibility during workload distribution. Although distributed controllers are flexible and scalable enough to accommodate a larger number of network switches, the intercommunication cost between distributed controllers remains a challenging issue in the Software Defined Network environment. This paper aims to fill that gap by proposing a new mechanism that minimizes intercommunication cost via graph partitioning, an NP-hard problem. The proposed methodology swaps network elements between controller domains, using a computed communication gain to minimize communication cost. The swapping of elements reduces both inter- and intra-domain communication costs. We validate our work with the OMNeT++ simulation environment tool. Simulation results show that the proposed mechanism reduces the inter-domain communication cost among controllers compared to traditional distributed controllers.
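
The gain-driven swap idea can be sketched in a Kernighan-Lin-like greedy loop: move a switch to another controller domain whenever the inter-domain traffic it removes exceeds the intra-domain traffic it breaks. This is our own minimal sketch under invented names and toy traffic, not the paper's algorithm (which also considers partition balance).

```python
def swap_gain(traffic, domain, v, target):
    """Cost saved by moving switch v into `target`: inter-domain traffic
    that becomes internal, minus internal traffic that becomes external."""
    gain = 0
    for (a, b), w in traffic.items():
        if v not in (a, b) or a == b:
            continue
        other = b if a == v else a
        if domain[other] == target:
            gain += w        # this edge would become intra-domain
        elif domain[other] == domain[v]:
            gain -= w        # this edge would become inter-domain
    return gain

def minimize_intercontroller_cost(traffic, domain, domains):
    """Greedily swap elements between domains while any move has gain > 0."""
    improved = True
    while improved:
        improved = False
        for v in list(domain):
            best = max(domains, key=lambda d: swap_gain(traffic, domain, v, d))
            if best != domain[v] and swap_gain(traffic, domain, v, best) > 0:
                domain[v] = best
                improved = True
    return domain

# Toy topology: heavy s1-s2 traffic crosses the initial domain boundary
traffic = {('s1', 's2'): 10, ('s2', 's3'): 1}
domain = minimize_intercontroller_cost(
    traffic, {'s1': 'A', 's2': 'B', 's3': 'B'}, ['A', 'B'])
inter_cost = sum(w for (a, b), w in traffic.items() if domain[a] != domain[b])
```

Each accepted move strictly reduces the inter-domain cost, so the loop terminates; a practical version would add capacity or balance constraints per controller.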

  5. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

We discuss search, minimal path, and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly and are related to extremal dynamics in physics. The minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and for invasion percolation (without trapping), respectively.
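
As an illustration of the minimal-cost-path side, Dijkstra's algorithm can find the ground-state path of a directed polymer on a lattice with random site energies. The directed-lattice setup and cost grid below are our assumptions for a runnable sketch, not the paper's exact model.

```python
import heapq

def min_path_cost(cost, n):
    """Minimal cost of a directed path from (0,0) to (n-1,n-1),
    moving only right or down, via Dijkstra's algorithm."""
    dist = {(0, 0): cost[0][0]}
    heap = [(cost[0][0], (0, 0))]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == (n - 1, n - 1):
            return d
        if d > dist.get((i, j), float('inf')):
            continue                       # stale heap entry
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < n and nj < n:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), float('inf')):
                    dist[(ni, nj)] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist[(n - 1, n - 1)]

# Random-medium toy: the cheap "polymer" hugs the left and bottom edges
cost = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
best = min_path_cost(cost, 3)   # down, down, right, right
```

In the physics reading, the heap pop order is exactly the extremal dynamics: the lowest-energy frontier site is always invaded first.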

  6. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
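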

  7. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with two objectives: minimizing total LPC and minimizing total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II-based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for the observed crossover and mutation probabilities.

  8. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with two objectives: minimizing total LPC and minimizing total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II-based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for the observed crossover and mutation probabilities. PMID:24963513

  9. Second-order polynomial model to solve the least-cost lumber grade mix problem

    Treesearch

    Urs Buehlmann; Xiaoqiu Zuo; R. Edward Thomas

    2010-01-01

    Material costs when cutting solid wood parts from hardwood lumber for secondary wood products manufacturing account for 20 to 50 percent of final product cost. These costs can be minimized by proper selection of the lumber quality used. The lumber quality selection problem is referred to as the least-cost lumber grade mix problem in the industry. The objective of this...

  10. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.

  11. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve of a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when the a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem over the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks illustrates the applications of the proposed algorithm.
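
The top-k building block can be sketched as a quickselect-style partial sort: partition until the boundary at position k is isolated, then sort only the k-element prefix. This is a generic quicksort variation we supply for illustration; the paper's cascading cost model around it is not reproduced here.

```python
import random

def top_k(values, k):
    """Return the k largest values in descending order without
    fully sorting the rest (quickselect + sort of the prefix)."""
    vals = list(values)
    if k >= len(vals):
        return sorted(vals, reverse=True)
    lo, hi = 0, len(vals) - 1
    while lo < hi:
        pivot = vals[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                        # Hoare partition, large values left
            while vals[i] > pivot:
                i += 1
            while vals[j] < pivot:
                j -= 1
            if i <= j:
                vals[i], vals[j] = vals[j], vals[i]
                i, j = i + 1, j - 1
        if k - 1 <= j:
            hi = j                           # boundary lies in the left part
        elif k - 1 >= i:
            lo = i                           # boundary lies in the right part
        else:
            break                            # boundary sits inside pivot band
    return sorted(vals[:k], reverse=True)
```

Only the k-prefix is ever fully sorted, which is what makes the per-step expected cost depend on the chosen k.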

  12. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) must maximize the number of observable proposals and the overall scientific priority, and minimize the overall slew cost generated by telescope shifting, while taking into account constraints including the visibility of astronomical objects, user-defined observable times, and the avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem; the optimal schedule can then be found by any MCMF algorithm. Then, to minimize the slew cost of the generated schedule, we devise a maximally-matchable-edge detection method to reduce the problem size and propose a backtracking algorithm to find the perfect matching with minimum slew cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler increases the usage of available time with high scientific priority and significantly reduces the slew cost in a very short time.
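
The MCMF formulation can be illustrated with a tiny assignment network: source to proposals, proposals to time slots (slew cost on the edge), slots to sink. The solver below is a generic successive-shortest-paths implementation with Bellman-Ford, and the proposal/slot numbers and costs are our toy assumptions, not FAST data.

```python
def min_cost_max_flow(n, edges, s, t):
    """edges: iterable of (u, v, capacity, cost). Returns (max_flow, min_cost)."""
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual edge
    flow = total = 0
    while True:
        INF = float('inf')
        dist = [INF] * n
        dist[s] = 0
        parent = [None] * n
        for _ in range(n - 1):              # Bellman-Ford on residual graph
            for u in range(n):
                if dist[u] == INF:
                    continue
                for ei, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, ei)
        if parent[t] is None:
            return flow, total              # no augmenting path remains
        push, v = INF, t
        while v != s:                       # bottleneck capacity on the path
            u, ei = parent[v]
            push = min(push, graph[u][ei][1])
            v = u
        v = t
        while v != s:                       # apply the augmentation
            u, ei = parent[v]
            graph[u][ei][1] -= push
            graph[v][graph[u][ei][3]][1] += push
            v = u
        flow += push
        total += push * dist[t]

# Toy instance: 2 proposals (nodes 1-2), 2 slots (nodes 3-4), s=0, t=5
edges = [
    (0, 1, 1, 0), (0, 2, 1, 0),            # source -> proposals
    (1, 3, 1, 2), (1, 4, 1, 5),            # proposal 1 -> slots (slew cost)
    (2, 3, 1, 1), (2, 4, 1, 4),            # proposal 2 -> slots
    (3, 5, 1, 0), (4, 5, 1, 0),            # slots -> sink
]
flow, cost = min_cost_max_flow(6, edges, 0, 5)
```

Maximizing flow first schedules as many proposals as possible; the cost term then breaks ties toward the cheapest (lowest-slew) assignment.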

  13. Technological Minimalism: A Cost-Effective Alternative for Course Design and Development.

    ERIC Educational Resources Information Center

    Lorenzo, George

    2001-01-01

    Discusses the use of minimum levels of technology, or technological minimalism, for Web-based multimedia course content. Highlights include cost effectiveness; problems with video streaming, the use of XML for Web pages, and Flash and Java applets; listservs instead of proprietary software; and proper faculty training. (LRW)

  14. Replica Approach for Minimal Investment Risk with Cost

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.

  15. Minimization of transmission cost in decentralized control systems

    NASA Technical Reports Server (NTRS)

    Wang, S.-H.; Davison, E. J.

    1978-01-01

    This paper considers the problem of stabilizing a linear time-invariant multivariable system by using local feedback controllers and some limited information exchange among local stations. The problem of achieving a given degree of stability with minimum transmission cost is solved.

  16. Computer Support for Conducting Supportability Trade-Offs in a Team Setting

    DTIC Science & Technology

    1990-01-01

maintenance visits, and spares costs. To minimize the total system LCC, which includes both acquisition and support costs, a method for obtaining the...from different departments with a range of skills to work for a common goal is not an easy task. Ignoring the logistical concerns, a fundamental problem...

  17. Drug Prohibition in the United States: Costs, Consequences, and Alternatives

    NASA Astrophysics Data System (ADS)

    Nadelmann, Ethan A.

    1989-09-01

"Drug legalization" increasingly merits serious consideration as both an analytical model and a policy option for addressing the "drug problem." Criminal justice approaches to the drug problem have proven limited in their capacity to curtail drug abuse. They also have proven increasingly costly and counterproductive. Drug legalization policies that are wisely implemented can minimize the risks of legalization, dramatically reduce the costs of current policies, and directly address the problems of drug abuse.

  18. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

The Linear Transportation Problem (LTP) is a constrained optimization problem in which we minimize transportation cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel's approximation, Russell's approximation, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solutions produced by PSO.
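
Of the exact starting methods named above, the northwest corner rule is the simplest to sketch: fill the transportation tableau from the top-left cell, exhausting one supply row or demand column at a time. The supply, demand, and cost numbers below are our toy example.

```python
def northwest_corner(supply, demand):
    """Initial feasible allocation for a balanced transportation problem."""
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(supply[i], demand[j])   # ship as much as this cell allows
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                      # row exhausted: move down
        else:
            j += 1                      # column exhausted: move right
    return alloc

supply, demand = [20, 30, 25], [10, 25, 40]
alloc = northwest_corner(supply, demand)
cost = [[2, 3, 1], [5, 4, 8], [5, 6, 8]]
total = sum(alloc[i][j] * cost[i][j] for i in range(3) for j in range(3))
```

Note the rule ignores costs entirely, which is why its feasible start usually needs improvement by Vogel/Russell-style methods or, as in this paper, by a heuristic such as PSOGA.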

  19. Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.

    PubMed

    Ifrim, Sandra

    2015-12-01

The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and to fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is treated as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode signals emitted by casino customers. The model output consists of a historical path of mental states and the cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust both to the suppression of signals by casino clientele facing gambling problems and to misjudgments made by staff regarding clients' mental states. Only if the latter source of error is very pronounced, i.e. judgment is extremely faulty, might the cumulated social costs be distorted.
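
The Viterbi step at the heart of such a model recovers the most likely sequence of hidden mental states from observed behavioral signals. The sketch below uses a generic two-state toy HMM; the state names, probabilities, and signals are invented for illustration and are not Delfabbro et al.'s indicators.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, path) of the most likely hidden-state
    sequence for an observation sequence, by dynamic programming."""
    # V maps each state to (best prob so far, best path ending there)
    V = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        V = {s: max(((p * trans_p[ps][s] * emit_p[s][o], path + [s])
                     for ps, (p, path) in V.items()),
                    key=lambda t: t[0])
             for s in states}
    return max(V.values(), key=lambda t: t[0])

# Toy model: hidden 'calm'/'problem' states, observed behavioral signals
states = ['calm', 'problem']
start_p = {'calm': 0.8, 'problem': 0.2}
trans_p = {'calm': {'calm': 0.7, 'problem': 0.3},
           'problem': {'calm': 0.2, 'problem': 0.8}}
emit_p = {'calm': {'normal': 0.9, 'agitated': 0.1},
          'problem': {'normal': 0.3, 'agitated': 0.7}}
prob, path = viterbi(['normal', 'agitated', 'agitated'],
                     states, start_p, trans_p, emit_p)
```

The decoded path is the "historical path of mental states" the abstract refers to; attaching a social cost to each decoded state then yields the cumulated cost figure.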

  20. Intercell scheduling: A negotiation approach using multi-agent coalitions

    NASA Astrophysics Data System (ADS)

    Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde

    2016-10-01

    Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.

  1. Permanent magnet design for magnetic heat pumps using total cost minimization

    NASA Astrophysics Data System (ADS)

    Teyber, R.; Trevizoli, P. V.; Christiaanse, T. V.; Govindappa, P.; Niknia, I.; Rowe, A.

    2017-11-01

    The active magnetic regenerator (AMR) is an attractive technology for efficient heat pumps and cooling systems. The costs associated with a permanent magnet for near room temperature applications are a central issue which must be solved for broad market implementation. To address this problem, we present a permanent magnet topology optimization to minimize the total cost of cooling using a thermoeconomic cost-rate balance coupled with an AMR model. A genetic algorithm identifies cost-minimizing magnet topologies. For a fixed temperature span of 15 K and 4.2 kg of gadolinium, the optimal magnet configuration provides 3.3 kW of cooling power with a second law efficiency (ηII) of 0.33 using 16.3 kg of permanent magnet material.

  2. A duality framework for stochastic optimal control of complex systems

    DOE PAGES

    Malikopoulos, Andreas A.

    2016-01-01

In this study, we address the problem of minimizing the long-run expected average cost of a complex system consisting of interactive subsystems. We formulate a multiobjective optimization problem of the one-stage expected costs of the subsystems and provide a duality framework to prove that the control policy yielding the Pareto optimal solution minimizes the average cost criterion of the system. We provide the conditions of existence and a geometric interpretation of the solution. For practical situations having constraints consistent with those studied here, our results imply that the Pareto control policy may be of value when we seek to derive online the optimal control policy in complex systems.

  3. Optimal design of the satellite constellation arrangement reconfiguration process

    NASA Astrophysics Data System (ADS)

    Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid

    2016-08-01

In this article, a novel approach is introduced for satellite constellation reconfiguration based on Lambert's theorem. Several critical problems arise in the reconfiguration phase, such as minimizing the overall fuel cost, avoiding collisions between the satellites on the final orbital pattern, and performing the maneuvers necessary to deploy the satellites in the desired positions on the target constellation. To implement the reconfiguration phase of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites in the constellation. Also, the dynamic model of the problem is formulated so that the optimal assignment of satellites to the initial and target orbits and the optimal orbital transfer are combined in one step. Finally, we claim that our presented idea, i.e. coupled non-simultaneous flight of satellites from the initial orbital pattern, leads to minimal cost. The obtained results show that the presented method significantly reduces the cost of the reconfiguration process.

  4. A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction

    PubMed Central

    Fu, Fang; Zhang, Tao

    2016-01-01

Poor quality affects project makespan and total cost negatively, but it can be recovered through repair work during construction. We construct a new non-linear programming model based on the classic multi-mode resource-constrained project scheduling problem, considering repair work. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to Quality-Cost Analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair work, according to the different relationships among activity qualities, namely the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem, based on an adaptive serial schedule generation scheme and an adjusted activity list. In the algorithm, the frog-leaping progress combines the crossover operator of the genetic algorithm with a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance, and it assists in decision making by searching for the appropriate makespan and quality threshold with minimal cost. PMID:27911939

  5. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but not a promising solution from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures of a network's ability to withstand failure scenarios. To improve the resiliency of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the second. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two existing resilience indices and power efficiency are considered for optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to enhance the resiliency indices.

  6. Cost Optimization Model for Business Applications in Virtualized Grid Environments

    NASA Astrophysics Data System (ADS)

    Strebel, Jörg

The advent of Grid computing gives enterprises an ever-increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed-integer optimization model which can be used to minimize the IT expenditures of an enterprise and to support decisions on when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.

  7. Sensitivity of Optimal Solutions to Control Problems for Second Order Evolution Subdifferential Inclusions.

    PubMed

    Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr

In this paper the sensitivity of optimal solutions to control problems described by second order evolution subdifferential inclusions under perturbations of state relations and of cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence, we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Then these two properties are implemented in the considered case.

  8. Improving Learning Performance Through Rational Resource Allocation

    NASA Technical Reports Server (NTRS)

    Gratch, J.; Chien, S.; DeJong, G.

    1994-01-01

    This article shows how rational analysis can be used to minimize learning cost for a general class of statistical learning problems. We discuss the factors that influence learning cost and show that the problem of efficient learning can be cast as a resource optimization problem. Solutions found in this way can be significantly more efficient than the best solutions that do not account for these factors. We introduce a heuristic learning algorithm that approximately solves this optimization problem and document its performance improvements on synthetic and real-world problems.

  9. The Integration of Production-Distribution on Newspapers Supply Chain for Cost Minimization using Analytic Models: Case Study

    NASA Astrophysics Data System (ADS)

    Febriana Aqidawati, Era; Sutopo, Wahyudi; Hisjam, Muh.

    2018-03-01

Newspapers are products with special characteristics: they are perishable, have a short window between production and distribution, carry zero inventory, and lose sales value as time passes. Generally, the production-distribution problem in the newspaper supply chain is to integrate production and distribution planning so as to minimize total cost. The approach used in this article to solve the problem is an analytical model. Several parameters and constraints have been considered in calculating the total cost of integrated newspaper production and distribution over the determined time horizon. Production and marketing managers can use this model as decision support in determining the optimal production and distribution quantities that achieve minimum cost, thereby increasing the company's competitiveness.

  10. Investigating the linearity assumption between lumber grade mix and yield using design of experiments (DOE)

    Treesearch

    Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas

    2004-01-01

    Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...

  11. Optimization of Highway Work Zone Decisions Considering Short-Term and Long-Term Impacts

    DTIC Science & Technology

    2010-01-01

...combination of lane closure and traffic control strategies which can minimize the one-time work zone cost. Considering the complex and combinatorial nature of this optimization problem, a heuristic... (Notation fragments: NV = number of vehicle classes; NPV = net present value; p′(t) = adjusted traffic diversion rate at time t; p(t) = natural diversion rate.)

  12. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

  13. Developing a feasible neighbourhood search for solving hub location problem in a communication network

    NASA Astrophysics Data System (ADS)

    Rakhmawati, Fibri; Mawengkang, Herman; Buulolo, F.; Mardiningsih

    2018-01-01

    The hub location problem with single assignment is the problem of locating hubs and assigning the terminal nodes to hubs in order to minimize the cost of hub installation and the cost of routing the traffic in the network. There may also be capacity restrictions on the amount of traffic that can transit through hubs. This paper discusses how to model the polyhedral properties of the problem and develops a feasible neighbourhood search method to solve the model.
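    For intuition only (this is not the authors' neighbourhood search, and all installation and routing costs below are invented), the single-assignment structure can be seen in a brute-force solver for a tiny instance: open k hubs, then assign every node to its cheapest open hub.

```python
from itertools import combinations

def total_cost(hubs, install, assign_cost):
    """Cost of opening `hubs` plus assigning each node to its cheapest open hub."""
    cost = sum(install[h] for h in hubs)
    for node in range(len(assign_cost)):
        cost += min(assign_cost[node][h] for h in hubs)
    return cost

def best_hub_set(install, assign_cost, k):
    """Enumerate all k-subsets of candidate hub sites (feasible only for tiny n)."""
    n = len(install)
    return min(combinations(range(n), k),
               key=lambda hubs: total_cost(hubs, install, assign_cost))
```

    The number of k-subsets grows combinatorially with the node count, which is exactly why a neighbourhood search over hub sets is used instead for realistic networks.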

  14. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiao; Dong, Jin; Djouadi, Seddik M

    2015-01-01

    The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results demonstrate the effectiveness and power efficiency of the proposed control approach.

  15. Thermal energy storage to minimize cost and improve efficiency of a polygeneration district energy system in a real-time electricity market

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Kody M.; Kim, Jong Suk; Cole, Wesley J.

    2016-10-01

    District energy systems can produce low-cost utilities for large energy networks, but can also be a resource for the electric grid by their ability to ramp production or to store thermal energy by responding to real-time market signals. In this work, dynamic optimization exploits the flexibility of thermal energy storage by determining optimal times to store and extract excess energy. This concept is applied to a polygeneration distributed energy system with combined heat and power, district heating, district cooling, and chilled water thermal energy storage. The system is a university campus responsible for meeting the energy needs of tens of thousands of people. The objective for the dynamic optimization problem is to minimize cost over a 24-h period while meeting multiple loads in real time. The paper presents a novel algorithm to solve this dynamic optimization problem with energy storage by decomposing the problem into multiple static mixed-integer nonlinear programming (MINLP) problems. Another innovative feature of this work is the study of a large, complex energy network which includes the interrelations of a wide variety of energy technologies. Results indicate that a cost savings of 16.5% is realized when the system can participate in the wholesale electricity market.

  16. An Effective Mechanism for Virtual Machine Placement using ACO in IaaS Cloud

    NASA Astrophysics Data System (ADS)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provide numerous resources to meet customer demands. A major challenging problem for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be addressed using heuristic algorithms; in this paper, ant colony optimization (ACO) based virtual machine placement is proposed. The proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple cloud provider environment, and the response time of each cloud provider is monitored periodically so as to minimize delays in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism, with simulations in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time, and the number of migrations.
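    The ACO mechanism itself is not reproduced here; for a feel of the objective (hosting cost plus a response-time penalty, with all numbers invented), a tiny instance can be solved exhaustively, which is the kind of baseline a greedy or ACO heuristic would be compared against.

```python
from itertools import product

def placement_cost(assign, cost, resp, w=1.0):
    """Hosting cost plus weighted response-time penalty for one assignment.
    assign[i] is the provider index chosen for VM i; cost[p][i] is the
    hosting cost of VM i on provider p; resp[p] is provider p's response time."""
    return sum(cost[p][i] + w * resp[p] for i, p in enumerate(assign))

def brute_force(cost, resp, n_vms):
    """Enumerate every assignment of VMs to providers (tiny instances only)."""
    providers = range(len(resp))
    return min(product(providers, repeat=n_vms),
               key=lambda a: placement_cost(a, cost, resp))
```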

  17. Financial Resource Allocation in Higher Education

    ERIC Educational Resources Information Center

    Ušpuriene, Ana; Sakalauskas, Leonidas; Dumskis, Valerijonas

    2017-01-01

    The paper considers a problem of financial resource allocation in a higher education institution. The basic financial management instruments are described, along with a multi-stage cost minimization model that incorporates these instruments into the constraints. Both societal and institutional factors that determine the costs of educating students are…

  18. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  19. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE PAGES

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...

    2017-06-06

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  20. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  1. Beyond Hosting Capacity: Using Shortest Path Methods to Minimize Upgrade Cost Pathways: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gensollen, Nicolas; Horowitz, Kelsey A; Palmintier, Bryan S

    We present in this paper a graph-based, forward-looking algorithm applied to distribution planning in the context of distributed PV penetration. We study the target hosting capacity (THC) problem, where the objective is to find the cheapest sequence of system upgrades to reach a predefined hosting capacity target value. We show in this paper that commonly used short-term cost minimization approaches usually lead to suboptimal solutions. By comparing our method against such myopic techniques on real distribution systems, we show that our algorithm is able to reduce the overall integration costs by looking at future decisions. Because hosting capacity is hard to compute, this problem requires efficient methods to search the space. We demonstrate here that heuristics using domain-specific knowledge can be used to improve the algorithm's performance such that real distribution systems can be studied.
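    The authors' algorithm is not spelled out in this preprint snippet; a generic way to make upgrade sequencing forward-looking is to run Dijkstra over states of the form "set of installed upgrades", so that a cheap upgrade that unlocks a synergy can beat the myopic single-step choice. The upgrade names, costs, and capacity model below are invented for illustration.

```python
import heapq

def cheapest_upgrade_path(upgrades, target, capacity_of):
    """Dijkstra over frozensets of installed upgrades.

    upgrades: {name: cost}; capacity_of(state) may be non-additive, which is
    what makes myopic (one-step-greedy) planning suboptimal."""
    start = frozenset()
    dist = {start: 0.0}
    heap = [(0.0, 0, start, [])]  # (cost, tiebreak, state, path)
    tie = 1
    while heap:
        cost, _, state, path = heapq.heappop(heap)
        if cost > dist.get(state, float('inf')):
            continue  # stale heap entry
        if capacity_of(state) >= target:
            return cost, path
        for name, c in upgrades.items():
            if name in state:
                continue
            nxt = state | {name}
            if cost + c < dist.get(nxt, float('inf')):
                dist[nxt] = cost + c
                heapq.heappush(heap, (cost + c, tie, nxt, path + [name]))
                tie += 1
    return None  # target unreachable with the given upgrades
```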

  2. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper furnishes a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be highly efficient for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling bus voltage magnitudes by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. CSA gives better results than the genetic algorithm (GA) both without and with the SVC.

  3. The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization.

    PubMed

    Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram

    2016-11-01

    Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell's capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified.
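    The convexity that makes ECM tractable can be illustrated on a one-reaction toy model (not the ECM tiered framework; all parameter values invented): for an irreversible Michaelis-Menten reaction carrying flux v, the enzyme demand is e(s) = v(1 + KM/s)/kcat, which falls with substrate level s, while maintaining the metabolite pool is assumed to cost alpha·s. The total is convex in s and a ternary search finds the optimum.

```python
def enzyme_cost(s, v=1.0, kcat=10.0, km=0.5, alpha=1.0):
    """Enzyme needed to carry flux v at substrate level s (Michaelis-Menten),
    plus a linear cost alpha*s for maintaining the metabolite pool."""
    return v * (1.0 + km / s) / kcat + alpha * s

def minimize_convex(f, lo, hi, iters=200):
    """Ternary search for the minimizer of a convex 1-D function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

    For this toy cost the analytic optimum is s* = sqrt(v·KM/(kcat·alpha)), so the search can be checked directly.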

  4. Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis

    NASA Astrophysics Data System (ADS)

    Lozovanu, D.; Pickl, S. W.; Weber, G.-W.

    2004-08-01

    This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the properties of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied to the dual problem. An introduction to the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented, and the construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotonicity for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.

  5. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered, and we also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as discrete optimization problems (a generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.

  6. The environmental cost of subsistence: Optimizing diets to minimize footprints.

    PubMed

    Gephart, Jessica A; Davis, Kyle F; Emery, Kyle A; Leach, Allison M; Galloway, James N; Pace, Michael L

    2016-05-15

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are not only concerned with the monetary cost of food, but also the environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study, based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar across the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the environmental impacts of aquaculture versus capture fisheries, the growing share of aquaculture, and the shifting composition of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result, this study provides a useful tool for decision-makers in linking human nutrition and environmental impacts. Copyright © 2016 Elsevier B.V. All rights reserved.
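    Stigler's problem is a linear program solved by the simplex method; as a stand-in that needs no LP solver, a coarse integer grid search over a four-food toy table (all footprint and nutrient numbers invented, not the study's data) shows the same structure: minimize one footprint subject to nutrient floors.

```python
from itertools import product

# Hypothetical per-serving data: (carbon footprint kgCO2e, protein g, calories kcal)
FOODS = {
    "beans": (0.4, 8.0, 120.0),
    "rice":  (0.6, 3.0, 200.0),
    "fish":  (1.5, 20.0, 150.0),
    "beef":  (7.0, 22.0, 250.0),
}

def min_footprint_diet(protein_req=50.0, cal_req=2000.0, max_servings=12):
    """Grid search over integer servings: minimize total carbon footprint
    subject to protein and calorie floors. Returns (footprint, servings dict)."""
    names = list(FOODS)
    best = None
    for servings in product(range(max_servings + 1), repeat=len(names)):
        fp = sum(s * FOODS[n][0] for s, n in zip(servings, names))
        protein = sum(s * FOODS[n][1] for s, n in zip(servings, names))
        cal = sum(s * FOODS[n][2] for s, n in zip(servings, names))
        if protein >= protein_req and cal >= cal_req:
            if best is None or fp < best[0]:
                best = (fp, dict(zip(names, servings)))
    return best
```

    As in the study, the high-footprint item (beef here) never enters the minimized diet; swapping the objective to a different footprint column would reproduce the tradeoff analysis in miniature.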

  7. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
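    DQM itself quadratically approximates the DADMM subproblem; as a much smaller illustration of "exchanging information with neighbors only", consider the special case where node i holds a value a_i and the global objective is the sum of (x − a_i)², whose minimizer is the average. Plain consensus iterations with Metropolis mixing weights recover it (the three-node path graph and values below are invented).

```python
def metropolis_weights(adj):
    """Symmetric, doubly stochastic mixing matrix from an adjacency list."""
    n = len(adj)
    deg = [len(nb) for nb in adj]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in adj[i]:
            W[i][j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][i] = 1.0 - sum(W[i][j] for j in adj[i])
    return W

def consensus(values, adj, iters=100):
    """Each node repeatedly replaces its value by a weighted average of its
    neighbours' values; all nodes converge geometrically to the global mean."""
    W = metropolis_weights(adj)
    x = list(values)
    n = len(x)
    for _ in range(iters):
        x = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x
```

    DQM and DADMM handle general (non-quadratic) local objectives at a linear rate; this averaging special case only shows the communication pattern.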

  8. Graphical approach for multiple values logic minimization

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  9. Life cycle costing with a discount rate

    NASA Technical Reports Server (NTRS)

    Posner, E. C.

    1978-01-01

    This article studies life cycle costing for a capability needed for the indefinite future, and specifically investigates the dependence of optimal policies on the discount rate chosen. The two costs considered are reprocurement cost and maintenance and operations (M&O) cost. The procurement price is assumed known, and the M&O costs are assumed to be a known, non-decreasing function of the time since the last reprocurement. The problem is to choose the reprocurement time so as to minimize the total cost over a reprocurement period divided by the length of the period. Alternatively, one could assume a discount rate and minimize the total discounted costs into the indefinite future. It is shown that the optimal policy in the presence of a small discount rate hardly depends on the discount rate at all, and leads to essentially the same policy as in the case in which discounting is not considered.
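    The undiscounted version of the policy choice can be sketched numerically: pick the cycle length T minimizing (procurement price + cumulative M&O cost over T)/T. The M&O rate below (linearly growing) and all prices are invented; for m(t) = c·t the analytic optimum is T* = sqrt(2P/c), which the grid search should approximately recover.

```python
def average_cost(T, price, mo_rate, steps=1000):
    """Undiscounted cost per unit time over one reprocurement cycle of length T,
    with the M&O cost rate mo_rate(t) integrated by a simple rectangle rule."""
    dt = T / steps
    mo = sum(mo_rate(i * dt) * dt for i in range(steps))
    return (price + mo) / T

def best_period(price, mo_rate, t_max=50.0, step=0.1):
    """Grid search over candidate cycle lengths."""
    ts = [step * k for k in range(1, int(t_max / step) + 1)]
    return min(ts, key=lambda T: average_cost(T, price, mo_rate))
```

    With P = 100 and m(t) = 2t this returns T close to 10, matching sqrt(2·100/2); the article's point is that adding a small discount rate barely moves this optimum.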

  10. A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Rosu, Serban

    2011-09-01

    This article presents a new approach to some fundamental techniques for solving dynamic programming problems with the use of functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.

  11. Solution Concepts for Distributed Decision-Making without Coordination

    NASA Technical Reports Server (NTRS)

    Beling, Peter A.; Patek, Stephen D.

    2005-01-01

    Consider a single-stage problem in which we have a group of N agents who are attempting to minimize the expected cost of their joint actions, without the benefit of communication or a pre-established protocol but with complete knowledge of the expected cost of any joint set of actions for the group. We call this situation a static coordination problem. The central issue in defining an appropriate solution concept for static coordination problems is how to deal with the fact that if the agents are faced with a set of multiple (mixed) strategies that are equally attractive in terms of cost, a failure of coordination may lead to an expected cost value that is worse than that of any of the strategies in the set. In this proposal, we describe the notion of a general coordination problem, describe initial efforts at developing a solution concept for static coordination problems, and then outline a research agenda aimed at obtaining a complete understanding of solutions to static coordination problems.

  12. Routing and Scheduling Optimization Model of Sea Transportation

    NASA Astrophysics Data System (ADS)

    Barus, Mika Debora br; Asyrafy, Habib; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    This paper examines a routing and scheduling optimization model for sea transportation. One of the issues discussed is the transportation of ships carrying crude oil (tankers) that is distributed to many islands. The main consideration is the cost of transportation, which consists of travel costs and the cost of layover at the ports. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model taking into consideration several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance visited by the tanker and to minimize port costs. To make the model more realistic and the computed cost more accurate, a parameter is added that represents the factor by which cost increases as the tanker is loaded with crude oil.
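    A minimal version of the tanker objective (travel cost plus port layover fees, ignoring oil types and the load-dependent cost factor) can be solved by enumeration for a handful of ports; the distance matrix and fees below are invented.

```python
from itertools import permutations

def route_cost(route, dist, port_fee, travel_rate=1.0):
    """Travel cost along depot -> ports -> depot, plus a layover fee per port.
    dist is a symmetric matrix with index 0 as the depot."""
    legs = [0] + list(route) + [0]
    travel = sum(dist[a][b] for a, b in zip(legs, legs[1:]))
    return travel_rate * travel + sum(port_fee[p] for p in route)

def best_route(dist, port_fee):
    """Enumerate all port orders (factorial growth: tiny instances only)."""
    ports = range(1, len(dist))
    return min(permutations(ports), key=lambda r: route_cost(r, dist, port_fee))
```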

  13. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.

  14. Software Process Improvement through the Removal of Project-Level Knowledge Flow Obstacles: The Perceptions of Software Engineers

    ERIC Educational Resources Information Center

    Mitchell, Susan Marie

    2012-01-01

    Uncontrollable costs, schedule overruns, and poor end product quality continue to plague the software engineering field. Innovations formulated with the expectation to minimize or eliminate cost, schedule, and quality problems have generally fallen into one of three categories: programming paradigms, software tools, and software process…

  15. TTSA: An Effective Scheduling Approach for Delay Bounded Tasks in Hybrid Clouds.

    PubMed

    Yuan, Haitao; Bi, Jing; Tan, Wei; Zhou, MengChu; Li, Bo Hu; Li, Jianqiang

    2017-11-01

    The economy of scale provided by cloud attracts a growing number of organizations and industrial companies to deploy their applications in cloud data centers (CDCs) and to provide services to users around the world. The uncertainty of arriving tasks makes it a big challenge for private CDC to cost-effectively schedule delay bounded tasks without exceeding their delay bounds. Unlike previous studies, this paper takes into account the cost minimization problem for private CDC in hybrid clouds, where the energy price of private CDC and execution price of public clouds both show the temporal diversity. Then, this paper proposes a temporal task scheduling algorithm (TTSA) to effectively dispatch all arriving tasks to private CDC and public clouds. In each iteration of TTSA, the cost minimization problem is modeled as a mixed integer linear program and solved by a hybrid simulated-annealing particle-swarm-optimization. The experimental results demonstrate that compared with the existing methods, the optimal or suboptimal scheduling strategy produced by TTSA can efficiently increase the throughput and reduce the cost of private CDC while meeting the delay bounds of all the tasks.

  16. Optimizing conjunctive use of surface water and groundwater resources with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2014-05-01

    Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. 
However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
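    A drastically simplified version of the backward SDP recursion (one reservoir, no groundwater, discrete storage grid, curtailment cost only; all numbers invented, and nothing like the paper's GA-solved sub-problems) can be written in a few lines.

```python
def sdp_policy(stages, storages, inflows, probs, demand, curtail_cost, cap):
    """Backward stochastic DP for a single reservoir on a discrete storage grid.

    State: storage at the start of a stage; decision: release (on the same
    grid, limited by current storage); randomness: stage inflow with given
    probabilities. Cost: curtail_cost per unit of unmet demand.
    Returns (value tables V[t][s], policy tables policy[t][s])."""
    V = [{s: 0.0 for s in storages} for _ in range(stages + 1)]
    policy = [{} for _ in range(stages)]
    for t in range(stages - 1, -1, -1):
        for s in storages:
            best = None
            for release in storages:
                if release > s:
                    continue  # cannot release more than is stored
                shortage = max(demand - release, 0)
                exp_cost = 0.0
                for q, p in zip(inflows, probs):
                    carry = min(s - release + q, cap)  # spill above capacity
                    nxt = max(x for x in storages if x <= carry)
                    exp_cost += p * (curtail_cost * shortage + V[t + 1][nxt])
                if best is None or exp_cost < best[0]:
                    best = (exp_cost, release)
            V[t][s] = best[0]
            policy[t][s] = best[1]
    return V, policy
```

    The paper's head-dependent pumping costs would enter through a state-dependent cost term, which is exactly what can break the convexity of V and motivates the GA-based sub-problem solver.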

  17. Optimal Inspection of Imports to Prevent Invasive Pest Introduction.

    PubMed

    Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G

    2018-03-01

    The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
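    The paper's exact slippage formula is not reproduced here; under simplifying assumptions (independent draws, lot rejected on any detection, per-unit detection probability d), an expected-slippage function and a greedy budget allocation can be sketched as follows, with all lot data invented.

```python
def expected_slippage(lot_size, sample_size, infest_rate, detect_rate):
    """Expected infested units accepted, assuming the lot is rejected on any
    detection and sampled units are approximated as independent draws."""
    p, d, n = infest_rate, detect_rate, sample_size
    p_accept = (1.0 - p * d) ** n
    # Infested among uninspected units, plus undetected infested in the sample
    # (conditional on acceptance).
    slipped = (lot_size - n) * p + n * p * (1 - d) / (1 - p * d)
    return p_accept * slipped

def allocate_budget(lots, budget, unit=10):
    """Greedy: repeatedly give `unit` more samples to the lot whose expected
    slippage drops the most. lots: list of (lot_size, infest_rate, detect_rate)."""
    samples = [0] * len(lots)
    for _ in range(budget // unit):
        def gain(i):
            n, p, d = lots[i]
            cur = expected_slippage(n, samples[i], p, d)
            new = expected_slippage(n, min(samples[i] + unit, n), p, d)
            return cur - new
        i = max(range(len(lots)), key=gain)
        samples[i] = min(samples[i] + unit, lots[i][0])
    return samples
```

    Consistent with the paper's finding, the marginal-gain rule steers effort toward large, high-infestation lots first.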

  18. Adoption of waste minimization technology to benefit electroplaters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, E.M.K.; Li, C.P.H.; Yu, C.M.K.

    Because of increasingly stringent environmental legislation and enhanced environmental awareness, electroplaters in Hong Kong are paying more heed to protect the environment. To comply with the array of environmental controls, electroplaters can no longer rely solely on the end-of-pipe approach as a means for abating their pollution problems under the particular local industrial environment. The preferred approach is to adopt waste minimization measures that yield both economic and environmental benefits. This paper gives an overview of electroplating activities in Hong Kong, highlights their characteristics, and describes the pollution problems associated with conventional electroplating operations. The constraints of using pollution controlmore » measures to achieve regulatory compliance are also discussed. Examples and case studies are given on some low-cost waste minimization techniques readily available to electroplaters, including dragout minimization and water conservation techniques. Recommendations are given as to how electroplaters can adopt and exercise waste minimization techniques in their operations. 1 tab.« less

  19. Active control of panel vibrations induced by boundary-layer flow

    NASA Technical Reports Server (NTRS)

    Chow, Pao-Liu

    1991-01-01

    Some problems in active control of panel vibration excited by a boundary layer flow over a flat plate are studied. In the first phase of the study, the optimal control problem of vibrating elastic panel induced by a fluid dynamical loading was studied. For a simply supported rectangular plate, the vibration control problem can be analyzed by a modal analysis. The control objective is to minimize the total cost functional, which is the sum of a vibrational energy and the control cost. By means of the modal expansion, the dynamical equation for the plate and the cost functional are reduced to a system of ordinary differential equations and the cost functions for the modes. For the linear elastic plate, the modes become uncoupled. The control of each modal amplitude reduces to the so-called linear regulator problem in control theory. Such problems can then be solved by the method of adjoint state. The optimality system of equations was solved numerically by a shooting method. The results are summarized.

  20. A multi-objective genetic algorithm for a mixed-model assembly U-line balancing type-I problem considering human-related issues, training, and learning

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed

    2016-12-01

    Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers balancing a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part is related to the balancing problem: minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.

  1. Assessment of metal ion concentration in water with structured feature selection.

    PubMed

    Naula, Pekka; Airola, Antti; Pihlasalo, Sari; Montoya Perez, Ileana; Salakoski, Tapio; Pahikkala, Tapio

    2017-10-01

    We propose a cost-effective system for the determination of metal ion concentration in water, addressing a central issue in water resources management. The system combines novel luminometric label array technology with a machine learning algorithm that selects a minimal number of array reagents (modulators) and liquid sample dilutions that enable accurate quantification. The algorithm is able to identify the optimal modulators and sample dilutions, leading to cost reductions since less manual labour and fewer resources are needed. Inferring the ion detector involves a unique type of structured feature selection problem, which we formalize in this paper. We propose a novel Cartesian greedy forward feature selection algorithm for solving the problem. The novel algorithm was evaluated in the concentration assessment of five metal ions and the performance was compared to two known feature selection approaches. The results demonstrate that the proposed system can assist in lowering the costs with minimal loss in accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
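    The quartic structure of the LCIIS objective can be sketched as follows. This toy snippet only evaluates the commutator Frobenius norm for a linear combination of past iterates on random symmetric matrices and scans the normalization constraint line; it is not the constrained Newton solver used in the paper.

```python
import numpy as np

def lciis_objective(coeffs, F_list, D_list):
    """Quartic LCIIS-style objective: squared Frobenius norm of the
    commutator [F(c), D(c)], where F(c) = sum_i c_i F_i and
    D(c) = sum_i c_i D_i are combinations of past iterates."""
    F = sum(c * Fi for c, Fi in zip(coeffs, F_list))
    D = sum(c * Di for c, Di in zip(coeffs, D_list))
    comm = F @ D - D @ F
    return float(np.linalg.norm(comm, "fro") ** 2)

# Toy 2x2 example with two past iterates; scan the constraint line
# c0 + c1 = 1 for the combination minimizing the commutator norm.
rng = np.random.default_rng(0)
F_list = [0.5 * (M + M.T) for M in rng.standard_normal((2, 2, 2))]
D_list = [0.5 * (M + M.T) for M in rng.standard_normal((2, 2, 2))]

grid = np.linspace(-1.0, 2.0, 301)
best = min(grid, key=lambda c0: lciis_objective([c0, 1 - c0], F_list, D_list))
```

    The scanned combination is at least as good as either past iterate alone, which is the basic promise of subspace acceleration.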

  3. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first optimization algorithm designed to exploit this structure. Our algorithm remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, a rigorous proof of convergence to the global minimum of the problem considered is established.

  4. Zero point and zero suffix methods with robust ranking for solving fully fuzzy transportation problems

    NASA Astrophysics Data System (ADS)

    Ngastiti, P. T. B.; Surarso, Bayu; Sutimin

    2018-05-01

    The transportation problem is a distribution problem of shipping a commodity or goods from supply points to demand points at minimum transportation cost. A fuzzy transportation problem is one in which the transport costs, supply, and demand are fuzzy quantities. In a case study at CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets with more than one distributor, we use the zero point and zero suffix methods to find the minimum transportation cost. In implementing both methods, we use the robust ranking technique for the defuzzification process. The study results show that the zero suffix method requires fewer iterations than the zero point method.
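    The robust ranking technique used for defuzzification has a simple form for a triangular fuzzy number ã = (a, b, c): R(ã) = ∫₀¹ ½(a_α^L + a_α^U) dα, which reduces to (a + 2b + c)/4. A minimal numerical sketch:

```python
def robust_rank_triangular(a, b, c, steps=1000):
    """Robust ranking R(ã) = ∫₀¹ ½(a_α^L + a_α^U) dα for a triangular
    fuzzy number (a, b, c), evaluated by midpoint-rule quadrature."""
    total = 0.0
    for k in range(steps):
        alpha = (k + 0.5) / steps            # midpoint of the k-th subinterval
        lower = a + alpha * (b - a)          # lower bound of the alpha-cut
        upper = c - alpha * (c - b)          # upper bound of the alpha-cut
        total += 0.5 * (lower + upper)
    return total / steps

# For a triangular number the integral has the closed form (a + 2b + c) / 4:
r = robust_rank_triangular(1.0, 2.0, 4.0)    # (1 + 2*2 + 4) / 4 = 2.25
```

    Replacing every fuzzy cost, supply, and demand by its rank turns the fuzzy transportation problem into a crisp one that zero point or zero suffix can solve.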

  5. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
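    The contrast between the 0-1 (string error) cost and an integer-valued metric cost can be made concrete with a toy posterior over candidate strings (illustrative values only, not from the paper): the MAP decision and the minimum-expected-edit-distance decision need not coincide.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Toy posterior over candidate strings.
posterior = {"aa": 0.4, "bb": 0.3, "bc": 0.3}

# Bayes decision under 0-1 cost: pick the MAP string.
map_decision = max(posterior, key=posterior.get)

# Bayes decision under edit-distance cost: minimize expected distance.
risk = {w: sum(p * edit_distance(w, v) for v, p in posterior.items())
        for w in posterior}
edit_decision = min(risk, key=risk.get)
```

    Here "aa" is the most probable string, but "bb" and "bc" agree on their first symbol, so a metric-cost decision prefers one of them; this is exactly the string-versus-symbol-error inconsistency the abstract analyzes.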

  6. Cost minimizing of cutting process for CNC thermal and water-jet machines

    NASA Astrophysics Data System (ADS)

    Tavaeva, Anastasia; Kurennov, Dmitry

    2015-11-01

    This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy of computing the objective-function parameters for the optimization problem is investigated. The paper shows that the working tool path speed is not a constant value; it depends on several parameters described in the paper. Relations of the working tool path speed to the number of NC program frames, the length of a straight cut, and the part configuration are presented. Based on these results, correction coefficients for the working tool speed are defined. Additionally, the optimization problem may be solved using a mathematical model that takes into account the additional restrictions of thermal cutting (choice of piercing and output tool points, precedence conditions, thermal deformations). The second part of the paper considers non-standard cutting techniques, which may reduce cutting cost and time compared with standard cutting techniques, and examines the effectiveness of their application. Future research directions are indicated at the end of the paper.

  7. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve the concerned problems for large and flexible molecules is taking center stage with regard to efficient algorithms, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, using analytical gradients for a fast minimization to the next local minimum, has been reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others, such as Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms, for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems for finding the minimal-value potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.

  8. Approaches to eliminate waste and reduce cost for recycling glass.

    PubMed

    Chao, Chien-Wen; Liao, Ching-Jong

    2011-12-01

    In recent years, the issue of environmental protection has received considerable attention. This paper adds to the literature by investigating a scheduling problem in the manufacturing of a glass recycling factory in Taiwan. The objective is to minimize the sum of the total holding cost and loss cost. We first represent the problem as an integer programming (IP) model, and then develop two heuristics based on the IP model to find near-optimal solutions for the problem. To validate the proposed heuristics, comparisons between optimal solutions from the IP model and solutions from the current method are conducted. The comparisons involve two problem sizes, small and large, where the small problems range from 15 to 45 jobs, and the large problems from 50 to 100 jobs. Finally, a genetic algorithm is applied to evaluate the proposed heuristics. Computational experiments show that the proposed heuristics can find good solutions in a reasonable time for the considered problem. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization

    PubMed Central

    Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram

    2016-01-01

    Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell’s capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified. 
PMID:27812109

  10. Current treatment of gram-positive infections: focus on efficacy, safety, and cost minimalization analysis of teicoplanin.

    PubMed

    Crane, V S; Garabedian-Ruffalo, S M

    1992-12-01

    The current health care environment has had a significant impact on hospital Pharmacy and Therapeutics Committee formulary decisions. In evaluating a new therapy for formulary inclusion, a cost savings along with equivalent or an improvement in patient care and safety is optimal. Teicoplanin is an investigational glycopeptide antimicrobial agent with a spectrum of activity similar to vancomycin. Unlike vancomycin, however, teicoplanin has a long elimination half-life permitting administration once daily, and is well tolerated when given intramuscularly. In addition, teicoplanin is associated with a favorable safety profile. Red man syndrome does not appear to be a significant clinical problem. Results of our cost minimalization analysis using the average acquisition costs of vancomycin revealed that teicoplanin (400 mg), at an average acquisition cost of less than $28.46 when administered intravenously and $30.93 when administered intramuscularly, offers a clinically efficacious, safe, and less expensive alternative to vancomycin therapy.

  11. Application of multi-objective optimization to pooled experiments of next generation sequencing for detection of rare mutations.

    PubMed

    Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario

    2014-01-01

    In this paper we propose some mathematical models to plan a Next Generation Sequencing experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage of decreasing the overall costs. Finally, a multi-objective optimization formulation is proposed, where the trade-off between the probability to detect a mutation and overall costs is taken into account. The proposed solutions are designed to provide the following advantages: (i) the solution guarantees mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show replicating pools can decrease overall experimental cost, thus making pooling an interesting option.

  12. Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness

    NASA Astrophysics Data System (ADS)

    Julich, R. J.

    2004-05-01

    The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells to maximize important data collection and to minimize the cost of managing the network. We have employed a genetic algorithm (GA) towards this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic head observations whose net uncertainty is closest to the value representing all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model generated for the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: 1) what is the optimal design strategy for a genetic algorithm to optimize this problem domain; 2) what is the consistency of solutions over several optimization runs; and 3) how do these results compare to what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches.
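    A GA with a two-part fitness of this kind can be sketched as follows. The uncertainty function here is a toy stand-in for the linear statistical inference calculation, and all names and parameter values are illustrative, not from the study.

```python
import random

def toy_uncertainty(subset, weights):
    """Toy stand-in for prediction uncertainty: more (and more
    informative) observations give a lower value."""
    return 1.0 / (1.0 + sum(weights[i] for i in subset))

def fitness(chrom, weights, full_unc, penalty=0.02):
    subset = [i for i, bit in enumerate(chrom) if bit]
    # Part 1: reward closeness to the all-observations uncertainty.
    # Part 2: penalize the number of retained observations (network cost).
    return (-abs(toy_uncertainty(subset, weights) - full_unc)
            - penalty * len(subset) / len(chrom))

def run_ga(weights, pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    n = len(weights)
    full_unc = toy_uncertainty(range(n), weights)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, weights, full_unc), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            child[rng.randrange(n)] ^= 1          # single-bit mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, weights, full_unc))

wrng = random.Random(7)
weights = [wrng.uniform(0.01, 1.0) for _ in range(20)]  # well informativeness
best = run_ga(weights)
```

    Each chromosome is a binary mask over candidate wells, mirroring the keep-or-drop decision for each observation in the monitoring network.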

  13. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
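    The benchmarking idea, adding a cost-per-call to the objective and choosing the number of solver calls that minimizes expected total cost, can be sketched on toy solver outcomes (an illustrative distribution, not the MAX2SAT data from the paper):

```python
import random

def expected_cost(samples, k, cost_per_call, trials=5000, seed=0):
    """Estimate E[min of k independent runs] + k * cost_per_call from
    empirical objective values `samples` of a randomized solver."""
    rng = random.Random(seed)
    est = sum(min(rng.choice(samples) for _ in range(k))
              for _ in range(trials)) / trials
    return est + k * cost_per_call

# Toy solver outcomes: usually mediocre (1.0), occasionally optimal (0.0).
rng = random.Random(42)
samples = [rng.choice([0.0, 1.0, 1.0, 1.0]) for _ in range(400)]

costs = {k: expected_cost(samples, k, cost_per_call=0.05)
         for k in (1, 2, 5, 10, 30)}
best_k = min(costs, key=costs.get)
```

    Too few calls leaves the expected objective high; too many pays more in per-call cost than the remaining improvement is worth, so an intermediate number of restarts is optimal.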

  14. Efficient data communication protocols for wireless networks

    NASA Astrophysics Data System (ADS)

    Zeydan, Engin

    In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate both the reduction in the energy consumption and throughput maximization problems separately using multi-hop data aggregation for correlated data. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges in a finite number of steps iteratively. For throughput maximization, we consider both the interference distribution across the network and correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput and the algorithm is guaranteed to converge with a finite number of steps using best response dynamics. For multiple antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power-level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference and noise ratio at each receiver. In cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. 
By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to pure strategy Nash equilibrium with high probability throughout the iterations in the interference impaired network. On the other hand, the regret-matching learning algorithm is noncooperative and requires minimum amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.

  15. Lumber Cost Minimization through Optimum Grade-Mix Selection

    Treesearch

    Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas; R. Edward Thomas

    2003-01-01

    Rough mills process kiln-dried lumber into components for the furniture and wood products industries. Lumber is a significant portion of total rough mill costs and lumber quality can have a serious impact on mill productivity. Lower quality lumber is less expensive yet is harder to process. Higher quality lumber is more expensive yet easier to process. The problem of...

  16. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  17. A perverse quality incentive in surgery: implications of reimbursing surgeons less for doing laparoscopic surgery.

    PubMed

    Fader, Amanda N; Xu, Tim; Dunkin, Brian J; Makary, Martin A

    2016-11-01

    Surgery is one of the highest priced services in health care, and complications from surgery can be serious and costly. Recently, advances in surgical techniques have allowed surgeons to perform many common operations using minimally invasive methods that result in fewer complications. Despite this, the rates of open surgery remain high across multiple surgical disciplines. This is an expert commentary and review of the contemporary literature regarding minimally invasive surgery practices nationwide, the benefits of less invasive approaches, and how minimally invasive compared with open procedures are differentially reimbursed in the United States. We explore the incentive of the current surgeon reimbursement fee schedule and its potential implications. A surgeon's preference to perform minimally invasive compared with open surgery remains highly variable in the U.S., even after adjustment for patient comorbidities and surgical complexity. Nationwide administrative claims data across several surgical disciplines demonstrates that minimally invasive surgery utilization in place of open surgery is associated with reduced adverse events and cost savings. Reducing surgical complications by increasing adoption of minimally invasive operations has significant cost implications for health care. However, current U.S. payment structures may perversely incentivize open surgery and financially reward physicians who do not necessarily embrace newer or best minimally invasive surgery practices. Utilization of minimally invasive surgery varies considerably in the U.S., representing one of the greatest disparities in health care. Existing physician payment models must translate the growing body of research in surgical care into physician-level rewards for quality, including choice of operation. Promoting safe surgery should be an important component of a strong, value-based healthcare system. 
Resolving the potentially perverse incentives in paying for surgical approaches may help address disparities in surgical care, reduce the prevalent problem of variation, and help contain health care costs.

  18. Predicting camber, deflection, and prestress losses in prestressed concrete members.

    DOT National Transportation Integrated Search

    2011-07-01

    Accurate predictions of camber and prestress losses for prestressed concrete bridge girders are essential to minimizing the frequency and cost of construction problems. The time-dependent nature of prestress losses, variable concrete properties, and ...

  19. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  20. An Integer Programming Model For Solving Heterogeneous Vehicle Routing Problem With Hard Time Window considering Service Choice

    NASA Astrophysics Data System (ADS)

    Susilawati, Enny; Mawengkang, Herman; Efendi, Syahril

    2018-01-01

    Generally, a Vehicle Routing Problem with time windows (VRPTW) can be defined as the problem of determining the optimal set of routes used by a fleet of vehicles to serve a given set of customers with service time restrictions; the objective is to minimize the total travel cost (related to the travel times or distances) and operational cost (related to the number of vehicles used). In this paper we address a variant of the VRPTW in which the fleet of vehicles is heterogeneous due to the different sizes of demand from customers. The problem, called Heterogeneous VRP (HVRP), also includes service levels. We use an integer programming model to describe the problem. A feasible neighbourhood approach is proposed to solve the model.

  1. An Interactive Life Cycle Cost Forecasting Tool

    DTIC Science & Technology

    1990-03-01

    A tool was developed for Monte Carlo...and B. Note that this is for a given configuration. The E represents effectiveness and is equated to some function of the quantity of systems A and B...purchased. Either strategy, maximizing effectiveness or minimizing cost, leads to some type of cost comparison among the proposed systems. The problem

  2. Scaling and long-range dependence in option pricing V: Multiscaling hedging and implied volatility smiles under the fractional Black-Scholes model with transaction costs

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Tian

    2011-05-01

    This paper deals with the problem of discrete time option pricing using the fractional Black-Scholes model with transaction costs. Through the ‘anchoring and adjustment’ argument in a discrete time setting, a European call option pricing formula is obtained. The minimal price of an option under transaction costs is obtained. In addition, the relation between scaling and implied volatility smiles is discussed.

  3. A multi-objective optimization model for hub network design under uncertainty: An inexact rough-interval fuzzy approach

    NASA Astrophysics Data System (ADS)

    Niakan, F.; Vahdani, B.; Mohammadi, M.

    2015-12-01

    This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emissions costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that two nodes may be connected by several types of arcs, which differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, determining the capacity of the hubs is part of the decision-making procedure, and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.

  4. Traveling salesman problem with a center.

    PubMed

    Lipowski, Adam; Lipowska, Dorota

    2005-06-01

    We study a traveling salesman problem where the path is optimized with a cost function that includes its length L as well as a certain measure C of its distance from the geometrical center of the graph. Using simulated annealing (SA) we show that such a problem has a transition point that separates two phases differing in the scaling behavior of L and C, in efficiency of SA, and in the shape of minimal paths.
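The two-term objective can be sketched with a simulated annealing loop over 2-opt moves. The center measure C below (summed distance of edge midpoints from the centroid) and the weight lam are illustrative stand-ins for the paper's exact definitions of C and the coupling between L and C.

```python
import math
import random

# Sketch: TSP with a center term, cost = L + lam * C, optimized by simulated
# annealing with 2-opt reversals. C here is an illustrative center measure.

def tour_cost(tour, pts, lam):
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    L = C = 0.0
    n = len(tour)
    for i in range(n):
        p, q = pts[tour[i]], pts[tour[(i + 1) % n]]
        L += math.dist(p, q)
        C += math.dist(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2), (cx, cy))
    return L + lam * C

def anneal(pts, lam, steps=20000, T0=1.0, cool=0.999, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    cur = best = tour_cost(tour, pts, lam)
    T = T0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        c = tour_cost(cand, pts, lam)
        if c < cur or rng.random() < math.exp(-(c - cur) / T):
            tour, cur = cand, c
            best = min(best, cur)
        T *= cool                 # geometric cooling schedule
    return tour, best
```

Sweeping lam while recording L and C separately is the kind of experiment that would expose the transition point the abstract describes.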

  5. Clustering Qualitative Data Based on Binary Equivalence Relations: Neighborhood Search Heuristics for the Clique Partitioning Problem

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Kohn, Hans-Friedrich

    2009-01-01

    The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal…
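A minimal neighborhood-search heuristic of the kind studied for the CPP relocates one vertex at a time to whichever cluster most reduces the summed within-cluster edge cost. The symmetric cost matrix and the fixed number of clusters below are illustrative; the paper's heuristics are more elaborate.

```python
# Sketch: single-vertex relocation local search for the clique partitioning
# problem. w is a symmetric edge-cost matrix; negative costs reward grouping.

def partition_cost(clusters, w):
    cost = 0.0
    for members in clusters.values():
        m = sorted(members)
        for a in range(len(m)):
            for b in range(a + 1, len(m)):
                cost += w[m[a]][m[b]]
    return cost

def local_search(n, w, k, assign=None):
    assign = assign or [v % k for v in range(n)]
    clusters = {c: {v for v in range(n) if assign[v] == c} for c in range(k)}
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cur = assign[v]
            stay = sum(w[v][u] for u in clusters[cur] if u != v)
            for c in clusters:
                if c == cur:
                    continue
                move = sum(w[v][u] for u in clusters[c])
                if move < stay:           # strictly improving relocation
                    clusters[cur].discard(v)
                    clusters[c].add(v)
                    assign[v] = c
                    cur, stay = c, move
                    improved = True
    return assign, clusters

# Two natural groups: {0,1} and {2,3} (within-group cost -1, across +1).
w = [[0, -1, 1, 1],
     [-1, 0, 1, 1],
     [1, 1, 0, -1],
     [1, 1, -1, 0]]
assign, clusters = local_search(4, w, 2, assign=[0, 1, 0, 1])
```

Starting from the deliberately bad split {0,2} / {1,3}, the search recovers the natural grouping with total within-cluster cost -2.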

  6. Oral health care for pregnant and postpartum women.

    PubMed

    Goldie, M Perno

    2003-08-01

    Pregnancy may pose a number of concerns to the mother and the foetus, including systemic and oral issues that affect health. Transmission of caries-causing bacteria is one problem that can be minimized by utilizing simple, cost-effective measures. Chlorhexidine rinses and xylitol-containing chewing gum are discussed as possible solutions to this tremendous public health problem.

  7. Unifying cost and information in information-theoretic competitive learning.

    PubMed

    Kamimura, Ryotaro

    2005-01-01

    In this paper, we introduce costs into the framework of information maximization and try to maximize the ratio of information to its associated cost. We have shown that competitive learning is realized by maximizing mutual information between input patterns and competitive units. One shortcoming of the method is that maximizing information does not necessarily produce representations faithful to input patterns. Information maximization primarily focuses on the parts of input patterns that are used to distinguish between patterns. Therefore, we introduce a cost that represents the average distance between input patterns and connection weights. By minimizing the cost, the final connection weights reflect the input patterns well. We applied the method to a political data analysis, a voting attitude problem and the Wisconsin cancer problem. Experimental results confirmed that, when the cost was introduced, representations faithful to the input patterns were obtained. In addition, improved generalization performance was obtained within a relatively short learning time.

  8. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
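The paper's formulas for Yuen's trimmed-mean test are not reproduced here, but the classical cost-minimizing allocation rule that such derivations build on is easy to state: for per-subject costs c1, c2 and standard deviations s1, s2, the ratio n1/n2 = (s1/s2) * sqrt(c2/c1) minimizes total cost c1*n1 + c2*n2 at a fixed variance of the mean difference. The numbers below are illustrative.

```python
import math

# Sketch: classical optimal two-group sample size allocation under unit costs.

def optimal_ratio(s1, s2, c1, c2):
    """Cost-minimizing allocation ratio n1/n2."""
    return (s1 / s2) * math.sqrt(c2 / c1)

def allocate(total_budget, s1, s2, c1, c2):
    """Split a budget across two groups at the optimal ratio (continuous n)."""
    r = optimal_ratio(s1, s2, c1, c2)     # n1 = r * n2
    n2 = total_budget / (c1 * r + c2)
    return r * n2, n2

n1, n2 = allocate(1000, s1=2.0, s2=1.0, c1=4.0, c2=1.0)
print(round(n1, 2), round(n2, 2))  # 200.0 200.0
```

In practice the continuous solution is rounded up to integers, and a robust test such as Yuen's replaces means and standard deviations with their trimmed analogues.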

  9. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized, under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension, and in certain cases non-convex. For the objective of minimal reactive power an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and increased performance in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463

  10. Demulsification key to production efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svetgoff, J.A.

    1988-08-01

    Concern over declining profitability in the petroleum industry has generated renewed interest in reducing costs and enhancing profits. This article discusses one area often overlooked when trying to optimize profits: the process of demulsification. Resolving crude oil emulsions is a costly operational problem in most producing fields. Because it is one of the least understood facets of the petroleum industry, the costs associated with demulsification are often excessive. Although there are many similarities, desalting is a separate subject from demulsification. The removal of produced water from crude oil is the primary goal of demulsification, while minimizing the salt content in crude oil is the object of a desalting program. Understanding demulsification and desalting concepts is important to design engineers. The author discusses how this knowledge enables them to design systems that minimize operating costs while meeting present, as well as future, needs.

  11. Exploring the Pareto frontier using multisexual evolutionary algorithms: an application to a flexible manufacturing problem

    NASA Astrophysics Data System (ADS)

    Bonissone, Stefano R.; Subbu, Raj

    2002-12-01

    In multi-objective optimization (MOO) problems we need to optimize many, possibly conflicting, objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction and pass their genetic material to the next generation; weak solutions are removed from the population. In MOO problems, this fitness function is vector-valued, i.e. it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (the one yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
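The non-dominated frontier the abstract refers to can be extracted with a simple dominance filter. For two minimization objectives, say total cost and production time, a solution survives only if no other solution is at least as good in both objectives and strictly better in one. The objective values below are illustrative.

```python
# Sketch: Pareto (non-dominated) filtering for two minimization objectives,
# e.g. (total cost, production time). Data are illustrative.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

sols = [(10, 5), (8, 7), (12, 4), (9, 6), (11, 6)]
print(sorted(pareto_front(sols)))  # [(8, 7), (9, 6), (10, 5), (12, 4)]
```

Here (11, 6) is dominated by (9, 6) and is dropped; the rest trade off the two objectives and form the frontier.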

  12. Management of a stage-structured insect pest: an application of approximate optimization.

    PubMed

    Hackett, Sean C; Bonsall, Michael B

    2018-06-01

    Ecological decision problems frequently require the optimization of a sequence of actions over time, where actions may have both immediate and downstream effects. Dynamic programming can solve such problems only if the dimensionality is sufficiently low. Approximate dynamic programming (ADP) provides a suite of methods applicable to problems of arbitrary complexity at the expense of guaranteed optimality. The most easily generalized method is the look-ahead policy: a brute-force algorithm that identifies reasonable actions by constructing and solving a series of temporally truncated approximations of the full problem over a defined planning horizon. We develop and apply this approach to a pest management problem inspired by the Mediterranean fruit fly, Ceratitis capitata. The model aims to minimize the cumulative costs of management actions and medfly-induced losses over a single 16-week season. The medfly population is stage-structured and grows continuously, while management decisions are made at discrete, weekly intervals. For each week, the model chooses between inaction, insecticide application, or one of six sterile insect release ratios. Look-ahead policy performance is evaluated over a range of planning horizons, two levels of crop susceptibility to medfly and three levels of pesticide persistence. In all cases, the actions proposed by the look-ahead policy are contrasted to those of a myopic policy that minimizes costs over only the current week. We find that look-ahead policies always out-performed a myopic policy, and decision quality is sensitive to the temporal distribution of costs relative to the planning horizon: it is beneficial to extend the planning horizon when it excludes pertinent costs. However, longer planning horizons may reduce decision quality when major costs are resolved imminently. ADP methods such as the look-ahead-policy-based approach developed here render problems that are intractable to dynamic programming amenable to inference, but they should be applied carefully, as their flexibility comes at the expense of guaranteed optimality. However, given the complexity of many ecological management problems, the capacity to propose a strategy that is "good enough" using a more representative problem formulation may be preferable to an optimal strategy derived from a simplified model. © 2018 by the Ecological Society of America.
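A look-ahead (rolling-horizon) policy can be sketched in a few lines: at each week, enumerate action sequences over a short horizon, take the first action of the cheapest sequence, apply it, and re-plan. The dynamics, costs, and two-action set below are a toy stand-in for the stage-structured medfly model, not the paper's formulation.

```python
from itertools import product

# Sketch: look-ahead policy on a toy pest model. Each action has a
# (population growth factor, action cost); weekly damage is proportional
# to the resulting pest load. All numbers are illustrative.
ACTIONS = {"none": (1.6, 0.0), "spray": (0.6, 3.0)}

def step(pest, action):
    growth, a_cost = ACTIONS[action]
    new_pest = pest * growth
    return new_pest, a_cost + 0.5 * new_pest   # action cost + damage cost

def lookahead_action(pest, horizon):
    """First action of the cheapest action sequence over the horizon."""
    best_cost, best_first = float("inf"), None
    for seq in product(ACTIONS, repeat=horizon):
        p, cost = pest, 0.0
        for a in seq:
            p, c = step(p, a)
            cost += c
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

def run_season(pest=1.0, weeks=8, horizon=3):
    total = 0.0
    for _ in range(weeks):
        a = lookahead_action(pest, horizon)   # plan, act, then re-plan
        pest, c = step(pest, a)
        total += c
    return total
```

With horizon=1 this collapses to the myopic policy the paper uses as a baseline; the brute-force enumeration grows exponentially in the horizon, which is why the paper treats horizon length as a design choice.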

  13. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified on a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of optimization problems that can be efficiently handled.
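The "dose minimization at constant cost" formulation can be illustrated on a toy model: a quadratic dose objective f(x) = x'Qx over two hypothetical shield-layer thicknesses, with a linear budget constraint c'x = B. The Lagrange condition Qx = lambda*c gives a closed-form optimum for positive definite Q; SWAN/NPSOL handles the general nonlinear case that has no such closed form.

```python
# Sketch: minimize x'Qx subject to c'x = B for a 2x2 positive definite Q.
# The optimum is x = Q^{-1} c scaled so the budget constraint holds.
# Q, c, and B are illustrative, not from the paper.

def solve_quadratic_with_budget(Q, c, B):
    (a, b), (d, e) = Q
    det = a * e - b * d
    y = ((e * c[0] - b * c[1]) / det,          # y = Q^{-1} c
         (a * c[1] - d * c[0]) / det)
    scale = B / (c[0] * y[0] + c[1] * y[1])    # enforce c'x = B
    return (y[0] * scale, y[1] * scale)

Q = ((2.0, 0.0), (0.0, 8.0))   # dose curvature per layer (illustrative)
c = (1.0, 1.0)                 # unit costs per layer
x = solve_quadratic_with_budget(Q, c, 10.0)
print(x)  # (8.0, 2.0)
```

The optimizer spends most of the budget on the layer with the smaller dose curvature: here x = (8, 2) gives dose 160, versus 250 for an equal split (5, 5) at the same cost.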

  14. Optimum Tolerance Design Using Component-Amount and Mixture-Amount Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Ozler, Cenk; Sehirlioglu, Ali Kemal

    2013-08-01

    One type of tolerance design problem involves optimizing component and assembly tolerances to minimize the total cost (the sum of manufacturing cost and quality loss). Previous literature recommended using traditional response surface (RS) designs and models to solve this type of tolerance design problem. In this article, component-amount (CA) and mixture-amount (MA) approaches are proposed as more appropriate for solving this type of tolerance design problem. The advantages of the CA and MA approaches over the RS approach are discussed, as are reasons for choosing between the CA and MA approaches. The CA and MA approaches (experimental design, response modeling, and optimization) are illustrated using real examples.

  15. Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting

    NASA Astrophysics Data System (ADS)

    Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt

    2018-04-01

    We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge this is the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters) which helps better understand optimal networks and their minimal costs.

  16. The Time Window Vehicle Routing Problem Considering Closed Route

    NASA Astrophysics Data System (ADS)

    Irsa Syahputri, Nenna; Mawengkang, Herman

    2017-12-01

    The Vehicle Routing Problem (VRP) determines the optimal set of routes used by a fleet of vehicles to serve a given set of customers on a predefined graph; the objective is to minimize the total travel cost (related to the travel times or distances) and the operational cost (related to the number of vehicles used). In this paper we study a variant on the predefined graph: given a weighted graph G, vertices a and b, and a set X of closed paths in G, find the minimum total travel cost of an a-b path P such that no path in X is a subpath of P. Path P is allowed to repeat vertices and edges. We use an integer programming model to describe the problem. A feasible neighbourhood approach is proposed to solve the model.

  17. Coping with coal quality impacts on power plant operation and maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatt, R.

    1998-12-31

    The electric power industry is rapidly changing due to deregulation. The author was present one hot day in June of this year when a southeastern utility company was selling electricity for $5,000.00 per megawatt at a cost of $85.00. Typical power costs range from the mid-teens at night to about $30.00 on a normal day. The free market will challenge the power industry in many ways. Fuel is the major cost in electric power. In a regulated industry the cost of fuel was passed on to the customers. Fuels were chosen to minimize problems such as handling, combustion, ash deposits and other operational and maintenance concerns. Tight specifications were used to eliminate or minimize coals that caused problems; these tight specifications raised the price of fuel by minimizing competition. As power stations become individual profit centers, plant management must take a more proactive role in fuel selection. Understanding how coal quality impacts plant performance and cost allows better fuel selection decisions. How well plants take advantage of this knowledge may determine whether they will be able to compete in a free marketplace. The coal industry itself can provide many insights on how to survive in this type of market. Coal mines today must remain competitive or be shut down. The consolidation of the coal industry indicates the trends that can occur in a competitive market. These trends have already started, and will continue in the utility industry. This paper discusses several common situations concerning coal quality and potential solutions for the plant to consider. All these examples have mill maintenance and performance issues in common, which is indicative of how important pulverizers are to the successful operation of a power plant.

  18. The Development of Patient Scheduling Groups for an Effective Appointment System

    PubMed Central

    2016-01-01

    Summary Background Patient access to care and long wait times have been identified as major problems in outpatient delivery systems. These aspects impact medical staff productivity, service quality, clinic efficiency, and health-care cost. Objectives This study proposed to redesign existing patient types into scheduling groups so that the total cost of clinic flow and scheduling flexibility was minimized. The optimal scheduling groups aimed to improve clinic efficiency and accessibility. Methods The proposed approach used a simulation optimization technique and was demonstrated in a primary care physician clinic. Patient types included emergency/urgent care (ER/UC), follow-up (FU), new patient (NP), office visit (OV), physical exam (PE), and well child care (WCC). A scheduling group design was developed for this physician. The approach steps were to collect physician treatment time data for each patient type, form the possible scheduling groups, simulate daily clinic flow and patient appointment requests, calculate the costs of clinic flow as well as appointment flexibility, and find the scheduling group that minimized the total cost. Results The cost of clinic flow was minimized with four scheduling groups, an 8.3% reduction from a single group. The four groups were: 1. WCC; 2. OV; 3. FU and ER/UC; and 4. PE and NP. The cost of flexibility was always minimized with a single group. The total cost was minimized with two groups: WCC was kept separate and the others were grouped together. The total cost reduction was 1.3% from a single group. Conclusions This study provided an alternative method of redesigning patient scheduling groups to address the impact on both clinic flow and appointment accessibility. Balancing the two ensured the feasibility of addressing the recognized issues of patient service and access to care. The robustness of the proposed method to changes in clinic conditions is also discussed. PMID:27081406

  19. Evaluation of consolidation problems in thicker Portland cement concrete pavements

    DOT National Transportation Integrated Search

    2003-08-01

    Minimizing the amount of entrapped air in concrete is necessary to produce quality concrete with a longer pavement performance life, lower maintenance costs and fewer delays to the roadway users. Good quality concrete with low entrapped air content w...

  20. Routing Algorithm based on Minimum Spanning Tree and Minimum Cost Flow for Hybrid Wireless-optical Broadband Access Network

    NASA Astrophysics Data System (ADS)

    Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen

    2012-03-01

    In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been carried out under different types of traffic source.
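The spanning-tree half of such an algorithm is standard: Kruskal's algorithm with union-find selects the cheapest edges that connect the network without cycles. The edge weights below stand in for link delays in the hybrid access network; the graph is illustrative.

```python
# Sketch: minimum spanning tree via Kruskal's algorithm with union-find.
# Edges are (weight, u, v) tuples; weights stand in for link delays.

def kruskal(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree, total = [], 0.0
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # accept only cycle-free edges
            parent[ru] = rv
            tree.append((u, v))
            total += w
    return tree, total

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))
```

A minimum cost flow computation over the resulting tree (or over the full graph with tree-derived costs) would complete the MSTMCF pipeline; that second stage is not sketched here.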

  1. Algorithms for routing vehicles and their application to the paratransit vehicle scheduling problem.

    DOT National Transportation Integrated Search

    2012-03-01

    As the demand for paratransit services increases, there is a constant pressure to maintain the quality of : service provided to the customers while minimizing the cost of operation; this is especially important as : the availability of public funding...

  2. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems just a few times.

  3. Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, which is considered a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is regarded as the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGAs). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions, namely the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve superimposed by a sine component. These three cost curves represent the generator fuel cost functions of a simplified model and of more accurate models of a combined-cycle generating unit and a thermal unit with the valve-point loading effect, respectively. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP, with better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.
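One crossover operator of the kind borrowed from real-coded GAs is BLX-alpha blend crossover on real-valued decision vectors (e.g. generator outputs). The alpha value, bounds, and vectors below are illustrative choices, not the paper's settings.

```python
import random

# Sketch: BLX-alpha blend crossover. Each child gene is sampled uniformly
# from the parents' interval, extended by alpha times its span on each side.

def blx_alpha(p1, p2, alpha=0.5, rng=random):
    child = []
    for x, y in zip(p1, p2):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo
        child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
    return child

rng = random.Random(42)
# Parents: hypothetical two-generator output vectors (MW).
child = blx_alpha([100.0, 50.0], [120.0, 40.0], alpha=0.5, rng=rng)
```

In an EP loop, children produced this way would be repaired or penalized for power-balance and limit violations before selection; larger alpha widens exploration beyond the parents' interval.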

  4. Discrete homotopy analysis for optimal trading execution with nonlinear transient market impact

    NASA Astrophysics Data System (ADS)

    Curato, Gianbiagio; Gatheral, Jim; Lillo, Fabrizio

    2016-10-01

    Optimal execution in financial markets is the problem of how to trade a large quantity of shares incrementally in time in order to minimize the expected cost. In this paper, we study the problem of optimal execution in the presence of nonlinear transient market impact. Mathematically, this problem is equivalent to solving a strongly nonlinear integral equation, which in our model is a weakly singular Urysohn equation of the first kind. We propose an approach based on the Homotopy Analysis Method (HAM), whereby a well-behaved initial trading strategy is continuously deformed to lower the expected execution cost. Specifically, we propose a discrete version of the HAM, i.e. the DHAM approach, in order to use the method when the integrals to compute have no closed-form solution. We find that the optimal solution is front-loaded for concave instantaneous impact, even when the investor is risk neutral. More importantly, we find that the expected cost of the DHAM strategy is significantly smaller than the cost of conventional strategies.

  5. A study of commuter airplane design optimization

    NASA Technical Reports Server (NTRS)

    Roskam, J.; Wyatt, R. D.; Griswold, D. A.; Hammer, J. L.

    1977-01-01

    Problems of commuter airplane configuration design were studied to effect a minimization of direct operating costs. Factors considered were the minimization of fuselage drag, methods of wing design, and the estimated drag of an airplane submerged in a propeller slipstream; all design criteria were studied under a set of fixed performance, mission, and stability constraints. Configuration design data were assembled for application by a computerized design methodology program similar to the NASA-Ames General Aviation Synthesis Program.

  6. Classical statistical mechanics approach to multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Marzolino, U.; Parisi, G.; Pascazio, S.

    2010-06-01

    We characterize the multipartite entanglement of a system of n qubits in terms of the distribution function of the bipartite purity over balanced bipartitions. We search for maximally multipartite entangled states, whose average purity is minimal, and recast this optimization problem into a problem of statistical mechanics, by introducing a cost function, a fictitious temperature and a partition function. By investigating the high-temperature expansion, we obtain the first three moments of the distribution. We find that the problem exhibits frustration.

  7. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
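Under Gaussian measurement noise, the maximum likelihood cost the abstract describes reduces to a quadratic output-error sum, J(theta) = sum over k of (z_k - y_k(theta))^2. A minimal sketch, with a one-parameter exponential response standing in for an aircraft stability-derivative model and a golden-section line search as the minimizer (both illustrative choices):

```python
import math

# Sketch: one-parameter output-error estimation. The model response
# y_k = exp(-theta * t_k) and the golden-section search are illustrative.

def cost(theta, times, z):
    """Quadratic output-error cost J(theta)."""
    return sum((zk - math.exp(-theta * t)) ** 2 for t, zk in zip(times, z))

def golden_min(f, lo, hi, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [lo, hi]."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

times = [0.0, 0.5, 1.0, 1.5, 2.0]
z = [math.exp(-0.8 * t) for t in times]        # noise-free data, theta = 0.8
theta_hat = golden_min(lambda th: cost(th, times, z), 0.0, 5.0)
print(round(theta_hat, 4))  # 0.8
```

With noise-free data the estimator recovers the true parameter exactly; with noisy flight data the same cost surface is what the figures in such papers depict, and multi-parameter problems replace the line search with a Newton-type minimizer.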

  8. Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mainemer, C. I.

    1978-01-01

    The design of an optimal dynamic compensator for a multivariable discrete time system is studied. Also the design of compensators to achieve minimum variance control strategies for single input single output systems is analyzed. In the first problem the initial conditions of the plant are random variables with known first and second order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum order Luenberger observer and it is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways; two of them working directly in the frequency domain and one working in the time domain. The first and second order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.

  9. Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which accounts for two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market using stochastic optimization. Historical data on day-ahead hourly electric power consumption are used to provide forecast results with a forecasting error, which is represented by a chance constraint and reformulated into a deterministic form with a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.

  10. Effectiveness and Cost-effectiveness of Opportunistic Screening and Stepped-care Interventions for Older Alcohol Users in Primary Care.

    PubMed

    Coulton, Simon; Bland, Martin; Crosby, Helen; Dale, Veronica; Drummond, Colin; Godfrey, Christine; Kaner, Eileen; Sweetman, Jennifer; McGovern, Ruth; Newbury-Birch, Dorothy; Parrott, Steve; Tober, Gillian; Watson, Judith; Wu, Qi

    2017-11-01

    To compare the clinical effectiveness and cost-effectiveness of a stepped-care intervention versus a minimal intervention for the treatment of older hazardous alcohol users in primary care. Multi-centre, pragmatic RCT, set in primary care in the UK. Patients aged ≥ 55 years scoring ≥ 8 on the Alcohol Use Disorders Identification Test were allocated either to 5 min of brief advice or to 'Stepped Care': an initial 20 min of behavioural change counselling, with Step 2 being three sessions of Motivational Enhancement Therapy and Step 3 referral to local alcohol services (progression between Steps being determined by outcomes 1 month after each Step). Outcome measures included average drinks per day, AUDIT-C, alcohol-related problems using the Drinking Problems Index, health-related quality of life using the Short Form 12, costs measured from an NHS/Personal Social Care perspective, and estimated health gains in quality-adjusted life-years assessed using the EQ-5D. Both groups reduced alcohol consumption at 12 months but the difference between groups was small and not significant. No significant differences were observed between the groups on secondary outcomes. In economic terms stepped care was less costly and more effective than the minimal intervention. Stepped care does not confer an advantage over a minimal intervention in terms of reduction in alcohol use for older hazardous alcohol users in primary care. However, stepped care has a greater probability of being more cost-effective. Current controlled trials ISRCTN52557360. A stepped care approach was compared with brief intervention for older at-risk drinkers attending primary care. While consumption reduced in both groups over 12 months, there was no significant difference between the groups. An economic analysis indicated that stepped care had a greater probability of being more cost-effective than brief intervention. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.

  11. POWERING AIRPOWER: IS THE AIR FORCE'S ENERGY SECURE

    DTIC Science & Technology

    2016-02-01

    needs. More on-site renewable energy generation increases AF readiness in crisis times by minimizing the AF’s dependency on fossil fuels. Financing...reducing the need for traditional fossil fuels, and the high investment cost of onsite renewable energy sources is still a serious roadblock in this...help installations better plan holistically. This research will take the form of problem/solution framework. With any complex problem, rarely does a

  12. Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment

    PubMed Central

    Wan, Long

    2014-01-01

    We investigate a common due-date assignment scheduling problem with a variable maintenance on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties on an optimal solution for our problem. For a special case with identical jobs we propose an optimal polynomial time algorithm followed by a numerical example. PMID:25147861

  13. Dynamic Network Formation Using Ant Colony Optimization

    DTIC Science & Technology

    2009-03-01

    backhauls, VRP with pick-up and delivery, VRP with satellite facilities, and VRP with time windows (Murata & Itai, 2005). The general vehicle...given route is only visited once. The objective of the basic problem is to minimize a total cost as follows (Murata & Itai, 2005): min ∑_{m=1}^{M} c_m ...Problem based on Ant Colony System. Second International Workshop on Freight Transportation and Logistics. Palermo, Italy. Murata, T., & Itai, R. (2005

  14. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.

  15. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective for a given life cycle time. For the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
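
    The record above does not give details of the improved PSO, so as a generic illustration only, a minimal standard PSO for continuous cost minimization might look like the following sketch (the sphere cost function, bounds, and all parameter values are assumptions for demonstration, not taken from the paper):

    ```python
    import random

    def pso(cost, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer for a continuous cost function."""
        lo, hi = bounds
        X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        V = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in X]                 # per-particle best positions
        pbest_cost = [cost(x) for x in X]
        g = min(range(n_particles), key=lambda i: pbest_cost[i])
        gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # swarm-wide best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # inertia + cognitive pull + social pull
                    V[i][d] = (w * V[i][d]
                               + c1 * r1 * (pbest[i][d] - X[i][d])
                               + c2 * r2 * (gbest[d] - X[i][d]))
                    X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
                c = cost(X[i])
                if c < pbest_cost[i]:
                    pbest[i], pbest_cost[i] = X[i][:], c
                    if c < gbest_cost:
                        gbest, gbest_cost = X[i][:], c
        return gbest, gbest_cost

    # Example: minimize the sphere function in 3 dimensions
    best, best_cost = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
    ```

    A real HSP objective would replace the sphere function with the LCC model described in the paper.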

  16. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem of dengue fever transmission. We classify the model into human and vector (mosquito) population classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler, susceptible, and infected vector classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we design two optimal control variables in the model: fogging and vaccination. The objective of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We consider vaccination as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We use Pontryagin's Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the epidemic of dengue fever.

  17. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Therefore, reverse logistics is gaining power and great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP) while minimizing the total cost, which involves the reverse logistics shipping cost and the fixed cost of opening disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. For solving this problem, we propose a Genetic Algorithm (GA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to the manufacturer. Finally, numerical experiments with various scales of m-rLNP models demonstrate the effectiveness and efficiency of our approach by comparison with recent research.

  18. Cellular Manufacturing System with Dynamic Lot Size Material Handling

    NASA Astrophysics Data System (ADS)

    Khannan, M. S. A.; Maruf, A.; Wangsaputra, R.; Sutrisno, S.; Wibawa, T.

    2016-02-01

    Material handling takes an important role in Cellular Manufacturing System (CMS) design. In several studies of CMS design, material handling was assumed per piece or with a constant lot size. In real industrial practice, the lot size may change during a rolling period to cope with demand changes. This study develops a CMS model with dynamic lot size material handling. Integer Linear Programming is used to solve the problem. The objective function of this model minimizes the total expected cost, consisting of machinery depreciation cost, operating costs, inter-cell material handling cost, intra-cell material handling cost, machine relocation costs, setup costs, and production planning cost. The model determines the optimum cell formation and optimum lot size. Numerical examples are elaborated in the paper to illustrate the characteristics of the model.

  19. Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines

    PubMed Central

    Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing

    2014-01-01

    m unrelated parallel machine scheduling problems with variable job processing times are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that depends on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial time algorithm to solve the problem. PMID:24982933

  20. Site Partitioning for Redundant Arrays of Distributed Disks

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.

    1996-01-01

    Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.

  1. CAD of control systems: Application of nonlinear programming to a linear quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.

  2. Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1989-01-01

    Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. 
The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.

  3. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    NASA Astrophysics Data System (ADS)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time should be minimized, in another version of the problem, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small, and therefore, they are not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem in energy efficient processors scheduling is considered.

  4. On the Run-Time Optimization of the Boolean Logic of a Program.

    ERIC Educational Resources Information Center

    Cadolino, C.; Guazzo, M.

    1982-01-01

    Considers problem of optimal scheduling of Boolean expression (each Boolean variable represents binary outcome of program module) on single-processor system. Optimization discussed consists of finding operand arrangement that minimizes average execution costs representing consumption of resources (elapsed time, main memory, number of…

  5. Macrame and Tin Cans

    ERIC Educational Resources Information Center

    Johnson, James K.

    1978-01-01

    If your school district is like most school districts, money for supplies is an increasing problem. Art teachers are continually trying to develop meaningful art projects for which the cost is minimal. Here students learn to use discarded, large number 10 cans as planters for a macrame project. (Author/RK)

  6. Application of the artificial bee colony algorithm for solving the set covering problem.

    PubMed

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem.
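
    The record above states the objective of the set covering problem (choose a subset of columns of minimal cost that covers every row) without showing how a solution is built. As a simple illustration of that objective, a classic greedy cost-per-coverage heuristic, not the artificial bee colony method of the paper, can be sketched as follows (the example instance is made up):

    ```python
    def greedy_set_cover(universe, subsets, costs):
        """Greedy heuristic: repeatedly pick the column with the lowest
        cost per newly covered row until every row is covered."""
        uncovered = set(universe)
        chosen, total = [], 0.0
        while uncovered:
            # candidate columns are those still covering at least one row
            j = min((j for j in range(len(subsets)) if subsets[j] & uncovered),
                    key=lambda j: costs[j] / len(subsets[j] & uncovered))
            chosen.append(j)
            total += costs[j]
            uncovered -= subsets[j]
        return chosen, total

    # Hypothetical non-unicost instance: 5 rows, 4 columns with differing costs
    cols = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]
    chosen, total = greedy_set_cover({1, 2, 3, 4, 5}, cols, [5, 10, 3, 1])
    # → chosen == [2, 0], total cost 8.0
    ```

    Metaheuristics such as the artificial bee colony explore beyond this greedy construction to escape its locally optimal choices.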


  8. Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines

    NASA Astrophysics Data System (ADS)

    Govindaraju, Parithi

    A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this process describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines, recognizing that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. 
The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.

  9. Optimality Principles for Model-Based Prediction of Human Gait

    PubMed Central

    Ackermann, Marko; van den Bogert, Antonie J.

    2010-01-01

    Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient’s gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait. PMID:20074736

  10. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In a logistics system, transportation plays an important role in connecting every element of the supply chain, but it can also produce the greatest cost. Therefore, it is important to make transportation costs as low as possible. Reducing the transportation cost can be done in several ways; one of them is optimizing the routing of vehicles, which refers to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, each vehicle has its own capacity, and the total demand of the customers on a route must not exceed the capacity of the vehicle. CVRP belongs to the class of NP-hard problems, which makes it complex to solve: exact algorithms become highly time-consuming as problem sizes increase. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches to solve CVRP: Genetic Algorithm and Particle Swarm Optimization. This paper compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still need to be improved. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
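
    Whatever metaheuristic is used, both GA and PSO ultimately score candidate solutions with the same CVRP objective: total travel distance, subject to the vehicle capacity constraint. A minimal evaluation function for that objective might be sketched as follows (the coordinates, demands, and capacity are invented for illustration; the papers' actual instances are not reproduced here):

    ```python
    import math

    def route_cost(routes, coords, demands, capacity, depot=0):
        """Total Euclidean distance of a CVRP solution; each route starts and
        ends at the depot, and must not exceed the vehicle capacity."""
        def dist(a, b):
            (x1, y1), (x2, y2) = coords[a], coords[b]
            return math.hypot(x1 - x2, y1 - y2)

        total = 0.0
        for route in routes:
            if sum(demands[c] for c in route) > capacity:
                raise ValueError(f"route {route} exceeds capacity {capacity}")
            stops = [depot] + route + [depot]
            total += sum(dist(stops[i], stops[i + 1]) for i in range(len(stops) - 1))
        return total

    # Hypothetical instance: depot at origin, three customers on a grid
    coords = {0: (0, 0), 1: (0, 3), 2: (4, 0), 3: (4, 3)}
    demands = {1: 2, 2: 2, 3: 3}
    cost = route_cost([[1, 3], [2]], coords, demands, capacity=5)
    # → 20.0 (route 0-1-3-0 gives 3+4+5=12, route 0-2-0 gives 4+4=8)
    ```

    A GA or PSO would then search over the space of feasible `routes` partitions and orderings to minimize this value.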

  11. Terahertz NDE for Under Paint Corrosion Detection and Evaluation

    NASA Technical Reports Server (NTRS)

    Anastasi, Robert F.; Madaras, Eric I.

    2005-01-01

    Corrosion under paint is not visible until it has caused the paint to blister, crack, or chip. If corrosion is allowed to continue, structural problems may develop. Identifying corrosion before it becomes visible would minimize repairs, costs, and potential structural problems. Terahertz NDE imaging is being examined as a method to inspect for corrosion under paint by measuring the terahertz response to paint thickness and to surface roughness.

  12. Improved minimum cost and maximum power two stage genome-wide association study designs.

    PubMed

    Stanhope, Stephen A; Skol, Andrew D

    2012-01-01

    In a two stage genome-wide association study (2S-GWAS), a sample of cases and controls is allocated into two groups, and genetic markers are analyzed sequentially with respect to these groups. For such studies, experimental design considerations have primarily focused on minimizing study cost as a function of the allocation of cases and controls to stages, subject to a constraint on the power to detect an associated marker. However, most treatments of this problem implicitly restrict the set of feasible designs to only those that allocate the same proportions of cases and controls to each stage. In this paper, we demonstrate that removing this restriction can improve the cost advantages demonstrated by previous 2S-GWAS designs by up to 40%. Additionally, we consider designs that maximize study power with respect to a cost constraint, and show that recalculated power maximizing designs can recover a substantial amount of the planned study power that might otherwise be lost if study funding is reduced. We provide open source software for calculating cost minimizing or power maximizing 2S-GWAS designs.

  13. Advisory Algorithm for Scheduling Open Sectors, Operating Positions, and Workstations

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Drew, Michael; Lai, Chok Fung; Bilimoria, Karl D.

    2012-01-01

    Air traffic controller supervisors configure available sector, operating position, and work-station resources to safely and efficiently control air traffic in a region of airspace. In this paper, an algorithm for assisting supervisors with this task is described and demonstrated on two sample problem instances. The algorithm produces configuration schedule advisories that minimize a cost. The cost is a weighted sum of two competing costs: one penalizing mismatches between configurations and predicted air traffic demand and another penalizing the effort associated with changing configurations. The problem considered by the algorithm is a shortest path problem that is solved with a dynamic programming value iteration algorithm. The cost function contains numerous parameters. Default values for most of these are suggested based on descriptions of air traffic control procedures and subject-matter expert feedback. The parameter determining the relative importance of the two competing costs is tuned by comparing historical configurations with corresponding algorithm advisories. Two sample problem instances for which appropriate configuration advisories are obvious were designed to illustrate characteristics of the algorithm. Results demonstrate how the algorithm suggests advisories that appropriately utilize changes in airspace configurations and changes in the number of operating positions allocated to each open sector. The results also demonstrate how the advisories suggest appropriate times for configuration changes.
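
    The cost structure described above, a per-step demand-mismatch penalty plus a penalty for changing configurations, is a textbook stage-graph shortest path. A simplified sketch of such a dynamic program follows (a constant change cost and a tiny two-configuration, three-hour example are assumptions for illustration; the paper's actual cost function and parameters are richer):

    ```python
    def min_cost_schedule(configs, horizon, mismatch, change_cost, start):
        """Dynamic program over a stage graph: pick one configuration per time
        step, paying a demand-mismatch cost at each step plus a fixed cost
        whenever the configuration changes between steps."""
        # best[c] = minimal cost of steps 0..t with configuration c at step t
        best = {c: mismatch(0, c) + (change_cost if c != start else 0.0)
                for c in configs}
        parents = []
        for t in range(1, horizon):
            new, links = {}, {}
            for c in configs:
                prev = min(configs,
                           key=lambda p: best[p] + (change_cost if p != c else 0.0))
                links[c] = prev
                new[c] = (best[prev] + (change_cost if prev != c else 0.0)
                          + mismatch(t, c))
            parents.append(links)
            best = new
        # backtrack the cheapest configuration sequence
        end = min(configs, key=lambda c: best[c])
        path = [end]
        for links in reversed(parents):
            path.append(links[path[-1]])
        path.reverse()
        return path, best[end]

    # Illustrative 3-hour horizon, two configurations, made-up mismatch costs
    table = {"A": [0.0, 4.0, 4.0], "B": [5.0, 1.0, 1.0]}
    path, total_cost = min_cost_schedule(["A", "B"], horizon=3,
                                         mismatch=lambda t, c: table[c][t],
                                         change_cost=2.0, start="A")
    # → path == ["A", "B", "B"], total_cost == 4.0
    ```

    Paying the change cost once to switch from "A" to "B" after hour 0 beats either constant configuration, which mirrors the trade-off the advisory algorithm balances.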

  14. Minimization for conditional simulation: Relationship to optimal transport

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.

  15. Harmony search optimization algorithm for a novel transportation problem in a consolidation network

    NASA Astrophysics Data System (ADS)

    Davod Hosseini, Seyed; Akbarpour Shirazi, Mohsen; Taghi Fatemi Ghomi, Seyed Mohammad

    2014-11-01

    This article presents a new harmony search optimization algorithm to solve a novel integer programming model developed for a consolidation network. In this network, a set of vehicles is used to transport goods from suppliers to their corresponding customers via two transportation systems: direct shipment and milk run logistics. The objective of this problem is to minimize the total shipping cost in the network, so it tries to reduce the number of required vehicles using an efficient vehicle routing strategy in the solution approach. Solving several numerical examples confirms that the proposed solution approach based on the harmony search algorithm performs much better than CPLEX in reducing both the shipping cost in the network and computational time requirement, especially for realistic size problem instances.
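
    The record above does not detail the authors' harmony search variant, so as a generic illustration only, the core mechanics of harmony search (memory consideration, pitch adjustment, random selection) can be sketched for a continuous toy objective; all parameter values and the sphere cost function here are assumptions, not taken from the article:

    ```python
    import random

    def harmony_search(cost, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                       bw=0.1, iters=500):
        """Minimal harmony search: each component of a new solution is drawn
        from the harmony memory (prob. hmcr) and possibly pitch-adjusted
        (prob. par), otherwise drawn at random; the new solution replaces
        the worst memory entry if it is better."""
        lo, hi = bounds
        memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        scores = [cost(h) for h in memory]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if random.random() < hmcr:
                    v = random.choice(memory)[d]          # memory consideration
                    if random.random() < par:
                        v += random.uniform(-bw, bw)      # pitch adjustment
                else:
                    v = random.uniform(lo, hi)            # random selection
                new.append(min(max(v, lo), hi))
            c = cost(new)
            worst = max(range(hms), key=lambda i: scores[i])
            if c < scores[worst]:
                memory[worst], scores[worst] = new, c
        best = min(range(hms), key=lambda i: scores[i])
        return memory[best], scores[best]

    # Example: minimize the sphere function in 2 dimensions
    best, best_cost = harmony_search(lambda x: sum(v * v for v in x),
                                     dim=2, bounds=(-10, 10))
    ```

    The article's integer programming model would replace the continuous toy objective with the network shipping cost and vehicle routing constraints.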

  16. Social Return On Investment (SROI): Problems, solutions … and is SROI a good investment?

    PubMed

    Yates, Brian T; Marra, Mita

    2017-10-01

    The conclusion of this special issue on Social Return On Investment (SROI) begins with a summary of both advantages and problems of SROI, many of which were identified in preceding articles. We also offer potential solutions for some of these problems that can be derived from standard evaluation practices and that are becoming expected in SROIs that follow guidance from international SROI networks. A remaining concern about SROI is that we do not yet know if SROI itself adds sufficient benefit to programs to justify its cost. Two frameworks for this proposed metaevaluation of SROI are suggested, the first comparing benefits to costs summatively (the resource→outcome model). The second framework evaluates costs and benefits according to how much they contribute to or are caused by the different activities of SROI. This resource→activity→outcome model could enable outcomes of SROI to be maximized within resource constraints (such as budget and time limits) on SROI. Alternatively, information from this model could help minimize the costs of achieving a specific level of return on investment from conducting SROI. Possible problems with this metaevaluation of SROI are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Fuzzy Multi-Objective Vendor Selection Problem with Modified S-CURVE Membership Function

    NASA Astrophysics Data System (ADS)

    Díaz-Madroñero, Manuel; Peidro, David; Vasant, Pandian

    2010-06-01

    In this paper, the S-curve membership function methodology is used in a vendor selection (VS) problem. An interactive method for solving multi-objective VS problems with fuzzy goals is developed. The proposed method attempts to simultaneously minimize the total order costs, the number of rejected items and the number of late delivered items, subject to several constraints such as meeting buyers' demand, vendors' capacity, vendors' quota flexibility, vendors' allocated budget, etc. In an industrial case, we compare the performance of S-curve membership functions, which represent uncertain goals and constraints in VS problems, with that of linear membership functions.
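    One common parameterization of the modified S-curve membership function (Vasant's form; the parameter values B, C, alpha below are this standard choice, assumed here rather than taken from the paper) maps a fuzzy goal normalized to [0, 1] onto a membership grade that decays smoothly from about 0.999 to about 0.001:

```python
import math

# Modified S-curve membership, one common parameterization:
# B = 1, C = 0.001001001, alpha = 13.813 are chosen so that
# mu(0) ~ 0.999 and mu(1) ~ 0.001 (assumed values, not the paper's).
B, C, ALPHA = 1.0, 0.001001001, 13.813

def s_curve(x):
    """Membership grade of a normalized fuzzy goal value x in [0, 1]."""
    return B / (1.0 + C * math.exp(ALPHA * x))
```

    Because the function is strictly decreasing, a lower normalized cost (or rejection count) always receives a higher membership grade, which is what the interactive method trades off across objectives.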

  18. Make or buy analysis model based on tolerance allocation to minimize manufacturing cost and fuzzy quality loss

    NASA Astrophysics Data System (ADS)

    Rosyidi, C. N.; Puspitoingrum, W.; Jauhari, W. A.; Suhardi, B.; Hamada, K.

    2016-02-01

    The specification of tolerances has a significant impact on product quality and final production cost. A company should pay careful attention to component and product tolerances so that it can produce a good-quality product at the lowest cost. Tolerance allocation has been widely used to solve the problem of selecting a particular process or supplier. Before entering the selection process, however, the company must first analyse whether each component should be made in house (make), purchased from a supplier (buy), or sourced through a combination of both. This paper discusses an optimization model of process and supplier selection that minimizes the manufacturing cost and the fuzzy quality loss. The model can also be used to determine the allocation of components to the selected processes or suppliers. Tolerance, process capability and production capacity are three important constraints that affect the decision. A fuzzy quality loss function is used to describe the semantics of quality, in which the product quality level is divided into several grades. The implementation of the proposed model is demonstrated by solving a numerical example involving a simple assembly product that consists of three components. A metaheuristic approach was implemented in the OptQuest software from Oracle Crystal Ball to obtain the optimal solution of the numerical example.

  19. Cost effective campaigning in social networks

    NASA Astrophysics Data System (ADS)

    Kotnis, Bhushan; Kuri, Joy

    2016-05-01

    Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize the idea vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals for ensuring the highest possible outreach size. We aim to do the same by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals that are provided incentives for minimizing the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized for maximizing the outreach size for a given cost budget. The optimization problem turns out to be nontrivial; it involves quantities that need to be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved via a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.
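    To illustrate the flavor of the reduced linear program (the costs, outreach weights and target below are made-up illustrative numbers, not the paper's percolation quantities): with a single outreach constraint and box bounds on the fraction of each degree class to incentivize, the LP collapses to a fractional-knapsack greedy on the outreach-per-cost ratio.

```python
def min_cost_allocation(costs, weights, target):
    """Pick fractions x_k in [0, 1] minimizing sum(c_k * x_k)
    subject to sum(w_k * x_k) >= target.  With one coupling constraint
    this LP reduces to a fractional-knapsack greedy on w_k / c_k."""
    order = sorted(range(len(costs)),
                   key=lambda k: weights[k] / costs[k], reverse=True)
    x = [0.0] * len(costs)
    need = target
    for k in order:
        if need <= 0:
            break
        x[k] = min(1.0, need / weights[k])     # take as much of the best class
        need -= weights[k] * x[k]
    if need > 1e-12:
        raise ValueError("target outreach not attainable")
    return x

# Illustrative data: per-individual cost and outreach weight by degree class.
x = min_cost_allocation([1.0, 2.5, 4.0], [0.2, 0.6, 1.1], target=0.5)
```

    The actual formulation in the paper has weights derived from a fixed-point equation of percolation theory; the greedy above only conveys why a general LP solver is sufficient once those weights are computed.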

  20. Water and wastewater minimization plan in food industries.

    PubMed

    Ganjidoust, H; Ayati, B

    2002-01-01

    Iran is one of the countries located in a dry and semi-dry area. Many provinces like Tehran are facing problems in recent years because of less precipitation. For reduction in wastewater treatment cost and water consumption, many research works have been carried out. One of them concerns food industries group, which consumes a great amount of water in different units. For example, in beverage industries, washing of glass bottles seven times requires large amounts of water but use of plastic bottles can reduce water consumption. Another problem is leakage from pipelines, valves, etc. Their repair plays an important role in the wastage of water. The non-polluted wasted water can be used in washing halls, watering green yards, recycling to the process or reusing in cooling towers. In this paper, after a short review of waste minimization plans in food industries, problems concerning water consuming and wastewater producing units in three Iranian food industries have been investigated. At the end, some suggestions have been given for implementing the water and wastewater minimization plan in the companies.

  1. Essays on wholesale auctions in deregulated electricity markets

    NASA Astrophysics Data System (ADS)

    Baltaduonis, Rimvydas

    2007-12-01

    The early experience in the restructured electric power markets raised several issues, including price spikes, inefficiency, security, and the overall relationship of market clearing prices to generation costs. Unsatisfactory outcomes in these markets are thought to have resulted in part from strategic generator behaviors encouraged by inappropriate market design features. In this dissertation, I examine the performance of three auction mechanisms for wholesale power markets - Offer Cost Minimization auction, Payment Cost Minimization auction and Simple-Offer auction - when electricity suppliers act strategically. A Payment Cost Minimization auction has been proposed as an alternative to the traditional Offer Cost Minimization auction with the intention to solve the problem of inflated wholesale electricity prices. Efficiency concerns for this proposal were voiced due to insights predicated on the assumption of true production cost revelation. Using a game theoretic approach and an experimental method, I compare the two auctions, strictly controlling for the level of unilateral market power. A specific feature of these complex-offer auctions is that the sellers submit not only the quantities and the minimum prices that they are willing to sell at, but also the start-up fees, which are designed to reimburse the fixed start-up costs of the generation plants. I find that the complex structure of the offers leaves considerable room for strategic behavior, which consequently leads to anti-competitive and inefficient market outcomes. In the last chapter of my dissertation, I use laboratory experiments to contrast the performance of two complex-offer auctions against the performance of a simple-offer auction, in which the sellers have to recover all their generation costs - fixed and variable - through a uniform market-clearing price. I find that a simple-offer auction significantly reduces consumer prices and lowers price volatility. 
It mitigates anti-competitive effects that are present in the complex-offer auctions and achieves allocative efficiency more quickly.

  2. Stochastic optimal control as non-equilibrium statistical mechanics: calculus of variations over density and current

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Chertkov, Michael; Bierkens, Joris; Kappen, Hilbert J.

    2014-01-01

    In stochastic optimal control (SOC) one minimizes the average cost-to-go, which consists of the cost-of-control (amount of effort), the cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We derive the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest a hydrodynamic interpretation and discuss examples, e.g., ergodic control of a particle-within-a-circle, illustrating non-equilibrium space-time complexity.
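    The average cost-to-go that the abstract decomposes can be written, in one common SOC convention (notation assumed here, not necessarily the authors'), as:

```latex
J(x,t) \;=\; \min_{u}\; \mathbb{E}\!\left[ \int_t^T \Big(
  \underbrace{\tfrac{1}{2}\, u_s^{\top} R\, u_s}_{\text{cost-of-control}}
  + \underbrace{V(x_s)}_{\text{cost-of-space}} \Big)\, ds
  \;+\; \underbrace{\Phi(x_T)}_{\text{target cost}} \right]
```

    where the expectation is over the controlled Langevin dynamics. The paper's extension adds a cost-of-dynamics term coupling a vector potential to the probability current, which is what makes the resulting Hamilton-Jacobi-Bellman equation gauge invariant.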

  3. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Afshar Nadjafi, Behrouz; Shadrokh, Shahram

    This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch-and-bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch-and-bound tree. Finally, some test problems are solved and computational results are reported.
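    The objective being minimized can be made concrete with a small evaluator (a serial single-sequence sketch with illustrative data; the paper's problem additionally has precedence relations and a project deadline, which are omitted here):

```python
def weighted_et_cost(sequence, durations, due, e_w, t_w):
    """Weighted earliness-tardiness cost of a serial schedule:
    jobs run back-to-back from time 0; job j finishing at C_j incurs
    e_w[j]*max(due[j]-C_j, 0) + t_w[j]*max(C_j-due[j], 0)."""
    t, total = 0, 0
    for j in sequence:
        t += durations[j]                       # completion time C_j
        total += e_w[j] * max(due[j] - t, 0)    # earliness penalty
        total += t_w[j] * max(t - due[j], 0)    # tardiness penalty
    return total
```

    A branch-and-bound search would evaluate this cost (plus a lower bound on the unscheduled activities) at every node of the tree.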

  4. Due-Window Assignment Scheduling with Variable Job Processing Times

    PubMed Central

    Wu, Yu-Bin

    2015-01-01

    We consider a common due-window assignment scheduling problem for jobs with variable processing times on a single machine, where the processing time of a job is a function of its position in a sequence (i.e., a learning effect) or its starting time (i.e., a deteriorating effect). The problem is to determine the optimal due-windows and the processing sequence simultaneously so as to minimize a cost function that includes earliness, tardiness, the window location, the window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID:25918745

  5. Single machine scheduling with slack due dates assignment

    NASA Astrophysics Data System (ADS)

    Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin

    2017-04-01

    This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e. all jobs have slack due dates). The goal is to find a sequence of jobs, together with a due date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness value and the common flow allowance cost, where the weight is a position-dependent weight. To solve this problem, an ? time algorithm is proposed. Some extensions of the problem are also shown.

  6. Mixed-Strategy Chance Constrained Optimal Control

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.

    2013-01-01

    This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
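    The paper's result that mixing at most two deterministic actions suffices can be illustrated with a two-action toy (the numbers are illustrative, not from the paper): mix a safe-but-expensive action with a cheap-but-risky one so the blended violation probability meets the chance constraint at minimum expected cost.

```python
def best_mix(cost_a, risk_a, cost_b, risk_b, delta):
    """Mix two deterministic actions: a (safe, expensive) and b (cheap,
    risky) so the mixed violation probability is <= delta, minimizing
    expected cost.  Illustrates that two actions suffice in this toy."""
    if risk_b <= delta:                    # cheap action alone is feasible
        return 0.0, cost_b
    if risk_a > delta:
        raise ValueError("no feasible mixture")
    # Smallest weight on the safe action that meets the risk budget:
    # p_a*risk_a + (1-p_a)*risk_b = delta  (with risk_a < delta < risk_b).
    p_a = (risk_b - delta) / (risk_b - risk_a)
    return p_a, p_a * cost_a + (1 - p_a) * cost_b
```

    Geometrically, the mixture traces the line segment between the two actions in (risk, cost) space; the chance constraint picks the point on that segment at risk level delta, which is exactly the convexification the abstract describes.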

  7. An inverse model for a free-boundary problem with a contact line: Steady case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, Oleg; Protas, Bartosz

    2009-07-20

    This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.

  8. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
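    The "seemingly minor change" the abstract refers to can be stated compactly: DIB replaces the compression term I(X;T) of the IB Lagrangian with the entropy H(T) of the representation (notation as in the IB literature; β is the tradeoff parameter):

```latex
\mathcal{L}_{\mathrm{IB}}\big[p(t\mid x)\big] = I(X;T) - \beta\, I(T;Y)
\qquad\longrightarrow\qquad
\mathcal{L}_{\mathrm{DIB}}\big[p(t\mid x)\big] = H(T) - \beta\, I(T;Y)
```

    Penalizing H(T) charges for every bit the representation stores, not only the bits it shares with the input, and it is this change that drives the optimal encoder from stochastic to deterministic.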

  9. Cost analysis of adjustments of the epidemiological surveillance system to mass gatherings.

    PubMed

    Zieliński, Andrzej

    2011-01-01

    The article deals with the problem of economical analysis of public health activities at mass gatherings. After presentation of elementary review of basic economical approaches to cost analysis author tries to analyze applicability of those methods to planning of mass gatherings. Difficulties in comparability of different events and lack of the outcome data at the stage of planning make most of the economic approaches unsuitable to application at the planning stage. Even applicability of cost minimization analysis may be limited to comparison of predicted costs of preconceived standards of epidemiological surveillance. Cost effectiveness performed ex post after the event when both costs and obtained effects are known, may bring more information for future selection of most effective procedures.

  10. A comparison of costs associated with utility management options for dry active waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornibrook, C.

    1995-12-31

    The economics of low level waste management is receiving more attention today than ever before. This is due to four factors: (1) increases in the cost of processing these wastes; (2) increases in the cost of disposal; (3) the addition of storage costs for those without access to disposal; and (4) the increasingly competitive nature of the electric generation industry. These pressures are forcing the industry to update its evaluation of the mix of processing that will afford it the best long-term economics and minimize its risks of unforeseen costs. Whether disposal is available or not, all utilities face the same challenge of minimizing the costs associated with the management of these wastes. A number of variables affect how a utility manages its wastes, but the problem is the uncertainty of what will actually happen, i.e., will disposal be available, when, and at what cost. Using the EPRI-developed WASTECOST: DAW code, this paper explores a variety of LLW management options available to utilities. Along with the costs and benefits, other technical considerations that play an important part in the management of these wastes are also addressed.

  11. Ride comfort control in large flexible aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Warren, M. E.

    1971-01-01

    The problem of ameliorating the discomfort of passengers on a large air transport subject to flight disturbances is examined. The longitudinal dynamics of the aircraft, including effects of body flexing, are developed in terms of linear, constant coefficient differential equations in state variables. A cost functional, penalizing the rigid body displacements and flexure accelerations over the surface of the aircraft is formulated as a quadratic form. The resulting control problem, to minimize the cost subject to the state equation constraints, is of a class whose solutions are well known. The feedback gains for the optimal controller are calculated digitally, and the resulting autopilot is simulated on an analog computer and its performance evaluated.

  12. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviation during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure based on a constant flight path angle to increase the predictability of the vertical profile and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques under an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems, which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solution to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
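    Dual decomposition of a coupled cost can be sketched with two agents and one shared constraint (the quadratic costs and the constant below are illustrative stand-ins, not the paper's time and separation costs): each agent solves its own subproblem at the current price, and a subgradient step on the price coordinates them toward feasibility.

```python
def solve(c=4.0, lr=0.2, iters=200):
    """Dual decomposition toy: minimize x1**2 + (x2 - 1)**2
    subject to x1 + x2 = c, via price (Lagrange multiplier) updates.
    Each 'aircraft' solves its subproblem in closed form."""
    lam = 0.0
    for _ in range(iters):
        x1 = -lam / 2.0                 # argmin_x1  x1**2 + lam*x1
        x2 = 1.0 - lam / 2.0            # argmin_x2  (x2-1)**2 + lam*x2
        lam += lr * (x1 + x2 - c)       # dual subgradient step on lam
    return x1, x2

x1, x2 = solve()
```

    In the paper, the analogue of the price update is the iterative exchange of solutions over ADS-B, so no central node ever needs the full problem.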

  13. Online learning in optical tomography: a stochastic approach

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Li, Qin; Liu, Jian-Guo

    2018-07-01

    We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming–outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, in which the mismatch between computed and measured outgoing data is minimized with the same initial data and the RTE as constraint. The memory and computation cost this requires, however, is typically prohibitive, especially in high dimensional spaces. Smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances fast, and therefore serves the purpose well. In this paper we formulate the problem, in both the nonlinear and the linearized setting, apply the SGD algorithm and analyze its convergence performance.
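    The SGD idea the abstract describes, in miniature (the linear forward model y = theta * x below is an illustrative stand-in for the much harder RTE forward map): at each step one randomly chosen incoming-outgoing pair is sampled and the parameter is moved down the gradient of that single pair's squared mismatch.

```python
import random

def sgd_recover(pairs, theta0=0.0, lr=0.1, steps=500, seed=0):
    """Recover theta in the toy forward model y = theta * x by SGD
    on one randomly selected measurement pair per step."""
    rng = random.Random(seed)
    theta = theta0
    for _ in range(steps):
        x, y = rng.choice(pairs)        # one randomly selected datum
        residual = theta * x - y        # mismatch for this pair
        theta -= lr * residual * x      # gradient of 0.5 * residual**2
    return theta

# Noiseless synthetic data with true parameter 3.0 (illustrative).
pairs = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
theta_hat = sgd_recover(pairs)
```

    Only one pair is touched per step, which is the memory and computation saving the abstract emphasizes relative to full-gradient PDE-constrained solvers.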

  14. Periodic Application of Stochastic Cost Optimization Methodology to Achieve Remediation Objectives with Minimized Life Cycle Cost

    NASA Astrophysics Data System (ADS)

    Kim, U.; Parker, J.

    2016-12-01

    Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain and in many cases, the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty based on calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance using the current remediation plan is assessed and reoptimization of operational variables for the current system or consideration of alternative designs are considered depending on the assessment results. We compare remediation duration and cost for the stepwise re-optimization approach with single stage optimization as well as with a non-optimized design based on typical engineering practice.

  15. Optimal speeds for walking and running, and walking on a moving walkway.

    PubMed

    Srinivasan, Manoj

    2009-06-01

    Many aspects of steady human locomotion are thought to be constrained by a tendency to minimize the expenditure of metabolic cost. This paper has three parts related to the theme of energetic optimality: (1) a brief review of energetic optimality in legged locomotion, (2) an examination of the notion of optimal locomotion speed, and (3) an analysis of walking on moving walkways, such as those found in some airports. First, I describe two possible connotations of the term "optimal locomotion speed:" that which minimizes the total metabolic cost per unit distance and that which minimizes the net cost per unit distance (total minus resting cost). Minimizing the total cost per distance gives the maximum range speed and is a much better predictor of the speeds at which people and horses prefer to walk naturally. Minimizing the net cost per distance is equivalent to minimizing the total daily energy intake given an idealized modern lifestyle that requires one to walk a given distance every day--but it is not a good predictor of animals' walking speeds. Next, I critique the notion that there is no energy-optimal speed for running, making use of some recent experiments and a review of past literature. Finally, I consider the problem of predicting the speeds at which people walk on moving walkways--such as those found in some airports. I present two substantially different theories to make predictions. The first theory, minimizing total energy per distance, predicts that for a range of low walkway speeds, the optimal absolute speed of travel will be greater--but the speed relative to the walkway smaller--than the optimal walking speed on stationary ground. At higher walkway speeds, this theory predicts that the person will stand still. 
The second theory is based on the assumption that the human optimally reconciles the sensory conflict between the forward speed that the eye sees and the walking speed that the legs feel and tries to equate the best estimate of the forward speed to the naturally preferred speed. This sensory conflict theory also predicts that people would walk slower than usual relative to the walkway yet move faster than usual relative to the ground. These predictions agree qualitatively with available experimental observations, but there are quantitative differences.
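    The maximum-range-speed notion in part (2) can be made concrete with a standard toy metabolic model (the coefficients a and b below are assumed illustrative values, not the paper's data): if the total metabolic rate at speed v is E(v) = a + b*v**2, the total cost per unit distance E(v)/v = a/v + b*v is minimized at v* = sqrt(a/b).

```python
import math

# Toy metabolic model: total rate E(v) = a + b*v**2 (assumed coefficients).
a, b = 2.0, 1.25

def cost_per_distance(v):
    """Total metabolic cost per unit distance at walking speed v."""
    return a / v + b * v

v_star = math.sqrt(a / b)   # analytic minimizer of a/v + b*v
```

    At v*, the two terms a/v and b*v are equal and the minimum cost per distance is 2*sqrt(a*b); subtracting a resting rate from E(v) before dividing gives the alternative "net cost per distance" criterion that the abstract argues predicts speeds poorly.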

  16. Site partitioning for distributed redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1992-01-01

    Distributed redundant disk arrays can be used in a distributed computing system or database system to provide recovery in the presence of temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-complete and we propose two heuristic algorithms for finding approximate solutions.

  17. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are qualitatively and quantitatively evaluated to validate the accuracy, efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
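    The generalized p-shrinkage mapping the abstract invokes is commonly written in Chartrand's form (assumed here; the paper's exact variant may differ), which reduces to ordinary soft thresholding at p = 1:

```python
def p_shrink(x, lam, p):
    """Generalized p-shrinkage (Chartrand's form, common for l_p-style
    sparsity with 0 <= p <= 1):
        shrink_p(x) = sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
    For p = 1 this is ordinary soft thresholding max(|x| - lam, 0)."""
    if x == 0.0:
        return 0.0
    mag = abs(x) - lam ** (2 - p) * abs(x) ** (p - 1)
    return (1 if x > 0 else -1) * max(mag, 0.0)
```

    For p < 1 the threshold grows as |x| shrinks, so small coefficients are suppressed more aggressively than under l1 shrinkage while large ones are barely biased, which is the sparsity advantage TGpV exploits.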

  18. Sniffer Channel Selection for Monitoring Wireless LANs

    NASA Astrophysics Data System (ADS)

    Song, Yuan; Chen, Xian; Kim, Yoo-Ah; Wang, Bing; Chen, Guanling

    Wireless sniffers are often used to monitor APs in wireless LANs (WLANs) for network management, fault detection, traffic characterization, and optimizing deployment. It is cost effective to deploy single-radio sniffers that can monitor multiple nearby APs. However, since nearby APs often operate on orthogonal channels, a sniffer needs to switch among multiple channels to monitor its nearby APs. In this paper, we formulate and solve two optimization problems on sniffer channel selection. Both problems require that each AP be monitored by at least one sniffer. In addition, one optimization problem requires minimizing the maximum number of channels that a sniffer listens to, and the other requires minimizing the total number of channels that the sniffers listen to. We propose a novel LP-relaxation based algorithm, and two simple greedy heuristics for the above two optimization problems. Through simulation, we demonstrate that all the algorithms are effective in achieving their optimization goals, and the LP-based algorithm outperforms the greedy heuristics.

  19. Minimize system cost by choosing optimal subsystem reliability and redundancy

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1993-01-01

    The basic question which we address in this paper is how to choose among competing subsystems. This paper utilizes both reliabilities and costs to find the subsystems with the lowest overall expected cost. The paper begins by reviewing some of the concepts of expected value. We then address the problem of choosing among several competing subsystems. These concepts are then applied to k-out-of-n: G subsystems. We illustrate the use of the authors' basic program in viewing a range of possible solutions for several different examples. We then discuss the implications of various solutions in these examples.
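    The expected-cost comparison the abstract describes can be sketched for a k-out-of-n:G subsystem with i.i.d. components (the cost structure below, build cost plus a failure cost weighted by the failure probability, is an illustrative form, not necessarily the authors' model):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Reliability of a k-out-of-n:G subsystem with i.i.d. components of
    reliability p: probability that at least k of the n components work."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def expected_cost(build_cost, failure_cost, k, n, p):
    """Illustrative expected total cost: build cost plus failure cost
    weighted by the probability the subsystem fails."""
    return build_cost + failure_cost * (1 - k_out_of_n_reliability(k, n, p))
```

    Competing subsystems (different k, n, p, or costs) can then be ranked by this expected cost, which is the selection criterion the paper develops.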

  20. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. Its main objectives are to minimize the traveled distance, the total traveling time, the number of vehicles and the cost function of transportation. Reducing these variables decreases the total cost and increases the driver's satisfaction level. On the other hand, this satisfaction, which decreases as service time increases, is considered an important logistic problem for a company. Stochastic service times, governed by a probability variable, vary from tour to tour, yet they are ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour such that the total traveling time of the vehicles is limited to a specific bound with a defined probability. Since exact solution of the vehicle routing problem, which belongs to the category of NP-hard problems, is not practical on a large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
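    The simulated-annealing core of such a hybrid can be sketched generically (Metropolis acceptance with geometric cooling on a toy one-dimensional cost; the genetic operators and the routing encoding of the paper are omitted):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995,
                        steps=2000, seed=0):
    """Generic SA skeleton: Metropolis acceptance, geometric cooling.
    The paper's hybrid additionally applies genetic operators."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        # Accept improvements always; worse moves with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling
    return best

best = simulated_annealing(
    cost=lambda v: (v - 3.0) ** 2,
    neighbor=lambda v, rng: v + rng.uniform(-0.5, 0.5),
    x0=0.0,
)
```

    In a routing application, `x` would be a set of tours, `neighbor` a move such as swapping or relocating customers, and `cost` the stochastic-time-penalized route cost.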

  1. Low-impact recreational practices for wilderness and backcountry

    Treesearch

    David N. Cole

    1989-01-01

    Describes low-impact practices that can contribute to minimizing problems resulting from recreational use of wilderness and backcountry. Each practice is described and information is provided on such subjects as rationale for the practice, importance, and costs to visitors. Practices that may be counter-productive are described, as are important research gaps.

  2. School Security: A Growing Concern

    ERIC Educational Resources Information Center

    Walker, Milton G.

    1976-01-01

    Vandalism, trespassing, drug traffic, crowd control, automobile traffic, and emergencies such as fire or storms--these are the kinds of problems a school security system should be designed to eliminate or minimize. A preventive program can save more money than it costs and can improve the learning environment at the same time, says this writer.…

  3. Study of Naval Air Station Operations to Reduce Fuel Consumption

    DTIC Science & Technology

    2014-06-01

    …Smart Refueling…be attained (Klabjan, Johnson, Nemhauser, Gelman, & Ramaswamy, 2002). Dunbar et al. introduce a slack variable to the problem and change the objective to minimize the total cost associated with propagated delay rather than maximizing profit. The slack between an arrival and subsequent departure is…

  4. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) abandon the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of an inexact criterion eventually becomes harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for mitigating finite-sample problems in moderate-dimensional PR tasks.

  5. Application of Harmony Search algorithm to the solution of groundwater management models

    NASA Astrophysics Data System (ADS)

    Tamer Ayvaz, M.

    2009-06-01

    This study proposes a groundwater resources management model in which the solution is obtained through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW, is used as the simulation model. It is combined with a Harmony Search (HS) optimization algorithm, which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS-based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy a given demand (steady-state); and (iii) minimization of the pumping cost to satisfy a given demand over multiple management periods (transient). The sensitivity of the HS algorithm is evaluated through a sensitivity analysis that determines the impact of the solution parameters on convergence behavior. The results show that HS yields nearly the same or better solutions than previous solution methods and may be used to solve management problems in groundwater modeling.
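
    The Harmony Search loop such a model builds on (harmony memory, memory consideration, pitch adjustment, random improvisation) can be sketched for a generic box-constrained minimization. The parameter values below are illustrative defaults, not the ones tuned in the study.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iterations=2000, seed=0):
    """Minimize `objective` over box `bounds` with a basic Harmony Search.

    hms: harmony memory size; hmcr: memory-consideration rate;
    par: pitch-adjustment rate; bw: pitch-adjustment bandwidth
    (as a fraction of each variable's range).
    """
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # take value from memory
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:                                    # random improvisation
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]
```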

  6. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization and computational requirements that change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance that decreases over time. Performance can be improved by periodic redistribution of the computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
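
    The W(n)-based decision rule can be sketched directly from its definition: average the observed per-step degradation plus the one-time remapping cost over the window length, and remap once W(n) stops decreasing (justified by the abstract's result that E[W(n)] has at most one minimum). The degradation sequence and remap cost in the test are made-up illustrations.

```python
def w_statistic(degradations, remap_cost):
    """W(n): average degradation per step over n steps, counting the
    one-time cost of remapping at the end of the window."""
    n = len(degradations)
    return (sum(degradations) + remap_cost) / n

def steps_until_remap(degradations, remap_cost):
    """Remap at the first window length n where W(n) stops decreasing.
    Since E[W(n)] has at most one minimum, this greedy stopping rule
    recovers the optimal fixed interval."""
    best_n, best_w = 1, w_statistic(degradations[:1], remap_cost)
    for n in range(2, len(degradations) + 1):
        w = w_statistic(degradations[:n], remap_cost)
        if w >= best_w:
            return best_n
        best_n, best_w = n, w
    return best_n
```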

  7. Efficient algorithms for dilated mappings of binary trees

    NASA Technical Reports Server (NTRS)

    Iqbal, M. Ashraf

    1990-01-01

    The problem addressed is to find a 1-1 mapping of the vertices of a binary tree onto those of a target binary tree such that the son of a node in the first binary tree is mapped onto a descendent of the image of that node in the second binary tree. There are two natural measures of the cost of this mapping: the dilation cost, i.e., the maximum distance in the target binary tree between the images of vertices that are adjacent in the original tree, and the expansion cost, defined as the number of extra nodes/edges that must be added to the target binary tree to ensure a 1-1 mapping. An efficient algorithm to find a mapping of one binary tree onto another is described, and it is shown that one cost of the mapping can be minimized at the expense of the other. This problem arises when designing pipelined arithmetic logic units (ALUs) for special purpose computers. The pipeline is composed of ALU chips connected in the form of a binary tree. The operands to the pipeline are supplied to the leaf nodes of the binary tree, which process them and pass the results up to their parents. The final result is available at the root. As each new application may require a distinct nesting of operations, it is useful to be able to find a good mapping of a new binary tree onto the existing ALU tree. A different problem arises if every required binary tree is known beforehand; then it is useful to hardwire the pipeline in the form of a minimal supertree that contains all required binary trees.

  8. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, using the proposed model, we select the optimal supplier and calculate the optimal product volume to purchase from that supplier so that the inventory level lands as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
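
    A minimal backward-recursion sketch of such a combined supplier-selection/inventory model, assuming a quadratic tracking penalty around the reference level and a discrete demand distribution (both are illustrative choices, not the paper's exact formulation):

```python
import itertools

def optimal_policy(levels, orders, prices, demand_dist, ref, horizon, holding=1.0):
    """Backward stochastic dynamic programming: in each period choose a
    supplier (unit price) and an order quantity minimizing expected
    purchase cost plus a penalty on squared deviation from `ref`.

    levels: contiguous feasible inventory levels; demand_dist: list of
    (demand, probability) pairs. Returns policy[(t, level)] = (price, qty).
    """
    lo, hi = min(levels), max(levels)
    value = {lvl: 0.0 for lvl in levels}   # terminal value = 0
    policy = {}
    for t in reversed(range(horizon)):
        new_value = {}
        for lvl in levels:
            best = None
            for price, q in itertools.product(prices, orders):
                cost = price * q
                for d, p in demand_dist:
                    nxt = min(max(lvl + q - d, lo), hi)  # clip to feasible range
                    cost += p * (holding * (nxt - ref) ** 2 + value[nxt])
                if best is None or cost < best[0]:
                    best = (cost, price, q)
            new_value[lvl] = best[0]
            policy[(t, lvl)] = (best[1], best[2])
        value = new_value
    return policy
```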

  9. Solving Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) using BRKGA with local search

    NASA Astrophysics Data System (ADS)

    Prasetyo, H.; Alfatsani, M. A.; Fauza, G.

    2018-05-01

    The main issue in the vehicle routing problem (VRP) is finding the shortest product distribution route from the depot to outlets so as to minimize the total cost of distribution. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of the VRP that accommodates vehicle capacity and the distribution period. Since CCVRPTW is NP-hard, an efficient and effective algorithm is required to solve it. This study aimed to develop a Biased Random Key Genetic Algorithm (BRKGA) combined with local search to solve CCVRPTW. The algorithm design was coded in MATLAB. Using numerical tests, optimal algorithm parameters were set and the algorithm was compared with a heuristic method and standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search yielded a lower total distribution cost than the heuristic method. Moreover, the developed algorithm successfully improved on the performance of standard BRKGA.
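
    The core BRKGA mechanics (random-key encoding, sorted-key decoding, elitist biased crossover, mutant injection) can be sketched for a single-route case. The local search step and the study's tuned parameters are omitted, and the instance in the test is illustrative.

```python
import random

def decode(keys):
    """Random-key decoder: visiting order = customers sorted by their keys."""
    return [i + 1 for i in sorted(range(len(keys)), key=keys.__getitem__)]

def route_cost(route, dist):
    path = [0] + route + [0]          # depot is node 0
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def brkga(dist, pop=30, elite=6, mutants=6, gens=100, rho=0.7, seed=0):
    """Biased random-key GA for one vehicle: elites survive unchanged,
    offspring inherit each key from an elite parent with probability rho,
    and `mutants` fresh random chromosomes are injected each generation."""
    rng = random.Random(seed)
    n = len(dist) - 1
    population = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda k: route_cost(decode(k), dist))
        nxt = population[:elite]
        nxt += [[rng.random() for _ in range(n)] for _ in range(mutants)]
        while len(nxt) < pop:
            e = rng.choice(population[:elite])
            o = rng.choice(population[elite:])
            nxt.append([e[d] if rng.random() < rho else o[d] for d in range(n)])
        population = nxt
    best = min(population, key=lambda k: route_cost(decode(k), dist))
    return decode(best), route_cost(decode(best), dist)
```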

  10. Randomized shortest-path problems: two related models.

    PubMed

    Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh

    2009-08-01

    This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost; nothing unusual so far, and you could use a standard shortest-path algorithm. Suppose, however, that you want to avoid pure deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability in the routing policy (i.e., the routing policy is randomized). This problem, called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Then, necessary conditions to compute the optimal randomized policy, minimizing the expected routing cost, are derived. Iterating these necessary conditions, reminiscent of Bellman's value iteration equations, allows computing an optimal policy, that is, a set of transition probabilities in each node. Interestingly and surprisingly enough, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), appearing in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
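
    One concrete way to realize a randomized routing policy of this kind is entropy-regularized ("soft") value iteration, where an inverse-temperature parameter theta interpolates between a random walk (small theta) and the deterministic shortest path (large theta). This is a simplified sketch in the spirit of the letter's sum-over-paths formalism, not its exact algorithm.

```python
import math

def randomized_sp_policy(costs, dest, theta=1.0, iters=200):
    """Soft value iteration for a randomized shortest-path policy.

    costs: {node: {successor: edge_cost}}; dest has value 0.
    Values satisfy v(n) = -(1/theta) * log(sum_m exp(-theta*(c(n,m)+v(m)))),
    and the policy is the Boltzmann distribution over successors.
    """
    nodes = list(costs) + [dest]
    v = {n: 0.0 for n in nodes}
    for _ in range(iters):
        for n in costs:
            v[n] = -math.log(sum(math.exp(-theta * (c + v[m]))
                                 for m, c in costs[n].items())) / theta
    policy = {}
    for n in costs:
        w = {m: math.exp(-theta * (c + v[m])) for m, c in costs[n].items()}
        z = sum(w.values())
        policy[n] = {m: wi / z for m, wi in w.items()}
    return policy, v
```

    With a large theta the cheap route dominates; with theta near zero the successors approach equal probability.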

  11. Periodic Heterogeneous Vehicle Routing Problem With Driver Scheduling

    NASA Astrophysics Data System (ADS)

    Mardiana Panggabean, Ellis; Mawengkang, Herman; Azis, Zainal; Filia Sari, Rina

    2018-01-01

    The paper develops a model for the optimal management of logistic delivery of a given commodity. The company has different types of vehicles with different capacities to deliver the commodity to customers. The problem is then called the Periodic Heterogeneous Vehicle Routing Problem (PHVRP). The goal is to schedule the deliveries according to feasible combinations of delivery days and to determine the fleet and driver schedules and the routing policies of the vehicles. The objective is to minimize the sum of the costs of all routes over the planning horizon. We propose a combined approach of a heuristic algorithm and an exact method to solve the problem.

  12. Evidence-based medicine: is translating evidence into practice a solution to the cost-quality challenges facing medicine?

    PubMed

    Larson, E B

    1999-09-01

    Evidence-based medicine (EBM) and practice guidelines have been embraced by increasing numbers of scholars, administrators, and medical journalists as an intellectually attractive solution to the dilemma of improving health care quality while reducing costs. However, certain factors have thus far limited the role that EBM might play in resolving cost-quality trade-offs. Beyond the quality of the guideline and the evidence base itself, critical factors for success include local clinician involvement, a unified or closed medical staff, protocols that minimize use of clinical judgment and that call for involvement of so-called physician extenders (such as nurse practitioners and physician assistants), and financial incentives. TROUBLESOME ISSUES RELATED TO COST-QUALITY TRADE-OFFS: Rationing presents many dilemmas, but for physicians one critical problem is determining what the physician's responsibility is. Is the physician to be the patient's advocate, or should the physician be the advocate of all patients (the patients' advocate)? How do we get physicians out of potentially conflicted roles? EBM guidelines are needed to help minimize the number of instances in which physicians are asked to ration care at the bedside. If the public can decide to share and limit resources--presumably based on shared priorities--physicians would have a basis to act as advocates for all patients. Although EBM alone is not a simple solution to the problems of increasing costs and public expectations, it can be an important source of input and information in relating the value of service and medical technology to public priorities.

  13. Design for Warehouse with Product Flow Type Allocation using Linear Programming: A Case Study in a Textile Industry

    NASA Astrophysics Data System (ADS)

    Khannan, M. S. A.; Nafisah, L.; Palupi, D. L.

    2018-03-01

    Sari Warna Co. Ltd, a company engaged in the textile industry, is experiencing problems in the allocation and placement of goods in its warehouse. To date the company has not allocated products by flow type or assigned placement to the respective products, resulting in a high total material handling cost. Therefore, this study aimed to determine the allocation and placement of goods in the warehouse corresponding to product flow type with minimal total material handling cost. This quantitative research is based on storage and warehousing theory and uses the mathematical optimization model of Heragu (2005), solved with the LINGO 11.0 software. The resulting proportions of the distribution for the functional areas are 0.0734 for the cross-docking area, 0.1894 for the reserve area, and 0.7372 for the forward area. The allocation is 5 products of flow type 1, 9 products of flow type 2, 2 products of flow type 3, and 6 products of flow type 4. The optimal total material handling cost obtained with this mathematical model is Rp 43.079.510, compared with Rp 49.869.728 under the company's existing method, a saving of Rp 6.790.218. Thus, all products can be allocated in accordance with their product flow type at minimal total material handling cost.

  14. Principles for problem aggregation and assignment in medium scale multiprocessors

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs are studied between balanced workload and the communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.

  15. Alternative fuels

    NASA Technical Reports Server (NTRS)

    Grobman, J. S.; Butze, H. F.; Friedman, R.; Antoine, A. C.; Reynolds, T. W.

    1977-01-01

    Potential problems related to the use of alternative aviation turbine fuels are discussed and both ongoing and required research into these fuels is described. This discussion is limited to aviation turbine fuels composed of liquid hydrocarbons. The advantages and disadvantages of the various solutions to the problems are summarized. The first solution is to continue to develop the necessary technology at the refinery to produce specification jet fuels regardless of the crude source. The second solution is to minimize energy consumption at the refinery and keep fuel costs down by relaxing specifications.

  16. A framework for multi-stakeholder decision-making and ...

    EPA Pesticide Factsheets

    We propose a decision-making framework to compute compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives. In our setting, we shape the stakeholder dissatisfaction distribution by solving a conditional-value-at-risk (CVaR) minimization problem. The CVaR problem is parameterized by a probability level that shapes the tail of the dissatisfaction distribution. The proposed approach allows us to compute a family of compromise solutions and generalizes multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework that involve complex decision-making processes. We demonstrate the developments using a biowaste facility location case study in which we seek to balance stakeholder priorities on transportation, safety, water quality, and capital costs. This manuscript describes the methodology of a new decision-making framework that computes compromise solutions balancing conflicting priorities of multiple stakeholders on multiple objectives, as needed for the SHC Decision Science and Support Tools project. A biowaste facility location is employed as the case study.
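
    The scenario form of CVaR used in such formulations can be computed with the Rockafellar-Uryasev representation. This is a minimal sketch over equally likely dissatisfaction scenarios; the full framework also optimizes design variables, which is omitted here.

```python
def cvar(losses, alpha):
    """CVaR at level alpha via the Rockafellar-Uryasev formula on equally
    likely scenarios: min over t of t + mean(max(0, loss - t)) / (1 - alpha).
    The objective is piecewise linear and convex, so the minimizing t can
    be taken at one of the observed loss values."""
    n = len(losses)

    def objective(t):
        return t + sum(max(0.0, l - t) for l in losses) / ((1 - alpha) * n)

    return min(objective(t) for t in losses)
```

    For alpha = 0.8, CVaR is the mean of the worst 20% of scenarios; as alpha approaches 0 it recovers the average, and as alpha approaches 1 the worst case, which is how the probability level shapes the tail.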

  17. Tire Force Estimation using a Proportional Integral Observer

    NASA Astrophysics Data System (ADS)

    Farhat, Ahmad; Koenig, Damien; Hernandez-Alcantara, Diana; Morales-Menendez, Ruben

    2017-01-01

    This paper addresses a method for detecting critical stability situations in the lateral vehicle dynamics by estimating the non-linear part of the tire forces. These forces indicate the road holding performance of the vehicle. The estimation method is based on a robust fault detection and estimation approach that minimizes the sensitivity of the residual to disturbances and uncertainties. It consists of the design of a Proportional Integral Observer (PIO) that minimizes the well-known H ∞ norm for worst-case uncertainty and disturbance attenuation while meeting a transient response specification. This multi-objective problem is formulated as a Linear Matrix Inequality (LMI) feasibility problem in which a cost function subject to LMI constraints is minimized. The approach is employed to generate a set of switched robust observers for uncertain switched systems, where the convergence of the observer is ensured using a Multiple Lyapunov Function (MLF). Since the forces to be estimated cannot be physically measured, a simulation scenario with CarSim is presented to illustrate the developed method.

  18. Using genetic algorithm to determine the optimal order quantities for multi-item multi-period under warehouse capacity constraints in kitchenware manufacturing

    NASA Astrophysics Data System (ADS)

    Saraswati, D.; Sari, D. K.; Johan, V.

    2017-11-01

    The study was conducted at a manufacturer producing various kinds of kitchenware, with the kitchen sink as the main product. Four types of steel sheets were selected as the raw materials of the kitchen sink. The manufacturer wanted to determine how many steel sheets to order from a single supplier to meet the production requirements while minimizing the total inventory cost. In this case, an economic order quantity (EOQ) model was developed using all-unit discounts for the price of the steel sheets and a limited warehouse capacity. A genetic algorithm (GA) was used to find the minimum of the total inventory cost, computed as the sum of purchasing cost, ordering cost, holding cost and penalty cost.
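
    The all-unit-discount total cost function that such a GA would minimize can be sketched directly. For illustration the minimizer is found here by exhaustive scan rather than a GA, the penalty and capacity terms are omitted, and all numbers in the test are made up.

```python
def unit_price(q, breaks):
    """All-unit discount: the entire order is charged at the price of the
    highest bracket whose minimum quantity q reaches.
    breaks: [(min_qty, price), ...] sorted by min_qty ascending."""
    price = breaks[0][1]
    for min_qty, p in breaks:
        if q >= min_qty:
            price = p
    return price

def total_cost(q, demand, order_cost, holding_rate, breaks):
    """Annual purchasing + ordering + holding cost for order quantity q."""
    p = unit_price(q, breaks)
    return p * demand + order_cost * demand / q + holding_rate * p * q / 2.0

def best_order_quantity(demand, order_cost, holding_rate, breaks, q_max):
    """Exhaustive scan over integer order quantities; a GA searches the
    same cost landscape when the decision space is larger."""
    return min(range(1, q_max + 1),
               key=lambda q: total_cost(q, demand, order_cost, holding_rate, breaks))
```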

  19. Min-Cut Based Segmentation of Airborne LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.

    2012-07-01

    Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into 3-D segments that comply with the nature of actual objects is affected by the neighborhood, scale, features, and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within the local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, used especially in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph connecting the points with each other and with the nodes representing feature clusters hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem and allows an approximate solution by min-cuts, reaching a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method on an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm (RMSE). We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that a smoothness cost based only on a simple distance parameter does not conform well to the natural structure of the points. Including shape information in the energy function, by assigning costs based on local properties, may help achieve a better representation for segmentation.
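
    The graph construction behind such min-cut segmentation can be sketched for the binary-label case: terminal links carry the data costs, neighbor links carry the smoothness costs, and a min s-t cut (computed here with a plain Edmonds-Karp max-flow) yields the energy-minimizing labeling. The multi-label setting used for feature clusters requires iterated expansion moves, which are omitted here.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow; `capacity` is {u: {v: cap}} and is turned
    into the residual graph in place. Returns (flow value, set of nodes
    on the source side of the min cut)."""
    flow = 0.0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:          # BFS for an augmenting path
            u = queue.popleft()
            for v, c in capacity[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow, set(parent)              # reachable set = source side
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:                         # push flow, update residuals
            capacity[u][v] -= bottleneck
            capacity[v][u] = capacity[v].get(u, 0.0) + bottleneck
        flow += bottleneck

def binary_segment(data_cost, smooth):
    """Graph-cut labeling: data_cost[i] = (cost_if_0, cost_if_1),
    smooth[(i, j)] = penalty when i and j take different labels."""
    cap = {"s": {}, "t": {}}
    for i, (c0, c1) in data_cost.items():
        cap.setdefault(i, {})
        cap["s"][i] = c1          # paid when i ends on the sink side (label 1)
        cap[i]["t"] = c0          # paid when i stays on the source side (label 0)
    for (i, j), w in smooth.items():
        cap[i][j] = cap[i].get(j, 0.0) + w
        cap[j][i] = cap[j].get(i, 0.0) + w
    _, source_side = max_flow(cap, "s", "t")
    return {i: 0 if i in source_side else 1 for i in data_cost}
```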

  20. Life cycle optimization model for integrated cogeneration and energy systems applications in buildings

    NASA Astrophysics Data System (ADS)

    Osman, Ayat E.

    Energy use in commercial buildings constitutes a major proportion of energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To answer the question of which energy source(s) can most beneficially and cost-effectively meet the energy demands of a building, optimization techniques have been applied in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Because of the significant environmental impacts that can result from meeting the energy demands of buildings, building design should incorporate environmental criteria into the decision-making process. The objective of this research is to develop a framework and model to optimize a building's operation by integrating cogeneration systems and utility systems to meet the electrical, heating, and cooling demands, considering both the potential life cycle environmental impacts of meeting those demands and the economic implications. Two LCA optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions used in the formulation of the problems include: (1) minimizing life cycle primary energy consumption, (2) minimizing global warming potential, (3) minimizing tropospheric ozone precursor potential, (4) minimizing acidification potential, (5) minimizing NOx, SO2, and CO2, and (6) minimizing life cycle costs, considering a study period of ten years and the lifetime of the equipment. The two LCA optimization models can be used for: (a) long-term planning and operational analysis in buildings, by analyzing the hourly energy use of a building during a day, and (b) design and quick analysis of building operation, based on periodic analysis of a building's energy use over a year. A Pareto-optimal frontier is also derived, which defines the minimum cost required to achieve any level of environmental emission or primary energy usage, or inversely the minimum environmental indicator and primary energy usage achievable and the cost required to achieve it.

  1. Advanced control concepts. [for shuttle ascent vehicles

    NASA Technical Reports Server (NTRS)

    Sharp, J. B.; Coppey, J. M.

    1973-01-01

    The problems of excess control devices and insufficient trim control capability on shuttle ascent vehicles were investigated. The trim problem is solved at all time points of interest using Lagrangian multipliers and a Simplex based iterative algorithm developed as a result of the study. This algorithm has the capability to solve any bounded linear problem with physically realizable constraints, and to minimize any piecewise differentiable cost function. Both solution methods also automatically distribute the command torques to the control devices. It is shown that trim requirements are unrealizable if only the orbiter engines and the aerodynamic surfaces are used.

  2. Shifting orders among suppliers considering risk, price and transportation cost

    NASA Astrophysics Data System (ADS)

    Revitasari, C.; Pujawan, I. N.

    2018-04-01

    Supplier order allocation is an important supply chain decision for an enterprise. It is related to the supplier's function as a provider of raw materials and other supporting materials used in the production process. Most work on order allocation has been based on costs and other supply chain performance measures, but very little of it takes risks into consideration. In this paper we address the problem of allocating orders for a single commodity sourced from multiple suppliers, considering supply risks in addition to minimizing transportation costs. The supply chain risk was investigated and a procedure was proposed in the risk mitigation phase as a form of risk profile. The objective of including the risk profile in order allocation is to shift product flow from riskier suppliers to relatively less risky ones. The proposed procedure is applied to a sugar company. The result suggests that order allocations should be maximized to suppliers with relatively low risk and minimized to suppliers with relatively larger risks.

  3. Text-line extraction in handwritten Chinese documents based on an energy minimization framework.

    PubMed

    Koo, Hyung Il; Cho, Nam Ik

    2012-03-01

    Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.

  4. Determining the Optimal Solution for Quadratically Constrained Quadratic Programming (QCQP) on Energy-Saving Generation Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Lesmana, E.; Chaerani, D.; Khansa, H. N.

    2018-03-01

    Energy-Saving Generation Dispatch (ESGD) is a scheme created by the Chinese government in an attempt to minimize the CO2 emissions produced by power plants. The scheme responds to global warming, which is primarily caused by excess CO2 in the earth's atmosphere: while the need for electricity is absolute, the plants producing it are mostly thermal power plants that emit large amounts of CO2. Several approaches to implementing this scheme have been proposed; one of them, via a minimum cost flow formulation, results in a Quadratically Constrained Quadratic Programming (QCQP) problem. In this paper, the ESGD problem with minimum cost flow in QCQP form is solved using Lagrange's multiplier method.
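
    For the equality-constrained quadratic core of such a problem, Lagrange's multiplier method reduces to solving the KKT linear system. Below is a self-contained sketch on a made-up two-generator dispatch; the quadratic constraints of the full QCQP are omitted.

```python
def gauss_solve(M, rhs):
    """Gaussian elimination with partial pivoting for a dense system."""
    a = [row[:] + [r] for row, r in zip(M, rhs)]
    n = len(a)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for j in range(col, n + 1):
                a[r][j] -= f * a[col][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

def solve_kkt(Q, c, A, b):
    """Minimize 1/2 x'Qx + c'x subject to Ax = b via Lagrange multipliers.
    Stationarity gives the KKT system [[Q, A'], [A, 0]] [x; lam] = [-c; b]."""
    n, m = len(Q), len(A)
    M = [[0.0] * (n + m) for _ in range(n + m)]
    rhs = [0.0] * (n + m)
    for i in range(n):
        for j in range(n):
            M[i][j] = Q[i][j]
        for k in range(m):
            M[i][n + k] = A[k][i]       # A transpose block
        rhs[i] = -c[i]
    for k in range(m):
        for j in range(n):
            M[n + k][j] = A[k][j]
        rhs[n + k] = b[k]
    return gauss_solve(M, rhs)[:n]       # drop the multipliers
```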

  5. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  6. Mitigation of epidemics in contact networks through optimal contact adaptation *

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2013-01-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes while a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimizing total infection cases and minimizing the reduction of contact weights. Using Pontryagin's principle, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics, one based on a bang-bang control function and one on a piecewise nonlinear control function. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results identify the infection level at which the mitigation strategies are most effectively applied to the contact weights. PMID:23906209

  7. Mitigation of epidemics in contact networks through optimal contact adaptation.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2013-08-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes while a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimizing total infection cases and minimizing the reduction of contact weights. Using Pontryagin's principle, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics, one based on a bang-bang control function and one on a piecewise nonlinear control function. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results identify the infection level at which the mitigation strategies are most effectively applied to the contact weights.
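    A toy simulation makes the trade-off concrete. The sketch below uses illustrative assumptions throughout (discrete time, a uniform contact-scaling factor u rather than the paper's optimal per-link control): scaling all contact weights by u < 1 lowers each link's transmission probability and hence the final number of infection cases; u = 0 removes all transmission, u = 1 leaves contacts untouched.

```python
import random

def sir_final_size(weights, beta, gamma, seed_node, u=1.0, steps=200, rng_seed=1):
    """Final number of ever-infected nodes; u scales every contact weight."""
    rng = random.Random(rng_seed)
    n = len(weights)
    state = ['S'] * n
    state[seed_node] = 'I'
    for _ in range(steps):
        new = list(state)
        for i in range(n):
            if state[i] == 'S':
                for j in range(n):
                    if state[j] == 'I' and rng.random() < beta * u * weights[i][j]:
                        new[i] = 'I'   # transmission over contact (i, j)
                        break
            elif state[i] == 'I' and rng.random() < gamma:
                new[i] = 'R'           # recovery
        state = new
    return sum(s != 'S' for s in state)

# Hypothetical contact network: complete graph on 20 nodes, unit weights
w = [[0.0 if i == j else 1.0 for j in range(20)] for i in range(20)]
```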

  8. Exact solutions for the collaborative pickup and delivery problem.

    PubMed

    Gansterer, Margaretha; Hartl, Richard F; Salzmann, Philipp E H

    2018-01-01

    In this study we investigate the decision problem of a central authority in pickup and delivery carrier collaborations. Customer requests are to be redistributed among participants such that the total cost is minimized. We formulate the problem as a multi-depot traveling salesman problem with pickups and deliveries. We apply three well-established exact solution approaches and compare their performance in terms of computational time. To avoid unrealistic solutions with unevenly distributed workload, we extend the problem by introducing minimum workload constraints. Our computational results show that, while for the original problem Benders decomposition is the method of choice, for the newly formulated problem this method is clearly dominated by the proposed column generation approach. The obtained results can be used as benchmarks for decentralized mechanisms in collaborative pickup and delivery problems.

  9. The integrated model for solving the single-period deterministic inventory routing problem

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik

    2016-08-01

    This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates decisions between a supplier and his customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers in the single-period deterministic inventory routing problem (SP-DIRP) system. As a result, a linear mixed-integer program is developed for the solution of the SP-DIRP problem.

  10. Planning and Conducting a Community Health Screening Fair. NCCSCE Working Paper Series, [Number 2].

    ERIC Educational Resources Information Center

    Berghaus, William C. B.; Graham, Joy

    Each spring, Lord Fairfax Community College (LFCC) organizes and coordinates an Annual Health Screening Fair, a preventive health package designed to help residents identify health-related problems and become more informed about maintaining good health. The community service goals of the fair include the provision of free or minimal-cost health…

  11. Forest road design to minimize erosion in the Southern Appalachians

    Treesearch

    L.W. Swift

    1985-01-01

    Excessive erosion and low serviceability of roads are continuing problems associated with forest management in the mountains of the southeastern United States. Road and erosion research at Coweeta Hydrologic Laboratory in western North Carolina dates from roadbank stabilization work in the 1930's. Emphasis has been to develop and demonstrate a low-cost, low-...

  12. The Costs of Being a Child Care Teacher: Revisiting the Problem of Low Wages

    ERIC Educational Resources Information Center

    Ackerman, Debra J.

    2006-01-01

    The demand for child care in the United States continues to grow, but child care workers' wages remain minimal. Using examples within New Jersey, the author demonstrates how low wages impact child care quality and are directly related to the effects of the competitive marketplace. Various historical, regulatory, and cultural contexts also…

  13. A hybrid genetic algorithm for solving bi-objective traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Ma, Mei; Li, Hecheng

    2017-08-01

    The traveling salesman problem (TSP) is a typical combinatorial optimization problem; in the traditional TSP only the tour distance is minimized. When more than one optimization objective arises, the problem is known as a multi-objective TSP. In the present paper, a bi-objective traveling salesman problem (BOTSP) is considered, in which both the distance and the cost are taken as optimization objectives. In order to solve the problem efficiently, a hybrid genetic algorithm is proposed. Firstly, two satisfaction degree indices are provided for each edge by considering the influences of the distance and the cost weight. The first satisfaction degree is used to select edges in a “rough” way, while the second satisfaction degree is applied for a more “refined” choice. Secondly, the two satisfaction degrees are also applied to generate new individuals in the iteration process. Finally, based on a genetic algorithm framework and a 2-opt selection strategy, a hybrid genetic algorithm is proposed. Simulations illustrate the efficiency of the proposed algorithm.
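    The 2-opt step such a hybrid GA relies on is easy to illustrate. This sketch uses a hypothetical weighting scheme (a single weight w blending the two objectives, which is simpler than the paper's satisfaction-degree indices) and improves a tour under the combined distance-and-cost score.

```python
def tour_score(tour, dist, cost, w):
    """Weighted-sum score of a closed tour: w*distance + (1 - w)*cost."""
    n = len(tour)
    return sum(w * dist[tour[i]][tour[(i + 1) % n]] +
               (1 - w) * cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist, cost, w):
    """Reverse tour segments while the combined score keeps improving."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_score(cand, dist, cost, w) < tour_score(tour, dist, cost, w) - 1e-12:
                    tour, improved = cand, True
    return tour
```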

  14. A Framework for Multi-Stakeholder Decision-Making and ...

    EPA Pesticide Factsheets

    This contribution describes the implementation of the conditional-value-at-risk (CVaR) metric to create a general multi-stakeholder decision-making framework. It is observed that stakeholder dissatisfactions (distance to their individual ideal solutions) can be interpreted as random variables. We thus shape the dissatisfaction distribution and find an optimal compromise solution by solving a CVaR minimization problem parameterized in the probability level. This enables us to generalize multi-stakeholder settings previously proposed in the literature that minimizes average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework. We demonstrate the framework in a bio-waste processing facility location case study, where we seek compromise solutions (facility locations) that balance stakeholder priorities on transportation, safety, water quality, and capital costs. This conference presentation abstract explains a new decision-making framework that computes compromise solution alternatives (reach consensus) by mitigating dissatisfactions among stakeholders as needed for SHC Decision Science and Support Tools project.
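    The empirical side of the idea can be sketched in a few lines (illustrative estimator and data; the paper works with a parameterized optimization model, not raw samples): CVaR at probability level alpha is the mean of the worst (1 - alpha) fraction of stakeholder dissatisfactions, so alpha = 0 recovers the average-dissatisfaction compromise and alpha near 1 the worst-case one.

```python
def cvar(samples, alpha):
    """Empirical CVaR: mean of the worst (1 - alpha) fraction of samples."""
    xs = sorted(samples)
    k = int(len(xs) * alpha)      # index where the upper tail begins
    tail = xs[k:] or [xs[-1]]     # guard against an empty tail
    return sum(tail) / len(tail)

# Hypothetical stakeholder dissatisfactions (distances to ideal solutions)
dissat = [0.2, 0.9, 0.4, 0.1, 0.7]
compromise_avg = cvar(dissat, 0.0)    # average dissatisfaction
compromise_wc = cvar(dissat, 0.99)    # essentially the worst case
```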

  15. Land transportation model for supply chain manufacturing industries

    NASA Astrophysics Data System (ADS)

    Kurniawan, Fajar

    2017-12-01

    Supply chain is a system that integrates production, inventory, distribution and information processes to increase productivity and minimize costs. Transportation is an important part of the supply chain system, especially for supporting the distribution of materials, work-in-process products and final products. Jakarta serves as the distribution center of manufacturing industries for the surrounding industrial areas, and the transportation system has a large influence on the efficiency of the supply chain process. The main problem faced in Jakarta is traffic congestion, which affects distribution times. Based on a system dynamics model, several scenarios can provide solutions that minimize distribution times, and thereby costs, such as constructing ports near industrial areas other than Tanjung Priok, widening road facilities, developing the railway system, and developing distribution centers.

  16. A real-space approach to the X-ray phase problem

    NASA Astrophysics Data System (ADS)

    Liu, Xiangan

    Over the past few decades, the phase problem of X-ray crystallography has been explored in reciprocal space in the so-called direct methods. Here we investigate the problem using a real-space approach that bypasses the laborious procedure of frequent Fourier synthesis and peak picking. Starting from a completely random structure, we move the atoms around in real space to minimize a cost function. A Monte Carlo method named simulated annealing (SA) is employed to search for the global minimum of the cost function, which can be constructed in either real space or reciprocal space. In the hybrid minimal principle, we combine the dual-space costs. One part of the cost function monitors the probability distribution of the phase triplets, while the other is a real-space cost function representing the discrepancy between measured and calculated intensities. Compared to single-space cost functions, the dual-space cost function has a greatly improved landscape and therefore can prevent the system from being trapped in metastable states. Thus, the structures of large molecules such as virginiamycin (C43H49N7O10 · 3CH3OH), isoleucinomycin (C60H102N6O18) and hexadecaisoleucinomycin (HEXIL) (C80H136N8O24) can now be solved, whereas this would not be possible using a single cost function. When a molecule gets larger, the configurational space becomes larger, and the required CPU time increases exponentially. The method of improved Monte Carlo sampling has demonstrated its capability to solve large molecular structures. The atoms are encouraged to sample the high-density regions in space determined by an approximate density map, which in turn is updated and modified by averaging and Fourier synthesis. This type of biased sampling has led to considerable reduction of the configurational space. It greatly improves the algorithm compared to the previous uniform sampling. Hence, for instance, 90% of computer run time could be cut in solving the complex structure of isoleucinomycin. Successful trial calculations include larger molecular structures such as HEXIL and a collagen-like peptide (PPG). Moving chemical fragments is proposed to reduce the degrees of freedom. Furthermore, stereochemical parameters are considered for geometric constraints and for a cost function related to chemical energy.
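    The simulated annealing loop itself is generic and worth a sketch (illustrative: the real cost couples phase-triplet statistics with measured intensities, while here the "structure" is just a vector and the cost a toy quadratic).

```python
import math, random

def anneal(cost, x0, step, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimize cost over real vectors with a standard Metropolis schedule."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = [xi + rng.uniform(-step, step) for xi in x]
        fy = cost(y)
        # accept improvements always, uphill moves with Boltzmann probability
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                   # geometric cooling schedule
    return best, fbest

best, fbest = anneal(lambda v: sum(vi * vi for vi in v), [3.0, -4.0], step=0.5)
```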

  17. Exploration on the technology for ozone reduction in urban sewage treatment

    NASA Astrophysics Data System (ADS)

    Yang, Min; Sun, Yi; Han, Zhicheng; Liu, Jun

    2017-05-01

    With the rapid development of China’s economy, urban water consumption is increasing, and sewage treatment plants produce large amounts of sludge whose treatment costs are relatively high. How to deal with sewage sludge has therefore become a pressing issue. This paper summarizes studies of ozonation-based sludge minimization technology. It introduces and discusses the principle of sludge minimization by ozonation, the minimization efficiency and its relationship to ozone dosage, and the effects of ozonating return sludge on wastewater treatment operation and on sludge settling and dewatering characteristics. An economic estimate of the technology is also given. The results show that this sludge minimization technology has broad application prospects.

  18. Inventory Control System for a Healthcare Apparel Service Centre with Stockout Risk: A Case Analysis

    PubMed Central

    Hui, Chi-Leung

    2017-01-01

    Based on the real-world inventory control problem of a capacitated healthcare apparel service centre in Hong Kong, which provides tailor-made apparel-making services for elderly and disabled people, this paper studies a partially backordered continuous review inventory control problem in which product demand follows a Poisson process with a constant lead time. The system is controlled by a (Q, r) inventory policy which incorporates the stockout risk, storage capacity, and partial backlog. The healthcare apparel service centre, under the capacity constraint, aims to minimize the inventory cost while achieving a low stockout risk. To address this challenge, an optimization problem is constructed. A real case-based data analysis is conducted, and the result shows that the expected total cost per order cycle is reduced substantially, by around 20%, with our proposed optimal inventory control policy. An extensive sensitivity analysis is conducted to generate additional insights. PMID:29527283

  19. Inventory Control System for a Healthcare Apparel Service Centre with Stockout Risk: A Case Analysis.

    PubMed

    Pan, An; Hui, Chi-Leung

    2017-01-01

    Based on the real-world inventory control problem of a capacitated healthcare apparel service centre in Hong Kong, which provides tailor-made apparel-making services for elderly and disabled people, this paper studies a partially backordered continuous review inventory control problem in which product demand follows a Poisson process with a constant lead time. The system is controlled by a (Q, r) inventory policy which incorporates the stockout risk, storage capacity, and partial backlog. The healthcare apparel service centre, under the capacity constraint, aims to minimize the inventory cost while achieving a low stockout risk. To address this challenge, an optimization problem is constructed. A real case-based data analysis is conducted, and the result shows that the expected total cost per order cycle is reduced substantially, by around 20%, with our proposed optimal inventory control policy. An extensive sensitivity analysis is conducted to generate additional insights.
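    The stockout-risk side of a (Q, r) policy is easy to make concrete. A hedged sketch (assumed simplification: only the reorder point r and Poisson lead-time demand are modeled, not the full cost function): the probability of a stockout in a cycle is P(D > r), and one would pick the smallest r that keeps this below a target risk.

```python
import math

def poisson_sf(r, mu):
    """P(D > r) for Poisson(mu) lead-time demand D."""
    cdf = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(r + 1))
    return 1.0 - cdf

def smallest_r(mu, risk):
    """Smallest reorder point r whose stockout probability is <= risk."""
    r = 0
    while poisson_sf(r, mu) > risk:
        r += 1
    return r
```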

  20. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Results on simulated and real data are evaluated qualitatively and quantitatively to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
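    A generalized p-shrinkage mapping admits a compact sketch. The form below is the commonly used Chartrand-style operator (an assumption here, since the abstract does not spell out the exact mapping); at p = 1 it reduces to ordinary soft thresholding.

```python
import math

def p_shrink(x, lam, p):
    """Chartrand-style p-shrinkage (assumed form):
    sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0)."""
    if x == 0.0:
        return 0.0                     # avoid 0**(negative) for p < 1
    mag = max(abs(x) - lam ** (2.0 - p) * abs(x) ** (p - 1.0), 0.0)
    return math.copysign(mag, x)
```

    For p < 1 the operator penalizes large coefficients less than soft thresholding does, which is the sparsity-promoting behavior the TGpV model exploits.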

  1. Integrated Building Energy Systems Design Considering Storage Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadler, Michael; Marnay, Chris; Siddiqui, Afzal

    The addition of storage technologies such as flow batteries, conventional batteries, and heat storage can improve the economic as well as environmental attractiveness of micro-generation systems (e.g., PV or fuel cells with or without CHP) and contribute to enhanced demand response. The interactions among PV, solar thermal, and storage systems can be complex, depending on the tariff structure, load profile, etc. In order to examine the impact of storage technologies on demand response and CO2 emissions, a microgrid's distributed energy resources (DER) adoption problem is formulated as a mixed-integer linear program that can pursue two strategies as its objective function. These two strategies are minimization of its annual energy costs or of its CO2 emissions. The problem is solved for a given test year at representative customer sites, e.g., nursing homes, to obtain not only the optimal investment portfolio, but also the optimal hourly operating schedules for the selected technologies. This paper focuses on analysis of storage technologies in micro-generation optimization on a building level, with example applications in New York State and California. It shows results from a two-year research project performed for the U.S. Department of Energy and ongoing work. Contrary to established expectations, our results indicate that PV and electric storage adoption compete rather than supplement each other under the given tariff structure and costs of electricity supply. The work shows that high electricity tariffs during on-peak hours are a significant driver for the adoption of electric storage technologies. To satisfy the site's objective of minimizing energy costs, the batteries have to be charged with grid power during off-peak hours instead of PV power during on-peak hours. In contrast, we also show a CO2 minimization strategy where the common assumption that batteries can be charged by PV is fulfilled, at extraordinarily high energy costs for the site.

  2. The marriage problem and the fate of bachelors

    NASA Astrophysics Data System (ADS)

    Nieuwenhuizen, Th. M.

    In the marriage problem, a variant of the bipartite matching problem, each member has a “wish-list” expressing his/her preference for all possible partners; this list consists of random, positive real numbers drawn from a certain distribution. One searches for the lowest cost for society, at the risk of breaking up pairs in the course of time. Minimization of a global cost function (Hamiltonian) is performed with statistical mechanics techniques at a finite fictitious temperature. The problem is generalized to include bachelors, needed in particular when the groups have different sizes, and polygamy. Exact solutions are found for the optimal solution (T=0). The entropy is found to vanish quadratically in T. Also, other evidence is found that the replica symmetric solution is exact, implying at most a polynomial degeneracy of the optimal solution. Whether bachelors occur or not depends not only on their intrinsic qualities, or lack thereof, but also on global aspects of the chance for pair formation in society.
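    For tiny groups, the society-level cost minimization reduces to a brute-force search over pairings. A toy sketch (hypothetical wish-list data; the actual analyses use large random instances and statistical mechanics, not enumeration):

```python
from itertools import permutations

def best_matching(wish):
    """wish[i][j] is member i's cost for partner j; return the pairing
    (a permutation: member i is matched to partner p[i]) of lowest total cost."""
    n = len(wish)
    return min(permutations(range(n)),
               key=lambda p: sum(wish[i][p[i]] for i in range(n)))

# Hypothetical 3x3 wish-list costs
pairs = best_matching([[1, 9, 4],
                       [8, 2, 5],
                       [3, 7, 5]])
```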

  3. A novel hybrid genetic algorithm to solve the make-to-order sequence-dependent flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.

    2014-04-01

    The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, no efficient algorithm is known for reaching the optimal solution of the problem. To minimize the holding, delay and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions, and also uses an improved heuristic called the iterated swap procedure to improve the initial solutions. We consider the make-to-order production approach, in which some sequences between jobs are treated as tabu based on a maximum allowable setup cost. In addition, the results are compared to some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to accuracy and efficiency of solution.
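    The swap-improvement idea can be sketched generically (illustrative stand-in: plain makespan is used as the objective here, whereas the paper minimizes holding, delay and sequence-dependent setup costs):

```python
def makespan(seq, proc):
    """Permutation flow-shop makespan; proc[j][k] = time of job j on machine k."""
    m = len(proc[0])
    comp = [0.0] * m                          # completion time on each machine
    for j in seq:
        for k in range(m):
            # a job starts on machine k when both the machine is free and
            # the job has finished on machine k-1
            comp[k] = max(comp[k], comp[k - 1] if k else 0.0) + proc[j][k]
    return comp[-1]

def iterated_swap(seq, proc):
    """Swap adjacent jobs while the objective keeps improving."""
    best = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            cand = best[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if makespan(cand, proc) < makespan(best, proc):
                best, improved = cand, True
    return best
```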

  4. Solar space- and water-heating system at Stanford University. Central Food Services Building

    NASA Astrophysics Data System (ADS)

    1980-05-01

    The closed-loop drain-back system is described as offering dependability of gravity drain-back freeze protection, low maintenance, minimal costs, and simplicity. The system features an 840 square-foot collector and storage capacity of 1550 gallons. The acceptance testing and the predicted system performance data are briefly described. Solar performance calculations were performed using a computer design program (FCHART). Bidding, costs, and economics of the system are reviewed. Problems are discussed and solutions and recommendations given. An operation and maintenance manual is given.

  5. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
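    A one-dimensional toy shows the principle (illustrative parameters; the paper's networks solve two-dimensional optical-flow functionals). Injecting data d_i through conductance lam at each node and tying neighbours with unit conductances, the stationary voltages minimize sum_i lam*(v_i - d_i)^2 + sum_i (v_{i+1} - v_i)^2, and node-by-node relaxation, the discrete analogue of letting the network settle, finds them:

```python
def relax(d, lam=1.0, iters=500):
    """Gauss-Seidel relaxation toward the stationary voltage distribution."""
    v = list(d)
    n = len(v)
    for _ in range(iters):
        for i in range(n):
            nbrs = [v[j] for j in (i - 1, i + 1) if 0 <= j < n]
            # node voltage: conductance-weighted average of data and neighbours
            v[i] = (lam * d[i] + sum(nbrs)) / (lam + len(nbrs))
    return v

v = relax([0.0, 0.0, 10.0, 0.0, 0.0], lam=0.5)  # a single injected "feature"
```

    The smaller lam is, the stronger the smoothing: the injected spike spreads to its neighbours instead of being reproduced exactly.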

  6. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from a monocular video sequence. Firstly, we generate a semi-dense template of the target object with a structure-from-motion method using a subsequence of the video. This video can be captured by a rigidly moving camera observing the static target object or by a static camera observing the rigidly moving target object. Then, with the reference template mesh as input and based on the framework of classical template-based methods, we solve an energy minimization problem to find the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy terms combine photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that enables elastic deformation. In this paper, an easy and controllable solution to generate the semi-dense template for complex objects is presented. Besides, we use an effective iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared against results obtained with other templates as input, reconstructions based on our template are more accurate and detailed in certain regions. The experimental results also show that the linear solver we use is more efficient than a traditional conjugate-gradient-based solver.

  7. Smoothing of cost function leads to faster convergence of neural network learning

    NASA Astrophysics Data System (ADS)

    Xu, Li-Qun; Hall, Trevor J.

    1994-03-01

    One of the major problems in supervised learning of neural networks is the local minima inherent in the cost function f(W, D). This often defeats classic gradient-descent-based learning algorithms, which compute the weight update at each iteration as ΔW(t) = −η∇_W f(W, D). In this paper we describe a new strategy to solve this problem, which adaptively changes the learning rate and manipulates the gradient estimator simultaneously. The idea is to implicitly convert the local-minima-laden cost function f(·) into a sequence of its smoothed versions {f_βt}, t = 1, …, T, which, controlled by the parameter βt, carries little detail at t = 1 and gradually more later on; learning is actually performed on this sequence of functionals. The corresponding smoothed global minima obtained in this way, {Wt}, t = 1, …, T, thus progressively approximate W*, the desired global minimum. Experimental results on a nonconvex function minimization problem and a typical neural network learning task are given, and analyses and discussions of some important issues are provided.
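    The strategy can be sketched with Monte Carlo smoothing (an assumed stand-in for the paper's smoothing scheme, with a toy 1-D cost): descend on f_beta(w) = E[f(w + beta*z)], z ~ N(0, 1), while shrinking beta so that detail returns gradually and early local minima are skipped.

```python
import math, random

def f(w):
    """A 1-D cost with many local minima."""
    return w * w + 2.0 * math.sin(5.0 * w)

def smoothed_grad(w, beta, n=200, h=1e-3):
    """Central difference of the Monte-Carlo-smoothed cost f_beta."""
    rng = random.Random(0)               # fixed sample for a stable estimate
    zs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    g = lambda x: sum(f(x + beta * z) for z in zs) / n
    return (g(w + h) - g(w - h)) / (2.0 * h)

w = 2.5                                  # start far from the global minimum
for beta in (2.0, 1.0, 0.5, 0.1, 0.0):   # coarse-to-fine smoothing schedule
    for _ in range(200):
        w -= 0.02 * smoothed_grad(w, beta)
```

    At large beta the oscillatory term averages out and descent sees only the convex bowl; as beta shrinks, the finer structure reappears and the iterate settles into the (by then nearby) global basin.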

  8. Simulation Model for Scenario Optimization of the Ready-Mix Concrete Delivery Problem

    NASA Astrophysics Data System (ADS)

    Galić, Mario; Kraus, Ivan

    2016-12-01

    This paper introduces a discrete simulation model for solving routing and network material flow problems in construction projects. Before the description of the model, a detailed literature review is provided. The model is verified using a case study solving the ready-mix concrete network flow and routing problem in a metropolitan area in Croatia. Within this study, real-time input parameters were taken into account. The simulation model is built in Enterprise Dynamics simulation software and Microsoft Excel linked with Google Maps. The model is dynamic, easily managed and adjustable, and provides good estimates for minimizing cost and realization time in discrete routing and material network flow problems.

  9. A Modified Artificial Bee Colony Algorithm Application for Economic Environmental Dispatch

    NASA Astrophysics Data System (ADS)

    Tarafdar Hagh, M.; Baghban Orandi, Omid

    2018-03-01

    In conventional fossil-fuel power systems, the economic environmental dispatch (EED) problem optimally determines the output power of the generating units so that total production cost and emission level are minimized simultaneously while all unit and system constraints are satisfied. To solve the EED problem, which is a non-convex optimization problem, a modified artificial bee colony (MABC) algorithm is proposed in this paper. The algorithm, using the weighted-sum method, is applied to two test systems, and the obtained results are compared with other reported results. The comparison clearly confirms the superiority and efficiency of the proposed method.

  10. Distributed genetic algorithms for the floorplan design problem

    NASA Technical Reports Server (NTRS)

    Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.

    1991-01-01

    Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to the traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate the advantages of this method over other methods, such as simulated annealing. The method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.

  11. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The process of distributing goods needs a strategy that minimizes the total cost of operational activities, subject to several constraints: the capacity of the vehicles and the service times of the customers. This Vehicle Routing Problem with Time Windows (VRPTW) is a problem with complex constraints. This paper proposes nature-inspired algorithms for handling the constraints of the VRPTW: a Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing, with the worst Bat Algorithm solution replaced by the Simulated Annealing solution. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved with the Crow Search Algorithm for simpler and faster convergence. Computational results show that these algorithms perform well in finding minimal total distances, and that a larger population gives better computational performance. The improved Cat Swarm Optimization with Crow Search outperforms the hybridization of the Bat Algorithm and Simulated Annealing when dealing with big data.
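    The two constraint families are simple to check for a candidate route. A sketch under common VRPTW conventions (assumed data layout: node 0 is the depot, arriving early means waiting, arriving after the due time is infeasible):

```python
def route_feasible(route, travel, ready, due, service, capacity, demand):
    """Check one vehicle's route (list of customer nodes, depot = node 0)
    against the capacity and time-window constraints."""
    if sum(demand[c] for c in route) > capacity:
        return False
    t, prev = 0.0, 0                             # leave the depot at time 0
    for c in route:
        t = max(t + travel[prev][c], ready[c])   # drive, then wait if early
        if t > due[c]:                           # window violated
            return False
        t += service[c]
        prev = c
    return True

# Hypothetical instance: depot plus two customers
travel = [[0, 5, 9],
          [5, 0, 3],
          [9, 3, 0]]
ready, due = [0, 4, 10], [0, 6, 20]
service, demand = [0, 2, 2], [0, 1, 1]
```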

  12. Federally Sponsored Research at Educational Institutions: A Need for Improved Accountability. Report by the U.S. General Accounting Office.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC.

    This report discusses federally sponsored research at educational institutions and suggests ways to improve accountability for these funds. The following suggestions are made for minimizing problems presented in this report: (1) development of more definitive cost principles for both the institutions and the Federal auditors to follow; (2) more…

  13. General Algebraic Modeling System Tutorial | High-Performance Computing |

    Science.gov Websites

    Here's a basic tutorial for modeling optimization problems with the General Algebraic Modeling System (GAMS). Overview: the GAMS package is essentially a compiler for an algebraic modeling language. The tutorial's example models power generation from two different fuels; the goal is to minimize the cost for one of the fuels.
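
    Since the tutorial's actual model is not reproduced here, a hypothetical two-fuel dispatch LP illustrates the kind of problem GAMS would hand to an LP solver; for two variables the optimum can be found by enumerating constraint intersections. All numbers below are made up:

```python
from itertools import combinations

# Hypothetical dispatch model: minimize 3*x1 + 5*x2 (fuel cost per MWh)
# subject to x1 + x2 >= 200 (demand), x1 <= 150, x2 <= 120 (capacities), x >= 0.
# Each constraint is stored as a*x1 + b*x2 <= c:
constraints = [
    (-1.0, -1.0, -200.0),  # x1 + x2 >= 200
    (1.0, 0.0, 150.0),     # x1 <= 150
    (0.0, 1.0, 120.0),     # x2 <= 120
    (-1.0, 0.0, 0.0),      # x1 >= 0
    (0.0, -1.0, 0.0),      # x2 >= 0
]

def cost(x1, x2):
    return 3.0 * x1 + 5.0 * x2

def feasible(x1, x2, eps=1e-9):
    return all(a * x1 + b * x2 <= c + eps for a, b, c in constraints)

def solve():
    """An optimal vertex of a 2-variable LP lies where two constraints intersect."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel constraint lines, no intersection
        x1 = (c1 * b2 - c2 * b1) / det  # Cramer's rule
        x2 = (a1 * c2 - a2 * c1) / det
        if feasible(x1, x2) and (best is None or cost(x1, x2) < cost(*best)):
            best = (x1, x2)
    return best
```

    Here the cheap fuel runs at full capacity (x1 = 150) and the expensive one covers the remaining demand (x2 = 50); a GAMS model would express the same algebra declaratively and delegate the search to a solver.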

  14. Game Theoretic Approaches to Protect Cyberspace

    DTIC Science & Technology

    2010-04-20

    security problems. 3.1 Definitions Game A description of the strategic interaction between opposing, or co-operating, interests where the con ...that involves probabilistic transitions through several states of the system. The game pro - gresses as a sequence of states. The game begins with a...eventually leads to a discretized model. The reaction functions uniquely minimize the strictly con - vex cost functions. After discretization, this

  15. Capacitated arc routing problem and its extensions in waste collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadzli, Mohammad; Najwa, Nurul; Luis, Martino

    2015-05-15

    The capacitated arc routing problem (CARP) is a relatively young branch of graph-theoretic optimization that focuses on solving edge/arc routing for optimality. For many years, operational research was devoted to CARP's counterpart, the vehicle routing problem (VRP), which does not fit several real cases such as waste collection and road maintenance. In this paper, we highlight several extensions of CARP that represent the real-life problem of vehicle operation in waste collection. By design, CARP finds a set of routes for vehicles that satisfies all preset constraints such that all vehicles start and end at a depot and service a set of demands on edges (or arcs) exactly once without exceeding capacity, while the total fleet cost is minimized. We also address the differences between CARP and VRP in waste collection. Several issues are discussed, including stochastic demands and time windows, to show the complexity and importance of CARP in the related industry. A mathematical model of CARP and a new version of it are presented, considering factors such as delivery cost, lateness penalty and delivery time.
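
    The CARP objective just described — start and end at the depot, service demanded edges without exceeding capacity, minimize traversal cost — reduces per route to simple bookkeeping. The single-route helper and toy instance below are illustrative assumptions, not the paper's model:

```python
def route_cost(route, edge_cost, serviced, demand, capacity, depot="D"):
    """Cost and feasibility bookkeeping for a single CARP route.

    route:    list of directed edges (u, v) traversed in order, depot to depot
    serviced: the subset of traversed edges whose demand this route serves
    Returns the total traversal cost; raises ValueError if capacity is exceeded.
    """
    assert route[0][0] == depot and route[-1][1] == depot, "route must start/end at depot"
    assert serviced <= set(route), "can only service edges that are traversed"
    load = sum(demand[e] for e in serviced)
    if load > capacity:
        raise ValueError(f"load {load} exceeds vehicle capacity {capacity}")
    return sum(edge_cost[e] for e in route)
```

    A full CARP objective would sum this cost over the fleet and add terms such as the lateness penalties mentioned above; the point of the sketch is that deadheading (traversing an edge without servicing it) is allowed, which is what separates CARP from node-based VRP.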

  16. A multi-objective approach to solid waste management.

    PubMed

    Galante, Giacomo; Aiello, Giuseppe; Enea, Mario; Panascia, Enrico

    2010-01-01

    The issue addressed in this paper consists in the localization and dimensioning of transfer stations, which constitute a necessary intermediate level in the logistic chain of the solid waste stream, from municipalities to the incinerator. Contextually, the determination of the number and type of vehicles involved is carried out in an integrated optimization approach. The model considers both initial investment and operative costs related to transportation and transfer stations. Two conflicting objectives are evaluated, the minimization of total cost and the minimization of environmental impact, measured by pollution. The design of the integrated waste management system is hence approached in a multi-objective optimization framework. To determine the best means of compromise, goal programming, weighted sum and fuzzy multi-objective techniques have been employed. The proposed analysis highlights how different attitudes of the decision maker towards the logic and structure of the problem result in the employment of different methodologies and the obtaining of different results. The novel aspect of the paper lies in the proposal of an effective decision support system for operative waste management, rather than a further contribution to the transportation problem. The model was applied to the waste management of optimal territorial ambit (OTA) of Palermo (Italy). 2010 Elsevier Ltd. All rights reserved.

  17. A multi-objective approach to solid waste management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galante, Giacomo, E-mail: galante@dtpm.unipa.i; Aiello, Giuseppe; Enea, Mario

    2010-08-15

    The issue addressed in this paper consists in the localization and dimensioning of transfer stations, which constitute a necessary intermediate level in the logistic chain of the solid waste stream, from municipalities to the incinerator. Contextually, the determination of the number and type of vehicles involved is carried out in an integrated optimization approach. The model considers both initial investment and operative costs related to transportation and transfer stations. Two conflicting objectives are evaluated, the minimization of total cost and the minimization of environmental impact, measured by pollution. The design of the integrated waste management system is hence approached in a multi-objective optimization framework. To determine the best means of compromise, goal programming, weighted sum and fuzzy multi-objective techniques have been employed. The proposed analysis highlights how different attitudes of the decision maker towards the logic and structure of the problem result in the employment of different methodologies and the obtaining of different results. The novel aspect of the paper lies in the proposal of an effective decision support system for operative waste management, rather than a further contribution to the transportation problem. The model was applied to the waste management of optimal territorial ambit (OTA) of Palermo (Italy).

  18. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  19. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707
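
    The optimized gradient method admits a compact implementation. The sketch below transcribes the commonly stated OGM iteration — a gradient step generating the secondary sequence, momentum weights theta_k, and a modified final factor — applied to a toy quadratic. It is our illustration, not the authors' code, and the step details should be checked against the paper:

```python
import numpy as np

def ogm(grad, L, x0, n_iter):
    """Optimized gradient method (sketch) for smooth convex f with L-Lipschitz gradient."""
    x = np.asarray(x0, dtype=float)
    y_prev = x.copy()
    theta = 1.0
    for k in range(n_iter):
        y = x - grad(x) / L  # gradient step: secondary sequence y_{k+1}
        if k < n_iter - 1:
            theta_next = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2
        else:
            theta_next = (1 + np.sqrt(1 + 8 * theta ** 2)) / 2  # modified last step
        # primary sequence: Nesterov-style momentum plus an extra OGM correction term
        x = y + ((theta - 1) / theta_next) * (y - y_prev) + (theta / theta_next) * (y - x)
        y_prev, theta = y, theta_next
    return x
```

    On a strongly convex quadratic the iterates converge well inside the worst-case guarantee L*||x0 - x*||^2 / (2*theta_N^2) on the final function gap.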

  20. Dynamic cellular manufacturing system considering machine failure and workload balance

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad

    2018-02-01

    Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes the labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is initially validated by the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.

  1. Enhanced ant colony optimization for inventory routing problem

    NASA Astrophysics Data System (ADS)

    Wong, Lily; Moin, Noor Hasnah

    2015-10-01

    The inventory routing problem (IRP) integrates and coordinates two important components of supply chain management: transportation and inventory management. We consider a one-to-many IRP network over a finite planning horizon. The demand for each product is deterministic and time-varying, and a fleet of capacitated homogeneous vehicles, housed at a depot/warehouse, delivers the products from the warehouse to meet the demand specified by the customers in each period. The inventory holding cost is product specific and is incurred at the customer sites. The objective is to determine the amount of inventory and to construct a delivery routing that minimizes both the total transportation and inventory holding cost while ensuring each customer's demand is met over the planning horizon. The problem is formulated as a mixed integer programming problem and is solved using CPLEX 12.4 to get the lower and upper bound (best integer) for each instance considered. We propose an enhanced ant colony optimization (ACO) to solve the problem, and the constructed route is improved using local search. Computational experiments demonstrating the effectiveness of our approach are presented.
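
    The core ACO loop for such routing problems combines pheromone-biased customer selection with evaporation and deposit. The sketch below is a standard textbook form with assumed parameters (alpha, beta, rho) and toy data, not the authors' enhanced variant:

```python
import random

def choose_next(current, unvisited, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """Pick the next customer with probability proportional to tau^alpha * eta^beta."""
    weights = [tau[current][j] ** alpha * eta[current][j] ** beta for j in unvisited]
    r = rng.random() * sum(weights)
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]  # numerical fallback

def update_pheromone(tau, routes_with_costs, rho=0.1):
    """Evaporate all trails, then deposit 1/cost on every arc used by each ant."""
    for i in tau:
        for j in tau[i]:
            tau[i][j] *= (1.0 - rho)
    for route, cost in routes_with_costs:
        for a, b in zip(route, route[1:]):
            tau[a][b] += 1.0 / cost
    return tau
```

    Here `tau` is the pheromone trail and `eta` the heuristic desirability (typically the inverse distance); cheaper routes deposit more pheromone, biasing later ants toward their arcs.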

  2. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
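
    For the quadratic (discontinuity-free) subset of these problems, the analog network's dynamics amount to gradient descent on the energy. A 1-D sketch with an assumed data weight and smoothness term (not the paper's circuit model) reconstructs a profile from sparse samples:

```python
import numpy as np

def reconstruct(data, known, lam=1.0, n_iter=2000, step=0.1):
    """Gradient descent on E(f) = sum_i known_i*(f_i - data_i)^2 + lam*sum_i (f_{i+1} - f_i)^2.

    `known` is a 0/1 mask marking where sparse samples exist; the quadratic
    smoothness term plays the role of the membrane energy in the analog network.
    """
    f = np.where(known > 0, data, 0.0).astype(float)
    for _ in range(n_iter):
        grad = 2.0 * known * (f - data)            # data-fidelity term
        grad[:-1] += 2.0 * lam * (f[:-1] - f[1:])  # smoothness, left endpoint of each pair
        grad[1:] += 2.0 * lam * (f[1:] - f[:-1])   # smoothness, right endpoint
        f -= step * grad
    return f
```

    At a minimum, each unobserved node satisfies f_i = (f_{i-1} + f_{i+1})/2, i.e. the interior relaxes to a smooth interpolant; handling discontinuities, as the paper does, requires replacing this quadratic energy with a nonconvex one.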

  3. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1987-01-01

    Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in case the cost integral ranges over a finite time interval as well as in the case it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.

  4. Fuzzy Multi-Objective Transportation Planning with Modified S-Curve Membership Function

    NASA Astrophysics Data System (ADS)

    Peidro, D.; Vasant, P.

    2009-08-01

    In this paper, the S-Curve membership function methodology is used in a transportation planning decision (TPD) problem. An interactive method for solving multi-objective TPD problems with fuzzy goals, available supply and forecast demand is developed. The proposed method attempts simultaneously to minimize the total production and transportation costs and the total delivery time with reference to budget constraints and available supply, machine capacities at each source, as well as forecast demand and warehouse space constraints at each destination. We compare in an industrial case the performance of S-curve membership functions, representing uncertainty goals and constraints in TPD problems, with linear membership functions.
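
    A modified S-curve membership function of the kind used in such fuzzy TPD models is a logistic restricted to a normalized interval. The constants below reflect a commonly cited calibration with mu(0) = 0.999 and mu(1) = 0.001; treat them as an assumption rather than the paper's exact parameters:

```python
import math

# Calibrated so that mu(0) = 0.999 and mu(1) = 0.001 on the normalized interval [0, 1]
B = 1.0
C = 0.001001001
ALPHA = 13.813

def mu(x):
    """Degree of satisfaction for a normalized fuzzy quantity x in [0, 1]."""
    if x <= 0.0:
        return 0.999
    if x >= 1.0:
        return 0.001
    return B / (1.0 + C * math.exp(ALPHA * x))

def normalize(value, lo, hi):
    """Map a raw quantity (e.g., a cost between its best and worst bounds) to [0, 1]."""
    return (value - lo) / (hi - lo)
```

    Unlike a linear membership function, the S-curve changes satisfaction slowly near the bounds and rapidly in the middle, which is the modeling flexibility the comparison in the paper is about.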

  5. Design of optimally normal minimum gain controllers by continuation method

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Juang, J.-N.; Kim, Z. C.

    1989-01-01

    A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm imbedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.

  6. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Data accessibility and transfer of information, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts aim to reduce the difficulty of finding and using data, reduce processing costs, and minimize incompatibility between data sources. Artificial intelligence (AI) techniques were suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed data bases.

  7. Overuse of helicopter transport in the minimally injured: A health care system problem that should be corrected.

    PubMed

    Vercruysse, Gary A; Friese, Randall S; Khalil, Mazhar; Ibrahim-Zada, Irada; Zangbar, Bardiya; Hashmi, Ammar; Tang, Andrew; O'Keeffe, Terrence; Kulvatunyou, Narong; Green, Donald J; Gries, Lynn; Joseph, Bellal; Rhee, Peter M

    2015-03-01

    Mortality benefit has been demonstrated for trauma patients transported via helicopter but at great cost. This study identified patients who did not benefit from helicopter transport to our facility and demonstrates potential cost savings had they been transported by ground instead. We performed a 6-year (2007-2013) retrospective analysis of all trauma patients presenting to our center. Patients with a known mode of transfer were included in the study. Patients with missing data and those who were dead on arrival were excluded from the study. Patients were then dichotomized into helicopter transfer and ground transfer groups. A subanalysis was performed between minimally injured patients (ISS < 5) in both the groups after propensity score matching for demographics, injury severity parameters, and admission vital parameters. Groups were then compared for hospital and emergency department length of stay, early discharge, and mortality. Of 5,202 transferred patients, 18.9% (981) were transferred via helicopter and 76.7% (3,992) were transferred via ground transport. Helicopter-transferred patients had longer hospital (p = 0.001) and intensive care unit (p = 0.001) stays. There was no difference in mortality between the groups (p = 0.6). On subanalysis of minimally injured patients, there was no difference in hospital length of stay (p = 0.1) or early discharge (p = 0.6) between the helicopter transfer and ground transfer groups. Average helicopter transfer cost at our center was $18,000, totaling $4,860,000 for 270 minimally injured helicopter-transferred patients. Nearly one third of patients transported by helicopter were minimally injured. Policies to identify patients who do not benefit from helicopter transport should be developed. Significant reduction in transport cost can be made by judicious selection of patients. Educating physicians who call for transport, and identifying alternate means of transportation, would be both safe and financially beneficial to our system. Epidemiologic study, level III. Therapeutic study, level IV.

  8. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  9. The aerodynamic challenges of SRB recovery

    NASA Technical Reports Server (NTRS)

    Bacchus, D. L.; Kross, D. A.; Moog, R. D.

    1985-01-01

    Recovery and reuse of the Space Shuttle solid rocket boosters was baselined to support the primary goal to develop a low cost space transportation system. The recovery system required for the 170,000-lb boosters was for the largest and heaviest object yet to be retrieved from exoatmospheric conditions. State-of-the-art design procedures were ground-ruled and development testing minimized to produce both a reliable and cost effective system. The ability to utilize the inherent drag of the boosters during the initial phase of reentry was a key factor in minimizing the parachute loads, size and weight. A wind tunnel test program was devised to enable the accurate prediction of booster aerodynamic characteristics. Concurrently, wind tunnel, rocket sled and air drop tests were performed to develop and verify the performance of the parachute decelerator subsystem. Aerodynamic problems encountered during the overall recovery system development and the respective solutions are emphasized.

  10. An integer batch scheduling model considering learning, forgetting, and deterioration effects for a single machine to minimize total inventory holding cost

    NASA Astrophysics Data System (ADS)

    Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.

    2018-03-01

    This research deals with a single machine batch scheduling model considering the influence of learning, forgetting, and machine deterioration effects. The objective of the model is to minimize total inventory holding cost, and the decision variables are the number of batches (N), the batch sizes (Q[i], i = 1, 2, ..., N), and the sequence in which the resulting batches are processed. The parts to be processed are received at the right times and in the right quantities, and all completed parts must be delivered at a common due date. We propose a heuristic procedure based on the Lagrange method to solve the problem. The effectiveness of the procedure is evaluated by comparing the resulting solution to the optimal solution obtained from an enumeration procedure using the integer composition technique; the average effectiveness is 94%.

  11. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.

  12. Jerk Minimization Method for Vibration Control in Buildings

    NASA Technical Reports Server (NTRS)

    Abatan, Ayo O.; Yao, Leummim

    1997-01-01

    In many vibration minimization control problems for high rise buildings subject to strong earthquake loads, the emphasis has been on a combination of minimizing the displacement, the velocity and the acceleration of the motion of the building. In most cases, the accelerations that are involved are not necessarily large, but the changes in them (jerk) are abrupt. These changes in magnitude or direction are responsible for most building damage and also create discomfort, such as motion sickness, for inhabitants of these structures because of the element of surprise. We propose a method that also minimizes the jerk, the sudden change in acceleration (i.e., the derivative of the acceleration), using classical linear quadratic optimal control. This was done through the introduction of a quadratic performance index involving the cost due to the jerk, a special change of variables, and the use of the jerk as a control variable. The values of the optimal control are obtained using the Riccati equation.

  13. Reducing fatigue damage for ships in transit through structured decision making

    USGS Publications Warehouse

    Nichols, J.M.; Fackler, P.L.; Pacifici, K.; Murphy, K.D.; Nichols, J.D.

    2014-01-01

    Research in structural monitoring has focused primarily on drawing inference about the health of a structure from the structure’s response to ambient or applied excitation. Knowledge of the current state can then be used to predict structural integrity at a future time and, in principle, allows one to take action to improve safety, minimize ownership costs, and/or increase the operating envelope. While much time and effort has been devoted toward data collection and system identification, research to-date has largely avoided the question of how to choose an optimal maintenance plan. This work describes a structured decision making (SDM) process for taking available information (loading data, model output, etc.) and producing a plan of action for maintaining the structure. SDM allows the practitioner to specify his/her objectives and then solves for the decision that is optimal in the sense that it maximizes those objectives. To demonstrate, we consider the problem of a Naval vessel transiting a fixed distance in varying sea-state conditions. The physics of this problem are such that minimizing transit time increases the probability of fatigue failure in the structural supports. It is shown how SDM produces the optimal trip plan in the sense that it minimizes both transit time and probability of failure in the manner of our choosing (i.e., through a user-defined cost function). The example illustrates the benefit of SDM over heuristic approaches to maintaining the vessel.

  14. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse characteristic of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, involving small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single or consecutive impact force reconstruction.
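
    PDIPM itself is involved; to make the l1-regularized deconvolution model concrete, the sketch below uses the simpler proximal-gradient method (ISTA) on min 0.5*||Ax - b||^2 + lam*||x||_1. This is a different solver than the paper's, shown only to illustrate how the l1 penalty produces sparse force estimates:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink toward zero, zeroing small entries."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```

    With A a transfer (convolution) matrix and b the measured response, the soft-thresholding step drives most entries of the reconstructed force history exactly to zero, matching the sparse, spike-like character of impacts.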

  15. Photovoltaic design optimization for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1978-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.

  16. A novel heuristic for optimization aggregate production problem: Evidence from flat panel display in Malaysia

    NASA Astrophysics Data System (ADS)

    Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.

    2015-05-01

    Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, a linear programming model was applied. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs, and adjustments for changes in labour levels. The model was developed for a manufacturer with up to N product types over a total time horizon of T periods. Results: An industrial case study from Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The developed model is suitable under stable environmental conditions. Overall, the proven linear programming model can be recommended for production planning in the Malaysian flat panel display industry.

  17. A study on macroeconomic cost of CCS in Korea

    NASA Astrophysics Data System (ADS)

    Kim, Ji-Whan; Kim, Yoon Kyung

    2015-04-01

    CCS is an important measure for mitigating the problem of global climate change, and several projects have already entered the commercialization stage. The benefits of CCS implementation ultimately depend on the level of CO2 abatement achieved on earth, since the benefits stem from mitigating climate change. Thus the start of CCS deployment and the arrival of its benefits need not coincide. Given the high costs of CCS, the time mismatch between incurring the costs and receiving the benefits can impose a heavy burden on an individual national economy. For this reason, policy makers should consider the macroeconomic effects in political decision-making. Meanwhile, the supply side of the Korean electricity market comprises competitive generation and a sole distributor (a public enterprise), and electricity is supplied under a single price structure (administered pricing). Under these conditions, if CCS is introduced to the power sector, electric charges must rise and production costs will increase. Higher production costs will have unfavourable effects on disposable income, the price level, purchasing power, and so on. In order to minimize these effects, policy makers have to consider the economic effects of introducing CCS. This study estimates the microscopic cost of CCS using the ICCSEM 2.0 methodology developed by CO2CRC, and then estimates the macroeconomic effects of introducing CCS on the basis of the microscopic cost estimates. The macroeconomic effects of CCS applied to the power generation sector are estimated using a macroeconometric model and input-output analysis. A macroeconometric model is an analytical tool designed to describe the operation of the national economy; it is usually applied to examine the dynamics of aggregate quantities such as the total amount of goods and services produced, total income earned, the level of employment of productive resources, and the level of prices. Incorporating the input-output relationships of Korean industries, the macroeconometric model can show the response caused by the CCS cost as a supply and demand shock. This study is intended to provide basic information for making reasonable policies that minimize the economic costs of introducing CCS.

  18. Optimal control of vancomycin-resistant enterococci using preventive care and treatment of infections.

    PubMed

    Lowden, Jonathan; Miller Neilan, Rachael; Yahdi, Mohammed

    2014-03-01

    The rising prevalence of vancomycin-resistant enterococci (VRE) is a major health problem in intensive care units (ICU) because of its association with increased mortality and high health care costs. We present a mathematical framework for determining cost-effective strategies for prevention and treatment of VRE in the ICU. A system of five ordinary differential equations describes the movement of ICU patients in and out of five VRE-related states. Two control variables representing the prevention and treatment of VRE are incorporated into the system. The basic reproductive number is derived and calculated for different levels of the two controls. An optimal control problem is formulated to minimize VRE-related deaths and costs associated with prevention and treatment controls over a finite time period. Numerical solutions illustrate optimal single and dual allocations of the controls for various cost values. Results show that preventive care has the greatest impact in reducing the basic reproductive number, while treatment of VRE infections has the most impact on reducing VRE-related deaths. Copyright © 2014 Elsevier Inc. All rights reserved.
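
    The kind of controlled-epidemic model described above can be illustrated with a toy sketch. The snippet below simulates a simplified two-compartment (colonized/infected) system, not the authors' five-state ICU model, with constant prevention and treatment controls standing in for the paper's time-varying optimal controls; all rate parameters are hypothetical.

```python
# Hedged sketch: a simplified susceptible/colonized/infected model, NOT the
# authors' five-state system. Parameters are hypothetical; constant controls
# u1 (prevention) and u2 (treatment) stand in for the optimal controls.

def simulate(u1, u2, days=60.0, dt=0.01):
    beta, alpha, gamma, mu = 0.5, 0.1, 0.2, 0.05  # hypothetical rates
    S, C, I = 0.9, 0.08, 0.02                      # initial fractions
    for _ in range(int(days / dt)):                # forward Euler integration
        infection = beta * (1.0 - u1) * S * (C + I)  # prevention cuts transmission
        dS = -infection + mu * C + (gamma + u2) * I
        dC = infection - (alpha + mu) * C
        dI = alpha * C - (gamma + u2) * I            # treatment speeds clearance
        S += dt * dS
        C += dt * dC
        I += dt * dI
    return I

no_control = simulate(0.0, 0.0)
with_control = simulate(0.5, 0.3)
print(no_control, with_control)
```

    With these illustrative values, applying both controls drives the long-run infected fraction well below the uncontrolled level, mirroring the qualitative conclusion of the paper.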

  19. Measuring and minimizing the social cost of environmental pollution

    NASA Technical Reports Server (NTRS)

    Henry, H. W.

    1973-01-01

    The various impacts of the environmental protection movement on the largest corporations in several industries with the most serious pollution problems are discussed. The purpose was to examine these impacts from the point of view of top corporate managers so that a broader perspective could be provided for all concerned parties: citizens, environmentalists, legislators, governmental administrators and agency personnel, scientists, engineers, and other industrial managers.

  20. Experiments to Generate New Data about School Choice: Commentary on "Defining Continuous Improvement and Cost Minimization Possibilities through School Choice Experiments" and Merrifield's Reply

    ERIC Educational Resources Information Center

    Berg, Nathan; Merrifield, John

    2009-01-01

    Benefiting from new data provided by experimental economists, behavioral economics is now moving beyond empirical tests of standard behavioral assumptions to the problem of designing improved institutions that are tuned to fit real-world behavior. It is therefore worthwhile to consider the potential for new experiments to advance school choice…

  1. Experience reveals ways to minimize failures in rod-pumped wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patterson, J.C.; Bucaram, S.M.; Curfew, J.V.

    From the experience gained over the past 25 years, ARCO Oil and Gas Co. has developed recommendations to reduce equipment failure in sucker-rod pumping installations. These recommendations include equipment selection and design, operating procedures, and chemical treatment. Equipment failure and its attendant costs are extremely important in today's petroleum industry. Because rod pumping is the predominant means of artificial lift, minimizing equipment failure in rod-pumped wells can have a significant impact on profitability. This compilation of recommendations comes from field locations throughout the US and other countries. The goal is to address and solve problems on a well-by-well basis.

  2. Bio-inspired secure data mules for medical sensor network

    NASA Astrophysics Data System (ADS)

    Muraleedharan, Rajani; Gao, Weihua; Osadciw, Lisa A.

    2010-04-01

    Medical sensor networks consist of heterogeneous nodes (wireless, mobile, and wired) with varied functionality. The resources at each sensor must be used minimally while sensitive information is sensed and communicated to its access points using secure data mules. In this paper, we analyze a flat architecture in which information with different functionality and priority requires varied resources, which forms an NP-hard problem. Hence, a bio-inspired data mule is applied to obtain a dynamic multi-objective solution with minimal resource use and a secure path. The performance of the proposed approach is evaluated in terms of reduced latency, data delivery rate, and resource cost.

  3. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method incurs too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
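
    The abstract does not give the ILP itself, but the flavor of the selection subproblem can be sketched: once flow routes are fixed, choosing polling switches reduces to a weighted set cover, picking switches so every flow crosses at least one polled switch at minimum total polling cost. The instance below is made up and is solved by exhaustive search, which is only feasible for tiny networks.

```python
from itertools import combinations

# Hedged toy sketch: with routing fixed, polling-switch selection becomes a
# weighted set cover. The paper's joint routing/selection ILP is richer; this
# only illustrates the selection subproblem on a made-up instance.
flows = {"f1": {"s1", "s2"}, "f2": {"s2", "s3"}, "f3": {"s3", "s4"}}
cost = {"s1": 3, "s2": 2, "s3": 2, "s4": 3}

def best_polling_set(flows, cost):
    switches = sorted(cost)
    best_total, best_set = float("inf"), None
    for r in range(1, len(switches) + 1):
        for subset in combinations(switches, r):
            chosen = set(subset)
            # every flow's route must intersect the polled set
            if all(route & chosen for route in flows.values()):
                total = sum(cost[s] for s in chosen)
                if total < best_total:
                    best_total, best_set = total, chosen
    return best_total, best_set

total, chosen = best_polling_set(flows, cost)
print(total, sorted(chosen))  # polling s2 and s3 covers all flows at cost 4
```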

  4. Method for protein structure alignment

    DOEpatents

    Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus

    2005-02-22

    This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for identification, classification and prediction of protein structures. The present invention involves two key ingredients. First, an energy or cost function formulation of the problem simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates. Second, a minimization of the energy or cost function by an iterative method, where in each iteration (1) a mean field method is employed for the assignment variables and (2) exact rotation and/or translation of atomic coordinates is performed, weighted with the corresponding assignment variables.

  5. Solving a Class of Spatial Reasoning Problems: Minimal-Cost Path Planning in the Cartesian Plane.

    DTIC Science & Technology

    1987-06-01

    … as in Figure 72. By the Theorem of Pythagoras, the cost of going along (a, b, c) is greater than … [intervening text garbled in the source]. The extension of the preceding lemmas to an indefinite number of boundary-crossing episodes is accomplished by the following theorems. Theorem 1 extends the result of Lemma 1. Theorem 1: Any two Snell's-law paths within a K-explored wedge defined by Snell's-law paths RL and R do not intersect within the K-explored portion of …

  6. Economic environmental dispatch using BSA algorithm

    NASA Astrophysics Data System (ADS)

    Jihane, Kartite; Mohamed, Cherkaoui

    2018-05-01

    The economic environmental dispatch (EED) problem is an important issue, especially in the field of fossil-fuel power plant systems. It allows the network manager to choose, among different units, the most optimized in terms of fuel cost and emission level. The objective of this paper is to minimize the fuel cost subject to emission constraints; the test is conducted for two cases, a six-generator unit and a ten-generator unit, for the same power demand of 1200 MW. The simulation was computed in MATLAB and the results show the robustness of the Backtracking Search optimization Algorithm (BSA) and the impact of the load demand on the emissions.
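
    The BSA metaheuristic itself is not detailed in the abstract, but the underlying fuel-cost minimization can be illustrated with the classical equal-incremental-cost (lambda-iteration) baseline: for quadratic fuel costs and no emission term or unit limits, the optimum is reached when all units run at the same marginal cost. The three-unit coefficients below are hypothetical; only the 1200 MW demand comes from the paper.

```python
# Hedged sketch: lambda-iteration economic dispatch for quadratic fuel costs
# C_i(P) = a_i + b_i*P + c_i*P^2, ignoring emissions and unit limits.
# Coefficients are illustrative; the paper itself uses BSA, a metaheuristic.
units = [  # (a, b, c) hypothetical cost coefficients
    (100.0, 2.0, 0.008),
    (120.0, 1.5, 0.010),
    (150.0, 1.8, 0.012),
]
demand = 1200.0  # MW

def dispatch(units, demand, tol=1e-9):
    lo, hi = 0.0, 100.0
    while hi - lo > tol:                    # bisection on incremental cost
        lam = 0.5 * (lo + hi)
        # at equal incremental cost, P_i = (lam - b_i) / (2 c_i)
        total = sum((lam - b) / (2 * c) for _, b, c in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return [(lam - b) / (2 * c) for _, b, c in units]

P = dispatch(units, demand)
print([round(p, 1) for p in P], round(sum(P), 1))  # outputs sum to 1200 MW
```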

  7. Modelling DC responses of 3D complex fracture networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  8. Modelling DC responses of 3D complex fracture networks

    DOE PAGES

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    2018-03-01

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  9. A case study of cost-efficient staffing under annualized hours.

    PubMed

    van der Veen, Egbert; Hans, Erwin W; Veltman, Bart; Berrevoets, Leo M; Berden, Hubert J J M

    2015-09-01

    We propose a mathematical programming formulation that incorporates annualized hours and proves to be very flexible with regard to modeling various contract types. The objective of our model is to minimize salary costs while covering workforce demand using annualized hours. Our model is able to address various business questions regarding tactical workforce planning problems, e.g., with regard to annualized hours, subcontracting, and vacation planning. In a case study for a Dutch hospital two of these business questions are addressed, and we demonstrate that applying annualized hours potentially saves up to 5.2% in personnel wages annually.

  10. Minimization of annotation work: diagnosis of mammographic masses via active learning

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-06-01

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of the labeling costs on the premise of guaranteed performance. Our proposed method is different from the existing active learning methods designed for the general problem as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies, but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate it as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the digital database for screening mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression task respectively fell to 33.8 and 19.8 percent of their original costs. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to the other state-of-the-art active learning algorithms. 
By taking into account the particularities of mammographic images, the proposed AL method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.
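
    The core active-learning idea can be sketched with the simplest query criterion, plain uncertainty sampling: label the image whose predicted probability is closest to the decision boundary. The paper's actual criteria are richer (paired mediolateral/craniocaudal views, ordinal regression, rank aggregation); the scores below are made-up malignancy probabilities.

```python
# Hedged sketch of plain uncertainty sampling, the simplest active-learning
# query criterion; the paper's paired-view and rank-aggregation criteria are
# more elaborate. The probability scores here are invented for illustration.
unlabeled = {
    "img_01": 0.93,  # model fairly sure: malignant
    "img_02": 0.48,  # model uncertain -> most informative to label next
    "img_03": 0.07,  # model fairly sure: benign
    "img_04": 0.61,
}

def query_most_uncertain(scores):
    # query the sample whose probability is closest to the 0.5 boundary
    return min(scores, key=lambda k: abs(scores[k] - 0.5))

print(query_most_uncertain(unlabeled))  # img_02
```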

  11. Minimization of annotation work: diagnosis of mammographic masses via active learning.

    PubMed

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-05-22

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of the labeling costs on the premise of guaranteed performance. Our proposed method is different from the existing active learning methods designed for the general problem as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies, but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate it as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the digital database for screening mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression task respectively fell to 33.8 and 19.8 percent of their original costs. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to the other state-of-the-art active learning algorithms. 
By taking into account the particularities of mammographic images, the proposed AL method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.

  12. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
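
    The truncated-distribution idea can be sketched numerically: model transit time as a normal distribution truncated below at a physical minimum, and scan for the smallest timetable allowance that meets a punctuality target. The parameters (mean 30 min, sigma 5 min, 20 min minimum, 95% target) are illustrative, not from the paper.

```python
import math

# Hedged sketch: transit time T ~ Normal(mu, sigma) truncated to [lower, inf),
# as in the paper's framework; find the smallest allowance meeting a
# punctuality target. All numbers are illustrative.

def phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def punctuality(allowance, mu=30.0, sigma=5.0, lower=20.0):
    # P(T <= allowance) for the truncated normal
    z0 = phi((lower - mu) / sigma)
    return (phi((allowance - mu) / sigma) - z0) / (1.0 - z0)

def min_allowance(target=0.95):
    t = 20.0
    while punctuality(t) < target:  # coarse 0.1-minute scan; refine as needed
        t += 0.1
    return t

t = min_allowance()
print(round(t, 1), round(punctuality(t), 3))
```

    In a full treatment one would trade this allowance off against the operator's cost-benefit function, as the paper does, rather than fixing the punctuality level in advance.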

  13. Eighth-order explicit two-step hybrid methods with symmetric nodes and weights for solving orbital and oscillatory IVPs

    NASA Astrophysics Data System (ADS)

    Franco, J. M.; Rández, L.

    The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (it is one of the lowest costs that we know in the literature) and they are optimal among the TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) in order to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.
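
    The eighth-order schemes themselves are not given in the abstract, but the two-step hybrid family can be illustrated with its classical fourth-order ancestor, the Numerov method, on the standard oscillatory test problem y'' = -y, y(0) = 1, y'(0) = 0, whose exact solution is cos(x).

```python
import math

# Hedged sketch: the paper's eighth-order TSH schemes are not reproduced here;
# this shows the classical fourth-order Numerov two-step method on the test
# problem y'' = -y with exact solution cos(x).

def numerov(n_steps, x_end=1.0):
    h = x_end / n_steps
    y_prev, y = 1.0, math.cos(h)        # seed the two-step method exactly
    for _ in range(n_steps - 1):
        # Numerov for y'' = -y:
        # (1 + h^2/12) y_{n+1} = (2 - 10 h^2/12) y_n - (1 + h^2/12) y_{n-1}
        y_next = ((2 - 10 * h * h / 12) * y - (1 + h * h / 12) * y_prev) \
                 / (1 + h * h / 12)
        y_prev, y = y, y_next
    return y

approx = numerov(100)
print(abs(approx - math.cos(1.0)))  # fourth-order accurate: very small error
```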

  14. Dynamics of the line-start reluctance motor with rotor made of SMC material

    NASA Astrophysics Data System (ADS)

    Smółka, Krzysztof; Gmyrek, Zbigniew

    2017-12-01

    Designing and controlling electric motors so as to ensure the expected motor dynamics are problems that have been studied for many years. Many researchers have tried to solve them, for example through design optimization or through special control algorithms in electronic systems. In the case of low-power and fractional-power motors, the manufacturing cost of the final product is many times less than the cost of the electronic system powering it. The authors of this paper attempt to improve the dynamics of a 120 W line-start synchronous reluctance motor energized by the 50 Hz mains (without any electronic systems). The authors seek a way to improve the dynamics of the analyzed motor by changing the shape and material of the rotor, so as to minimize the modification cost of the tools necessary for motor production. After an initial selection, four rotors with different tooth shapes were analyzed.

  15. Time Dependent Heterogeneous Vehicle Routing Problem for Catering Service Delivery Problem

    NASA Astrophysics Data System (ADS)

    Azis, Zainal; Mawengkang, Herman

    2017-09-01

    The heterogeneous vehicle routing problem (HVRP) is a variant of the vehicle routing problem (VRP) in which various types of vehicles with different capacities serve a set of customers with known geographical locations. This paper considers the optimal delivery of meals by a catering company located in Medan City, Indonesia. Due to road conditions and traffic, the company needs to use different types of vehicles to fulfill customer demand on time. The HVRP incorporates the dependence of travel times on the particular time of day. The objective is to minimize the sum of travel costs and elapsed time over the planning horizon. The problem can be modeled as a linear mixed integer program, and we apply a feasible neighbourhood search approach to solve it.
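
    The neighbourhood-search idea can be sketched in miniature: build a route greedily and then improve it with 2-opt moves, the basic neighbourhood used in such approaches. This ignores the paper's heterogeneous fleet and time-dependent travel times; the customer coordinates are invented.

```python
import math

# Hedged sketch: a single-route simplification of the HVRP (no fleet mix, no
# time dependence) on made-up coordinates: nearest-neighbour construction
# followed by 2-opt neighbourhood search.
pts = [(0, 0), (2, 6), (5, 1), (6, 5), (1, 3), (4, 4)]  # index 0 is the depot

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(route):
    return sum(dist(pts[route[i]], pts[route[i + 1]])
               for i in range(len(route) - 1))

def nearest_neighbour():
    route, rest = [0], set(range(1, len(pts)))
    while rest:
        nxt = min(rest, key=lambda j: dist(pts[route[-1]], pts[j]))
        route.append(nxt)
        rest.remove(nxt)
    return route + [0]                      # return to the depot

def two_opt(route):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if route_cost(cand) < route_cost(route) - 1e-12:
                    route, improved = cand, True
    return route

nn = nearest_neighbour()
best = two_opt(nn)
print(round(route_cost(nn), 2), round(route_cost(best), 2))
```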

  16. Determination of optimum allocation and pricing of distributed generation using genetic algorithm methodology

    NASA Astrophysics Data System (ADS)

    Mwakabuta, Ndaga Stanslaus

    Electric power distribution systems play a significant role in providing continuous and "quality" electrical energy to different classes of customers. In the context of the present restrictions on transmission system expansions and the new paradigm of "open and shared" infrastructure, new approaches to distribution system analyses and to economic and operational decision-making need investigation. This dissertation includes three layers of distribution system investigations. At the basic level, improved linear models are shown to offer significant advantages over previous models for advanced analysis. At the intermediate level, the improved model is applied to solve the traditional problem of operating cost minimization using capacitors and voltage regulators. At the advanced level, an artificial intelligence technique is applied to minimize cost under Distributed Generation injection from private vendors. Soft computing techniques are finding increasing applications in solving optimization problems in large and complex practical systems. The dissertation focuses on the Genetic Algorithm for investigating the economic aspects of distributed generation penetration without compromising the operational security of the distribution system. The work presents a methodology for determining the optimal pricing of distributed generation that would help utilities decide how to operate their systems economically. This would enable modular and flexible investments that have real benefits to the electric distribution system. Improved reliability for both customers and the distribution system in general, reduced environmental impacts, increased efficiency of energy use, and reduced costs of energy services are some of these advantages.
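
    The genetic-algorithm machinery itself (selection, crossover, mutation) can be shown on a deliberately trivial stand-in objective. The cost function below is invented; the dissertation's actual objective is a full distribution-system cost model.

```python
import random

# Hedged sketch: a minimal real-coded genetic algorithm minimizing a made-up
# convex cost f(x) = (x - 3)^2 + 1 on [0, 10]. This only illustrates the
# selection / crossover / mutation loop, not the dissertation's model.
random.seed(1)

def f(x):
    return (x - 3.0) ** 2 + 1.0

def ga(pop_size=30, generations=60):
    pop = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)               # arithmetic crossover
            child += random.gauss(0.0, 0.3)     # Gaussian mutation
            children.append(min(10.0, max(0.0, child)))
        pop = parents + children
    return min(pop, key=f)

best = ga()
print(round(best, 2), round(f(best), 3))  # converges near the optimum x = 3
```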

  17. Sociodemographic disparities associated with perceived causes of unmet need for mental health care.

    PubMed

    Alang, Sirry M

    2015-12-01

    Mental disorders are among the leading causes of disability in the United States. In 2011, over 10 million adults felt that even though they needed treatment for mental health problems, they received insufficient or no mental health care and reported unmet need. This article assesses associations between sociodemographic characteristics and perceived causes of unmet needs for mental health care. A sample of 2,564 adults with unmet mental health need was obtained from the National Survey on Drug Use and Health. Outcome variables were 5 main reasons for unmet need: cost, stigma, minimization, low perceived treatment effectiveness, and structural barriers. Each cause of unmet need was regressed on sociodemographic, health, and service use characteristics. Women had higher odds of cost-related reasons for unmet need than men. Odds of stigma and structural barriers were greater among Blacks than Whites, and among rural than metropolitan residents. Compared with the uninsured, insured persons were less likely to report cost barriers. However, insured persons had higher odds of stigma and minimization of mental disorders. Insurance alone is unlikely to resolve the problem of unmet need. Understanding the social epidemiology of perceived unmet need will help identify populations at risk of not receiving mental health care or insufficient care. Focusing on specific programs and services that are designed to address the causes of perceived unmet need in particular populations is important. Future research should explore how intersecting social statuses affect the likelihood of perceived unmet need. (c) 2015 APA, all rights reserved.

  18. A deterministic aggregate production planning model considering quality of products

    NASA Astrophysics Data System (ADS)

    Madadi, Najmeh; Yew Wong, Kuan

    2013-06-01

    Aggregate Production Planning (APP) is a medium-term planning activity concerned with the lowest-cost method of production planning to meet customer requirements and to satisfy fluctuating demand over a planning time horizon. The APP problem has been studied widely since it was introduced and formulated in the 1950s. However, in most studies conducted in the APP area, researchers have concentrated on common objectives such as minimization of cost, fluctuation in the number of workers, and inventory level. Specifically, maintaining quality at a desirable level as an objective while minimizing cost has not been considered in previous studies. In this study, an attempt has been made to develop a multi-objective mixed integer linear programming model that serves companies aiming to incur the minimum level of operational cost while maintaining quality at an acceptable level. In order to solve the multi-objective model, the Fuzzy Goal Programming approach and the max-min operator of Bellman-Zadeh were applied. At the final step, IBM ILOG CPLEX Optimization Studio software was used to obtain the experimental results based on data collected from an automotive parts manufacturing company. The results show that incorporating quality in the model imposes some costs; however, a trade-off should be made between the cost of producing products with higher quality and the cost the firm may incur due to customer dissatisfaction and lost sales.

  19. Scheduling and control strategies for the departure problem in air traffic control

    NASA Astrophysics Data System (ADS)

    Bolender, Michael Alan

    Two problems relating to the departure problem in air traffic control automation are examined. The first problem that is addressed is the scheduling of aircraft for departure. The departure operations at a major US hub airport are analyzed, and a discrete event simulation of the departure operations is constructed. Specifically, the case where there is a single departure runway is considered. The runway is fed by two queues of aircraft. Each queue, in turn, is fed by a single taxiway. Two salient areas regarding scheduling are addressed. The first is the construction of optimal departure sequences for the aircraft that are queued. Several greedy search algorithms are designed to minimize the total time to depart a set of queued aircraft. Each algorithm has a different set of heuristic rules to resolve situations within the search space whenever two branches of the search tree with equal edge costs are encountered. These algorithms are then compared and contrasted with a genetic search algorithm in order to assess the performance of the heuristics. This is done in the context of a static departure problem where the length of the departure queue is fixed. A greedy algorithm which deepens the search whenever two branches of the search tree with non-unique costs are encountered is shown to outperform the other heuristic algorithms. This search strategy is then implemented in the discrete event simulation. A baseline performance level is established, and a sensitivity analysis is performed by implementing changes in traffic mix, routing, and miles-in-trail restrictions for comparison. It is concluded that to minimize the average time spent in the queue for different traffic conditions, a queue assignment algorithm is needed to maintain an even balance of aircraft in the queues. A necessary consideration is to base queue assignment upon traffic management restrictions such as miles-in-trail constraints. 
The second problem addresses the technical challenges associated with merging departing aircraft onto their filed routes in a congested airspace environment. Conflicts between departures and en route aircraft within the Center airspace are analyzed. Speed control, holding the aircraft at an intermediate altitude, re-routing, and vectoring are posed as possible deconfliction maneuvers. A cost assessment of these merge strategies, which are based upon 4D flight management and conflict detection and resolution principles, is given. Several merge conflicts are studied and a cost for each resolution is computed. It is shown that vectoring tends to be the most expensive resolution technique. Altitude hold is simple and costs less than vectoring, but may require a long time for the aircraft to achieve separation. Re-routing is the simplest and provides the most cost benefit, since the aircraft flies a shorter distance than if it had followed its filed route. Speed control is shown to be ineffective as a means of increasing separation, but is effective for maintaining separation between aircraft. In addition, the effects of uncertainties on the cost are assessed. The analysis shows that cost is invariant with the decision time.
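
    The static sequencing problem described above can be sketched in miniature: given required separations between successive departures by weight class, search the permutations of a short queue for the sequence minimizing total departure time and compare it with first-in-first-out. The separation values below are illustrative, not actual wake-vortex standards, and brute force replaces the paper's greedy/genetic searches.

```python
from itertools import permutations

# Hedged sketch: brute-force optimal departure sequencing for a small queue.
# sep[leader][follower] gives the required separation in seconds; the values
# are illustrative, not real wake-vortex standards.
sep = {
    "heavy": {"heavy": 90, "large": 120, "small": 180},
    "large": {"heavy": 60, "large": 60, "small": 120},
    "small": {"heavy": 60, "large": 60, "small": 60},
}
queue = ["small", "heavy", "small", "large", "heavy"]  # arrival (FIFO) order

def total_time(seq):
    # total time to depart the queue = sum of successive separations
    return sum(sep[seq[i]][seq[i + 1]] for i in range(len(seq) - 1))

def best_sequence(queue):
    return min(permutations(queue), key=total_time)

fifo = total_time(queue)
best = best_sequence(queue)
print(fifo, total_time(best), best)
```

    Resequencing pays off here because it avoids the expensive heavy-before-small separation, the same effect the heuristic searches in the thesis exploit at scale.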

  20. Order Batching in Warehouses by Minimizing Total Tardiness: A Hybrid Approach of Weighted Association Rule Mining and Genetic Algorithms

    PubMed Central

    Taheri, Shahrooz; Mat Saman, Muhamad Zameri; Wong, Kuan Yew

    2013-01-01

    One of the cost-intensive issues in managing warehouses is the order picking problem which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness which is neglected in most of the related scientific papers. Consequently, we proposed a novel solution approach in order to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase will come up which used a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach. PMID:23864823

  1. Order batching in warehouses by minimizing total tardiness: a hybrid approach of weighted association rule mining and genetic algorithms.

    PubMed

    Azadnia, Amir Hossein; Taheri, Shahrooz; Ghadimi, Pezhman; Saman, Muhamad Zameri Mat; Wong, Kuan Yew

    2013-01-01

    One of the cost-intensive issues in managing warehouses is the order picking problem which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness which is neglected in most of the related scientific papers. Consequently, we proposed a novel solution approach in order to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase will come up which used a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.

  2. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views, which are stored for the purpose of efficiently implementing on-line analytical processing queries; their selection is one of the most important decisions in designing a data warehouse. Because query response time is the first issue a user considers, we develop algorithms that select a set of views to materialize in the data warehouse so as to minimize the total view maintenance cost under the constraint of a given query response time. We call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. A genetic algorithm is applied to the materialized view selection problem, but as the genetic process evolves, producing legal solutions becomes more and more difficult, so many solutions are eliminated and the time needed to produce solutions lengthens. We therefore present an improved algorithm that combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments test the effectiveness and efficiency of our algorithms. The experiments show that the given methods provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
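The simulated-annealing half of such a hybrid can be sketched with a penalty for violating the query-response-time constraint. The candidate views, costs, and budget below are invented, and the paper's cost graph is replaced by a simple additive model:

```python
import math, random

# Hypothetical candidate views: maintenance cost if materialized, and
# query response time avoided if materialized (additive model assumed).
MAINT = [12, 7, 20, 5, 9, 14]
QTIME = [30, 18, 45, 10, 22, 35]
BASE_QUERY_TIME = 160          # total response time with no views
BUDGET = 80                    # required maximum total response time

def total_cost(sel):
    maint = sum(m for m, s in zip(MAINT, sel) if s)
    qtime = BASE_QUERY_TIME - sum(q for q, s in zip(QTIME, sel) if s)
    penalty = 1000 * max(0, qtime - BUDGET)   # infeasible -> large penalty
    return maint + penalty

def anneal(steps=2000, t0=50.0, seed=7):
    random.seed(seed)
    sel = [0] * len(MAINT)
    best = sel[:]
    for k in range(steps):
        t = t0 * (0.995 ** k)                 # geometric cooling schedule
        cand = sel[:]
        cand[random.randrange(len(cand))] ^= 1  # flip one view in/out
        delta = total_cost(cand) - total_cost(sel)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            sel = cand
        if total_cost(sel) < total_cost(best):
            best = sel[:]
    return best
```

The hybrid in the paper additionally uses genetic operators to drive the search; here only the annealing acceptance rule is shown.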

  3. A non-linear optimization programming model for air quality planning including co-benefits for GHG emissions.

    PubMed

    Turrini, Enrico; Carnevale, Claudio; Finzi, Giovanna; Volta, Marialuisa

    2018-04-15

    This paper introduces the MAQ (Multi-dimensional Air Quality) model, aimed at defining cost-effective air quality plans at different scales (urban to national) and assessing the co-benefits for GHG emissions. The model implements and solves a non-linear multi-objective, multi-pollutant decision problem where the decision variables are the application levels of emission abatement measures, allowing the reduction of energy consumption, end-of-pipe technologies and fuel-switch options. The objectives of the decision problem are the minimization of tropospheric secondary pollution exposure and of internal costs. The model assesses CO2-equivalent emissions in order to support decision makers in the selection of win-win policies. The methodology is tested on the Lombardy region, a heavily polluted area in northern Italy. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Cost-effectiveness analysis in minimally invasive spine surgery.

    PubMed

    Al-Khouja, Lutfi T; Baron, Eli M; Johnson, J Patrick; Kim, Terrence T; Drazin, Doniel

    2014-06-01

    Medical care has been evolving with the increased influence of a value-based health care system. As a result, more emphasis is being placed on ensuring cost-effectiveness and utility in the services provided to patients. This study examines this development with respect to minimally invasive spine surgery (MISS) costs. A literature review using PubMed, the Cost-Effectiveness Analysis (CEA) Registry, and the National Health Service Economic Evaluation Database (NHS EED) was performed. Papers were included if they reported costs associated with MISS; articles with no mention of cost, CEA, cost-utility analysis (CUA), quality-adjusted life years (QALY), quality, or outcomes were excluded. Fourteen studies reporting costs associated with MISS in 12,425 patients (3675 undergoing minimally invasive procedures and 8750 undergoing open procedures) were identified. The percent cost difference between minimally invasive and open approaches ranged from 2.54% to 33.68%, all indicating cost savings with a minimally invasive surgical approach. Average length of stay (LOS) for minimally invasive surgery ranged from 0.93 to 5.1 days, compared with 1.53 to 12 days for an open approach. All studies reporting estimated blood loss (EBL) reported lower volume loss with an MISS approach (range 10-392.5 ml) than with an open approach (range 55-535.5 ml). There are currently an insufficient number of published studies reporting the costs of MISS, and none of the published studies have followed a standardized method of reporting and analyzing cost data. Preliminary findings from the 14 studies showed both cost savings and better outcomes with MISS compared with an open approach. However, more Level I CEA/CUA studies, including cost/QALY evaluations with specifics of the techniques utilized, need to be reported in a standardized manner to allow more accurate conclusions on the cost-effectiveness of minimally invasive spine surgery.

  5. Surveillance of a 2D Plane Area with 3D Deployed Cameras

    PubMed Central

    Fu, Yi-Ge; Zhou, Jie; Deng, Lei

    2014-01-01

    As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to obtain maximum visual coverage under additional constraints, such as the cameras' field of view (FOV) and minimum resolution. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regulation item in the cost function. The simulation results show the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353

  6. Three essays on pricing and risk management in electricity markets

    NASA Astrophysics Data System (ADS)

    Kotsan, Serhiy

    2005-07-01

    A set of three papers forms this dissertation. In the first paper I analyze an electricity market that does not clear. The system operator satisfies fixed demand at a fixed price, and attempts to minimize "cost" as indicated by independent generators' supply bids. No equilibrium exists in this situation, and the operator lacks the information needed to minimize actual cost. As a remedy, I propose a simple efficient tax mechanism. With the tax, Nash equilibrium bids still diverge from marginal cost but nonetheless provide sufficient information to minimize actual cost, regardless of the tax rate or the number of generators. The second paper examines a price mechanism with one price assigned to each level of bundled real and reactive power. Equilibrium allocation under this pricing approach raises system efficiency via better allocation of reactive power reserves, which are neglected in the traditional pricing approach. Reactive power should be priced in a bundle with real power, since its cost is highly dependent on real power output. The efficiency of the pricing approach is shown in the general case and tested on the 30-bus IEEE network with piecewise linear generator cost functions. Finally, the third paper addresses the problem of optimal investment in generation based on mean-variance portfolio analysis. It is assumed that the investor can freely create a portfolio of shares in generation located on buses of the electrical network. Investors are risk averse: they seek to minimize the variance of the weighted average Locational Marginal Price (LMP) in their portfolio and to maximize its expected value. I conduct simulations using a standard IEEE 68-bus network that resembles the New York - New England system and calculate LMPs in accordance with the PJM methodology for a fully optimal AC power flow solution. Results indicate that the network topology is a crucial determinant of the investment decision, as line congestion makes it difficult to deliver power to certain nodes at system peak load. Determining those nodes is an important task for an investor in generation as well as for the transmission system operator.
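The minimum-variance step of the third paper reduces, for a fully invested portfolio with LMP covariance matrix S, to the closed form w = S^{-1}1 / (1^T S^{-1} 1). A minimal sketch with an invented 3-node covariance (not the IEEE 68-bus data):

```python
# Solve S w_raw = 1 by Gaussian elimination, then normalize so the
# weights sum to one (the fully-invested minimum-variance portfolio).
def solve(mat, rhs):
    # Gauss-Jordan elimination with partial pivoting (small dense systems)
    n = len(rhs)
    m = [row[:] + [r] for row, r in zip(mat, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Illustrative covariance of LMPs at three hypothetical nodes
COV = [[4.0, 1.0, 0.5],
       [1.0, 3.0, 0.8],
       [0.5, 0.8, 2.0]]

raw = solve(COV, [1.0, 1.0, 1.0])
total = sum(raw)
weights = [w / total for w in raw]   # minimum-variance weights
```

The resulting portfolio variance is no larger than holding any single node alone, which is the property the risk-averse investor exploits; the expected-return side of the mean-variance tradeoff is omitted in this sketch.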

  7. Kalman filters for fractional discrete-time stochastic systems along with time-delay in the observation signal

    NASA Astrophysics Data System (ADS)

    Torabi, H.; Pariz, N.; Karimpour, A.

    2016-02-01

    This paper investigates fractional Kalman filters when a time delay enters the observation signal in the discrete-time stochastic fractional-order state-space representation. After reviewing the common fractional Kalman filter, we derive a fractional Kalman filter for time-delay fractional systems, and a detailed derivation is given. The filters recursively estimate the states of fractional-order state-space systems by minimizing a cost function when there is a constant time delay (d) in the observation signal. The problem is solved by converting the filtering problem into a usual d-step prediction problem for delay-free fractional systems.
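The delay-conversion idea can be illustrated on a standard (integer-order) scalar model: filter the delayed state with an ordinary Kalman filter, then lift the estimate to the current time by a d-step prediction. The fractional-order terms of the paper are omitted, and all model constants below are assumptions:

```python
import random

# Scalar model x_{k+1} = a x_k + w_k with delayed measurement
# y_k = x_{k-d} + v_k. Constants are illustrative.
A, Q, R, D = 0.95, 0.05, 0.2, 3

def simulate(n, seed=0):
    random.seed(seed)
    xs, x = [], 1.0
    for _ in range(n):
        xs.append(x)
        x = A * x + random.gauss(0, Q ** 0.5)
    ys = [xs[k - D] + random.gauss(0, R ** 0.5) for k in range(D, n)]
    return xs, ys

def delayed_kalman(ys):
    m, p = 0.0, 1.0           # filtered estimate of the delayed state
    estimates = []
    for y in ys:
        # time update for the delayed state x_{k-d}
        m, p = A * m, A * A * p + Q
        # measurement update: after re-indexing, y is a delay-free
        # observation of x_{k-d}
        kgain = p / (p + R)
        m, p = m + kgain * (y - m), (1 - kgain) * p
        # d-step prediction lifts the estimate back to the current time k
        estimates.append((A ** D) * m)
    return estimates
```

The prediction step inflates the error variance by the d-step process noise, which is exactly the price of the delay; the fractional case replaces the one-step transition with the Grünwald-Letnikov recursion but keeps this structure.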

  8. Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Lopez, Nicolas

    This dissertation explores the Renewable Energy Integration Problem and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand of a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid, and converters, which can be inverters and/or rectifiers. The methodology developed is explained, together with the combinatorial formulation. In addition, two case studies of a single-objective version of the problem are presented, minimizing cost and minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology using a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full-enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the single-objective case and approximating the true Pareto front in the multi-objective case.
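A minimal sketch of the single-objective cost case: a genetic algorithm whose fitness is a Monte Carlo estimate of expected cost over uncertain solar output. Every price, capacity, and distribution below is invented for illustration and only two component types (panels, diesel generators) are kept:

```python
import random

PANEL_COST, GEN_COST = 400.0, 1000.0        # capital cost per unit (invented)
FUEL_PER_KWH, SHORTAGE_PENALTY = 0.3, 10.0
PANEL_KW, GEN_KW, DEMAND_KW = 1.0, 5.0, 20.0
HOURS = 8760.0

def expected_cost(n_panels, n_gens, samples=300, rng=None):
    rng = rng or random.Random(0)           # fixed seed -> stable fitness
    capital = n_panels * PANEL_COST + n_gens * GEN_COST
    op = 0.0
    for _ in range(samples):                # Monte Carlo over solar output
        cf = rng.uniform(0.0, 0.35)         # random solar capacity factor
        solar = n_panels * PANEL_KW * cf
        diesel = min(max(DEMAND_KW - solar, 0.0), n_gens * GEN_KW)
        unmet = DEMAND_KW - solar - diesel
        op += diesel * FUEL_PER_KWH + max(unmet, 0.0) * SHORTAGE_PENALTY
    return capital + HOURS * op / samples

def ga(pop_size=20, generations=40, seed=3):
    rng = random.Random(seed)
    pop = [(rng.randint(0, 60), rng.randint(0, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: expected_cost(*ind))
        parents = pop[:pop_size // 2]       # elitist truncation selection
        pop = parents[:]
        while len(pop) < pop_size:
            (p1, _), (_, g2) = rng.sample(parents, 2)  # crossover of fields
            pop.append((max(0, p1 + rng.randint(-2, 2)),  # small mutation
                        max(0, g2 + rng.randint(-1, 1))))
    return min(pop, key=lambda ind: expected_cost(*ind))
```

The multi-objective (cost vs. GWP) version replaces the single sort key with non-dominated sorting, which is not shown here.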

  9. Optimization of municipal solid waste collection and transportation routes.

    PubMed

    Das, Swapan; Bhattacharyya, Bidyut Kr

    2015-09-01

    Optimization of municipal solid waste (MSW) collection and transportation through source separation has become one of the major concerns in MSW management system design, because existing MSW management systems suffer from high collection and transportation costs. Generally, the waste sources of a city are scattered in a heterogeneous way, which increases collection and transportation costs; a shortest-route collection and transportation strategy can therefore effectively reduce them. In this paper, we propose an optimal MSW collection and transportation scheme that focuses on minimizing the length of each collection and transportation route. We first formulate the MSW collection and transportation problem as a mixed integer program, and then propose a heuristic solution for the problem that provides an optimal way to collect and transport waste. Extensive simulations and real testbed results show that the proposed solution significantly improves MSW performance, reducing the total waste collection path length by more than 30%. Copyright © 2015 Elsevier Ltd. All rights reserved.
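A shortest-route collection heuristic of the kind minimized here can be sketched with a nearest-neighbour rule; the depot and collection points are invented, and the paper's mixed integer program is not reproduced:

```python
import math

DEPOT = (0.0, 0.0)
BINS = [(2, 3), (5, 1), (6, 4), (1, 6), (4, 7)]   # collection points (km)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_route(depot, bins_):
    # greedily visit the closest unvisited collection point, then return
    route, current, remaining = [depot], depot, list(bins_)
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)
    return route

def route_length(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))
```

On this toy instance the greedy route is already shorter than visiting the bins in their listed order; exact MIP solutions or improvement moves (e.g. 2-opt) would tighten it further.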

  10. An effective hybrid self-adapting differential evolution algorithm for the joint replenishment and location-inventory problem in a three-level supply chain.

    PubMed

    Wang, Lin; Qu, Hui; Chen, Tao; Yan, Fang-Ping

    2013-01-01

    Integrating different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The JR-LIP is to determine the number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify its effectiveness, two intelligent algorithms that have proven effective on similar problems, a genetic algorithm (GA) and a hybrid DE (HDE), are chosen for comparison. Comparative results on benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially at large scale.
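The classic DE/rand/1/bin scheme that HSDE builds on can be sketched on a smooth stand-in objective; the actual JR-LIP cost function is NP-hard and not reproduced here:

```python
import random

def cost(x):
    # stand-in objective with minimum 0 at (1, ..., 1)
    return sum((xi - 1.0) ** 2 for xi in x)

def differential_evolution(dim=4, pop_size=20, gens=100, f=0.6, cr=0.9, seed=5):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct donors, none equal to the target vector
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)       # ensure at least one mutated gene
            trial = [a[k] + f * (b[k] - c[k])
                     if (rng.random() < cr or k == jrand) else pop[i][k]
                     for k in range(dim)]
            if cost(trial) <= cost(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=cost)
```

The "self-adapting" part of HSDE evolves f and cr alongside the population rather than fixing them, and the hybrid part adds local search; both refinements are omitted in this sketch.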

  11. An Effective Hybrid Self-Adapting Differential Evolution Algorithm for the Joint Replenishment and Location-Inventory Problem in a Three-Level Supply Chain

    PubMed Central

    Chen, Tao; Yan, Fang-Ping

    2013-01-01

    Integrating different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The JR-LIP is to determine the number and location of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify its effectiveness, two intelligent algorithms that have proven effective on similar problems, a genetic algorithm (GA) and a hybrid DE (HDE), are chosen for comparison. Comparative results on benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially at large scale. PMID:24453822

  12. Synthesizing optimal waste blends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, V.; Diwekar, W.M.; Hoza, M.

    Vitrification of tank wastes to form glass is a technique that will be used for the disposal of high-level waste at Hanford. Process and storage economics show that minimizing the total number of glass logs produced is the key to keeping cost as low as possible. The amount of glass produced can be reduced by blending the wastes. The optimal way to combine the tanks to minimize the volume of glass can be determined from a discrete blend calculation. However, this problem results in a combinatorial explosion as the number of tanks increases. Moreover, the property constraints make this problem highly nonconvex, and many algorithms get trapped in local minima. In this paper the authors examine the use of different combinatorial optimization approaches to solve this problem. A two-stage approach using a combination of simulated annealing and nonlinear programming (NLP) is developed. The results of different methods, such as a heuristics approach based on human knowledge and judgment, a mixed-integer nonlinear programming (MINLP) approach with GAMS, and branch and bound with a lower bound derived from the structure of the given blending problem, are compared with this coupled simulated annealing and NLP approach.

  13. Integration of prebend optimization in a holistic wind turbine design tool

    NASA Astrophysics Data System (ADS)

    Sartori, L.; Bortolotti, P.; Croce, A.; Bottasso, C. L.

    2016-09-01

    This paper considers the problem of identifying the optimal combination of blade prebend, rotor cone angle and nacelle uptilt within an integrated aero-structural design environment. Prebend is designed to reach the maximum rotor area at rated conditions, while cone and uptilt are computed together with all other design variables to minimize the cost of energy. Constraints are added to the problem formulation to capture various design requirements. The proposed optimization approach is applied to a conceptual 10 MW offshore wind turbine, highlighting the benefits of an optimal combination of blade curvature, cone and uptilt angles.

  14. Integer Optimization Model for a Logistic System based on Location-Routing Considering Distance and Chosen Route

    NASA Astrophysics Data System (ADS)

    Mulyasari, Joni; Mawengkang, Herman; Efendi, Syahril

    2018-02-01

    In a distribution network it is important to decide the locations of facilities, which impact not only the profitability of an organization but also its ability to serve customers. Generally, the location-routing problem is to minimize the overall cost by simultaneously selecting a subset of candidate facilities and constructing a set of delivery routes that satisfy some restrictions. In this paper we impose a restriction on the route that must be used for delivery. We use an integer programming model to describe the problem, and a feasible neighbourhood search is proposed to solve the resulting model.

  15. Practical problems which women encounter with available contraception in Australia.

    PubMed

    Weisberg, E

    1994-06-01

    Australian women face major difficulties with contraception because of the limited range of choices, the need for meticulous attention to compliance with most available methods, and cost limitations for a significant minority of the population. The most commonly used methods are oral contraceptive pills and barrier methods, each of which has substantial compliance problems that can be minimized with care and counselling. There is an urgent need for a wider range of options in Australia and for good information and publicity about them. Present progress in this direction gives some hope for the near future.

  16. Optimal Paths in Gliding Flight

    NASA Astrophysics Data System (ADS)

    Wolek, Artur

    Underwater gliders are robust, long-endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow-moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost; several cost functions are considered. The problem is addressed using optimal control theory, with length scales of interest within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed-speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next, the minimum-time problem for a glider with speed controls, which may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize the change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving-mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  17. A multiobjective modeling approach to locate multi-compartment containers for urban-sorted waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tralhao, Lino, E-mail: lmlrt@inescc.p; Coutinho-Rodrigues, Joao, E-mail: coutinho@dec.uc.p; Alcada-Almeida, Luis, E-mail: alcada@inescc.p

    2010-12-15

    The location of multi-compartment sorted waste containers for recycling purposes in cities is an important problem in the context of urban waste management. The costs associated with those facilities and the impacts placed on populations are important concerns. This paper introduces a mixed-integer, multiobjective programming approach to identify the locations and capacities of such facilities. The approach incorporates an optimization model with four objectives in a Geographical Information System (GIS)-based interactive decision support system. The first objective minimizes the total investment cost; the second minimizes the average distance from dwellings to the respective multi-compartment container; the last two objectives address the 'pull' and 'push' characteristics of the decision problem, one by minimizing the number of individuals too close to any container, and the other by minimizing the number of dwellings too far from the respective multi-compartment container. The model determines the number of facilities to be opened, the respective container capacities, their locations, their respective shares of the total waste of each type to be collected, and the dwellings assigned to each facility. The approach was tested in a case study of the historical center of Coimbra, Portugal, where a large urban renovation project addressing about 800 buildings is being undertaken. This paper demonstrates that the models and techniques incorporated in the interactive decision support system (IDSS) can be used to assist a decision maker (DM) in analyzing this complex problem in a realistically sized urban application. Ten solutions, consisting of different combinations of underground containers for the disposal of four types of sorted waste in 12 candidate sites, were generated. These solutions and the tradeoffs among the objectives are presented to the DM via tables, graphs, color-coded maps and other graphics. The DM can then use this information to 'guide' the IDSS in identifying additional solutions of potential interest. Nevertheless, this research showed that a particular solution with a better balance among the objectives can be identified. The actual sequence of additional solutions generated will depend upon the objectives and preferences of the DM in a specific application.

  18. Process Mining-Based Method of Designing and Optimizing the Layouts of Emergency Departments in Hospitals.

    PubMed

    Rismanchian, Farhood; Lee, Young Hoon

    2017-07-01

    This article proposes an approach to help designers analyze complex care processes and identify the optimal layout of an emergency department (ED) considering several objectives simultaneously: minimizing the distances traveled by patients, maximizing design preferences, and minimizing relocation costs. Rising demand for healthcare services leads to increasing demand for new hospital buildings as well as for renovating existing ones. Operations management techniques have been successfully applied in both manufacturing and service industries to design more efficient layouts; however, the high complexity of healthcare processes makes it challenging to apply these techniques in healthcare environments. Process mining techniques were applied to address this complexity and to enhance healthcare process analysis. Process-related information, such as information about the clinical pathways, was extracted from the information system of an ED. A goal programming approach was then employed to find a single layout that simultaneously satisfies several objectives. The layout identified using the proposed method reduced the distances traveled by noncritical and critical patients by 42.2% and 47.6%, respectively, while minimizing the relocation costs. This study has shown that an efficient placement of the clinical units yields remarkable improvements in the distances traveled by patients.

  19. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    PubMed

    Sorzano, Carlos Oscar S.; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. In some cases, prior knowledge exists about the distribution of the system parameters in a population; however, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
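The minimax design criterion can be sketched for a one-compartment model C(t) = (dose/V)·exp(-ke·t). The parameter values, noise level, and candidate time grid are invented, and the population-prior Monte Carlo averaging and the genetic algorithm of the paper are replaced here by exhaustive search over a small grid for brevity:

```python
import itertools, math

DOSE, V, KE, SIGMA = 100.0, 10.0, 0.3, 0.5    # illustrative values
GRID = [0.5 * i for i in range(1, 25)]        # candidate times, 0.5..12 h
N_SAMPLES = 3                                 # the fixed sampling budget

def sensitivities(t):
    c = (DOSE / V) * math.exp(-KE * t)
    return (-c / V, -t * c)                   # dC/dV, dC/dke

def worst_case_variance(times):
    # 2x2 Fisher information, then the largest diagonal of its inverse
    f11 = f12 = f22 = 0.0
    for t in times:
        sv, sk = sensitivities(t)
        f11 += sv * sv / SIGMA ** 2
        f12 += sv * sk / SIGMA ** 2
        f22 += sk * sk / SIGMA ** 2
    det = f11 * f22 - f12 * f12
    if det <= 1e-12:
        return float("inf")                   # singular (uninformative) design
    return max(f22 / det, f11 / det)          # diag of the 2x2 inverse

# minimax: pick the sampling times whose worst parameter variance is smallest
best = min(itertools.combinations(GRID, N_SAMPLES), key=worst_case_variance)
```

With more parameters or a population prior the design space grows too fast for enumeration, which is why the paper resorts to a genetic algorithm.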

  20. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm performs slightly better overall than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
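A minimal Cimmino iteration for the linear feasibility problem A x >= b (each row demanding a minimum delivered dose at one point, x being diffuser powers) might look as follows; the dose matrix is illustrative, not a real IPDT kernel:

```python
# Illustrative dose-deposition matrix (rows: dose points, cols: diffusers)
A = [[1.0, 0.5],
     [0.2, 1.0],
     [0.8, 0.8]]
B = [1.0, 1.0, 1.5]    # minimum dose required at each point

def cimmino(a, b, iters=3000, lam=1.0):
    x = [0.0] * len(a[0])
    m = len(a)
    for _ in range(iters):
        step = [0.0] * len(x)
        for row, bi in zip(a, b):
            resid = bi - sum(r * xi for r, xi in zip(row, x))
            if resid > 0:                    # project only onto violated
                scale = resid / sum(r * r for r in row)  # half-spaces
                for j, r in enumerate(row):
                    step[j] += scale * r / m # average of the projections
        x = [xi + lam * s for xi, s in zip(x, step)]
    return x
```

When the constraints are consistent the averaged projections converge to a feasible power vector; in the infeasible case the iterates approach a weighted least-squares compromise, which is the behavior the paper compares against the WGD algorithm.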

  1. Pricing health benefits: a cost-minimization approach.

    PubMed

    Miller, Nolan H

    2005-09-01

    We study the role of health benefits in an employer's compensation strategy, given the overall goal of minimizing total compensation cost (wages plus health-insurance cost). When employees' health status is private information, the employer's basic benefit package consists of a base wage and a moderate health plan, with a generous plan available for an additional charge. We show that in setting the charge for the generous plan, a cost-minimizing employer should act as a monopolist who sells "health plan upgrades" to its workers, and we discuss ways tax policy can encourage efficiency under cost-minimization and alternative pricing rules.

  2. On a cost functional for H2/H(infinity) minimization

    NASA Technical Reports Server (NTRS)

    Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis

    1990-01-01

    A cost functional motivated by minimizing the energy in a structure using only collocated feedback is proposed and investigated. Defined for an H(infinity)-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency domain cost functional is shown to have a time domain representation in terms of a Stackelberg non-zero-sum differential game.

  3. Optimization of power systems with voltage security constraints

    NASA Astrophysics Data System (ADS)

    Rosehart, William Daniel

    As open access market principles are applied to power systems, significant changes in their operation and control are occurring. In the new marketplace, power systems operate under higher loading conditions as market influences demand greater attention to operating cost versus stability margins. Since stability remains a basic requirement in the operation of any power system, new tools are being considered to analyze the effect of stability on the operating cost of the system, so that system stability can be incorporated into the costs of operating the system. In this thesis, new optimal power flow (OPF) formulations are proposed based on multi-objective methodologies to optimize active and reactive power dispatch while maximizing voltage security in power systems. The effects of minimizing operating costs, minimizing reactive power generation and/or maximizing voltage stability margins are analyzed. Results obtained using the proposed Voltage Stability Constrained OPF formulations are compared and analyzed to suggest possible ways of costing voltage security in power systems. When considering voltage stability margins, the importance of system modeling becomes critical, since it has been demonstrated through bifurcation analysis that modeling can have a significant effect on the behavior of power systems, especially at high loading levels. Therefore, this thesis also examines the effects of detailed generator models and several exponential load models. Furthermore, because of its influence on voltage stability, a Static Var Compensator model is also incorporated into the optimization problems.

  4. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Flow-based collection methods incur too much communication cost, while the recently proposed switch-based methods cannot benefit from controlling flow routing; we therefore jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
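
    The coupling of switch selection and flow assignment described above can be illustrated with a tiny brute-force sketch (all switch names, flows, and costs below are hypothetical, and the paper's actual ILP also models routing, which is omitted here): choose a set of polling switches so that every flow traverses at least one, minimizing activation plus per-flow report costs.

```python
from itertools import combinations

# Toy instance (hypothetical data): each flow lists the switches its route
# traverses; polling a switch incurs a fixed activation cost plus a per-flow
# report cost for every flow assigned to it.
flows = {"f1": {"s1", "s2"}, "f2": {"s2", "s3"}, "f3": {"s1", "s3"}}
activation = {"s1": 4, "s2": 5, "s3": 3}
report = {"s1": 1, "s2": 1, "s3": 2}

def total_cost(polled):
    """Cost of a polling set, or None if some flow is not covered."""
    cost = sum(activation[s] for s in polled)
    for route in flows.values():
        covered = route & polled
        if not covered:
            return None  # infeasible: this flow cannot be reported anywhere
        cost += min(report[s] for s in covered)  # report at the cheapest switch
    return cost

best = None
switches = sorted(activation)
for r in range(1, len(switches) + 1):
    for subset in combinations(switches, r):
        c = total_cost(set(subset))
        if c is not None and (best is None or c < best[0]):
            best = (c, set(subset))

print(best)
```

    Exhaustive search is only viable for toy sizes; the exponential blow-up in the number of subsets is exactly why the paper resorts to an ILP and heuristics.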

  5. Multi-objective generation scheduling with hybrid energy resources

    NASA Astrophysics Data System (ADS)

    Trivedi, Manas

    In economic dispatch (ED) of electric power generation, the committed generating units are scheduled to meet the load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. Generation of electricity from fossil fuels releases several contaminants into the atmosphere, so the economic dispatch objective can no longer be considered alone, due to the environmental concerns that arise from the emissions produced by fossil-fueled electric power plants. This research proposes the concept of environmental/economic generation scheduling with traditional and renewable energy sources. Environmental/economic dispatch (EED) is a multi-objective problem with conflicting objectives, since emission minimization conflicts with fuel cost minimization. Production and consumption of fossil fuel and nuclear energy are closely related to environmental degradation, which harms human health and quality of life. Depletion of fossil fuel resources will also make it difficult for presently employed energy systems to cope with future energy requirements. On the other hand, renewable energy sources such as hydro and wind are abundant, inexhaustible and widely available. These sources use native resources and have the capacity to meet the present and future energy demands of the world with almost no emissions of air pollutants and greenhouse gases. The costs of fossil fuel and renewable energy are also heading in opposite directions, and the economic policies needed to support widespread and sustainable markets for renewable energy sources are rapidly evolving. The contribution of this research centers on solving the economic dispatch problem of a system with hybrid energy resources under environmental restrictions. It suggests an effective way to integrate renewable energy into existing fossil-fueled and nuclear electric utilities for cheaper and cleaner production of electricity with hourly emission targets. Since minimizing emissions and fuel cost are conflicting objectives, a practical approach based on multi-objective optimization is applied to obtain compromise solutions in a single simulation run using a genetic algorithm. These solutions are known as non-inferior or Pareto-optimal solutions, graphically illustrated by trade-off curves between the fuel cost and pollutant emission criteria. The efficacy of the proposed approach is illustrated with the help of different sample test cases. This research would be useful for society, electric utilities, consultants, regulatory bodies, policy makers and planners.
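
    The Pareto-optimal (non-inferior) solutions mentioned above can be extracted from any set of candidate dispatch solutions with a simple dominance filter; a minimal sketch using hypothetical (fuel cost, emission) pairs:

```python
def pareto_front(points):
    """Return the non-dominated (cost, emission) points: keep p unless some
    other point is at least as good on both objectives."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front)

# Hypothetical dispatch solutions: (fuel cost, pollutant emission)
solutions = [(100, 9), (120, 5), (110, 7), (130, 6), (105, 9)]
print(pareto_front(solutions))
```

    The surviving points trace the trade-off curve the abstract describes: moving along the front, every reduction in emissions costs extra fuel.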

  6. Causes and solutions to surface facilities upsets following acid stimulation in the Gulf of Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durham, D.K.; Stone, P.J.; Ali, S.A.

    1997-02-01

    This paper presents test data on the effects of acid and acid additives on emulsion and water treating in the Gulf of Mexico. This work also discusses the test methods developed to select acid additives and treating chemicals that will allow the producer to treat both oil and water more consistently and cost effectively while the acid flowback is in the system. It also presents system results that confirm the importance of the joint selection of acid and surface treating additives and show that significant cost savings can be gained by use of this process. Also discussed are the proper system application techniques for treating chemicals that can minimize surface treating problems caused by acid flowbacks. The results show that the proper selection and use of acid additives and surface treating products can eliminate or significantly reduce costly upsets in oil- and water-treating systems. Data on individual acid additives that impact water and oil treating are also presented. The results of this work are currently being used to solve produced-water- and oil-treating problems on offshore and onshore facilities in and around the Gulf of Mexico by reduction of production losses resulting from acid-flowback-related problems; reduction of the use and cost of tanks and barges used to segregate acid flowbacks; and development of effective methodology to select acid and surface treating additives that have resulted in lower overall treating costs.

  7. Autonomous Guidance Strategy for Spacecraft Formations and Reconfiguration Maneuvers

    NASA Astrophysics Data System (ADS)

    Wahl, Theodore P.

    A guidance strategy for autonomous spacecraft formation reconfiguration maneuvers is presented. The guidance strategy is presented as an algorithm that solves the linked assignment and delivery problems. The assignment problem is the task of assigning the member spacecraft of the formation to their new positions in the desired formation geometry. The guidance algorithm uses an auction process (also called an "auction algorithm''), presented in the dissertation, to solve the assignment problem. The auction uses the estimated maneuver and time of flight costs between the spacecraft and targets to create assignments which minimize a specific "expense'' function for the formation. The delivery problem is the task of delivering the spacecraft to their assigned positions, and it is addressed through one of two guidance schemes described in this work. The first is a delivery scheme based on artificial potential function (APF) guidance. APF guidance uses the relative distances between the spacecraft, targets, and any obstacles to design maneuvers based on gradients of potential fields. The second delivery scheme is based on model predictive control (MPC); this method uses a model of the system dynamics to plan a series of maneuvers designed to minimize a unique cost function. The guidance algorithm uses an analytic linearized approximation of the relative orbital dynamics, the Yamanaka-Ankersen state transition matrix, in the auction process and in both delivery methods. The proposed guidance strategy is successful, in simulations, in autonomously assigning the members of the formation to new positions and in delivering the spacecraft to these new positions safely using both delivery methods. This guidance algorithm can serve as the basis for future autonomous guidance strategies for spacecraft formation missions.

  8. Hybrid Stochastic Search Technique based Suboptimal AGC Regulator Design for Power System using Constrained Feedback Control Strategy

    NASA Astrophysics Data System (ADS)

    Ibraheem, Omveer, Hasan, N.

    2010-10-01

    A new hybrid stochastic search technique is proposed for the design of a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. The technique hybridizes a Genetic Algorithm (GA) with Simulated Annealing (SA). The resulting GASA-based regulator has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time; the feasible solution decreases the cost function rather than fully minimizing it.

  9. Optimization in fractional aircraft ownership

    NASA Astrophysics Data System (ADS)

    Septiani, R. D.; Pasaribu, H. M.; Soewono, E.; Fayalita, R. A.

    2012-05-01

    Fractional Aircraft Ownership is a new concept in flight ownership management in which each individual or corporation may own a fraction of an aircraft. In this system, the owners have the privilege of scheduling their flights according to their needs. The fractional management company (FMC) manages all aspects of aircraft operations, including utilization of the FMC's aircraft in combination with outsourced aircraft. This gives the owners the right to enjoy the benefits of private aviation. However, an FMC faces complicated business requirements that neither commercial airlines nor charter airlines face. Here, optimization models are constructed to minimize the number of aircraft in order to maximize the profit and to minimize the daily operating cost. In this paper, three demand scenarios are constructed to represent different flight operations from different types of fractional owners. The problems are formulated as an optimization of profit and of daily operational cost to find the optimum flight assignments satisfying the weekly and daily demand, respectively, from the owners. Numerical results are obtained by a Genetic Algorithm method.

  10. System for decision analysis support on complex waste management issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shropshire, D.E.

    1997-10-01

    A software system called the Waste Flow Analysis has been developed and applied to complex environmental management processes for the United States Department of Energy (US DOE). The system can evaluate proposed methods of waste retrieval, treatment, storage, transportation, and disposal. Analysts can evaluate various scenarios to see the impacts on waste flows and schedules, costs, and health and safety risks. Decision analysis capabilities have been integrated into the system to help identify preferred alternatives based on specific objectives, which may be to maximize the waste moved to final disposition during a given time period, minimize health risks, minimize costs, or combinations of objectives. The decision analysis capabilities can support evaluation of large and complex problems rapidly, and under conditions of variable uncertainty. The system is being used to evaluate environmental management strategies to safely disposition wastes in the next ten years and reduce the environmental legacy resulting from nuclear material production over the past forty years.

  11. Carpal tunnel syndrome, the search for a cost-effective surgical intervention: a randomised controlled trial.

    PubMed Central

    Lorgelly, Paula K.; Dias, Joseph J.; Bradley, Mary J.; Burke, Frank D.

    2005-01-01

    OBJECTIVE: There is insufficient evidence regarding the clinical and cost-effectiveness of surgical interventions for carpal tunnel syndrome. This study evaluates the cost, effectiveness and cost-effectiveness of minimally invasive surgery compared with conventional open surgery. PATIENTS AND METHODS: 194 sufferers (208 hands) of carpal tunnel syndrome were randomly assigned to each treatment option. A self-administered questionnaire assessed the severity of patients' symptoms and functional status pre- and postoperatively. Treatment costs were estimated from resource use and hospital financial data. RESULTS: Minimally invasive carpal tunnel decompression is marginally more effective than open surgery in terms of functional status, but not significantly so. Little improvement in symptom severity was recorded for either intervention. Minimally invasive surgery was found to be significantly more costly than open surgery. The incremental cost effectiveness ratio for functional status was estimated to be 197 UK pounds, such that a one percentage point improvement in functioning costs 197 UK pounds when using the minimally invasive technique. CONCLUSIONS: Minimally invasive carpal tunnel decompression appears to be more effective but more costly. Initial analysis suggests that the additional expense for such a small improvement in function and no improvement in symptoms would not be regarded as value-for-money, such that minimally invasive carpal tunnel release is unlikely to be considered a cost-effective alternative to the traditional open surgery procedure. PMID:15720906
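
    The incremental cost-effectiveness ratio reported above is simply the cost difference divided by the effect difference between the two arms; a minimal sketch with hypothetical cost and functional-status figures chosen to reproduce the reported 197 UK pounds per percentage point:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per unit of extra effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-patient figures: the minimally invasive arm costs 197
# pounds more and gains one percentage point of functional status.
print(icer(cost_new=1197.0, cost_old=1000.0, effect_new=51.0, effect_old=50.0))
```

    Whether 197 pounds per percentage point is "value for money" is then a judgment against a willingness-to-pay threshold, which is the comparison the study's conclusion makes.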

  12. Surgery scheduling optimization considering real life constraints and comprehensive operation cost of operating room.

    PubMed

    Xiang, Wei; Li, Chong

    2015-01-01

    The Operating Room (OR) is a core sector of hospital expenditure; its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both underutilization and overutilization. A nested Ant Colony Optimization (nested-ACO) incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. Ten days of manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. Comparison results show the advantage of using the nested-ACO on several measures: OR-related time, nurse-related time, variation in resources' working time, and the end time. The nested-ACO, which considers real-life operation constraints such as the difference between first and following cases, surgery priority, and fixed nurses in the pre/post-operative stages, is proposed to solve the surgery scheduling optimization problem. The results clearly show the benefit of using the nested-ACO in enhancing OR management efficiency and minimizing the comprehensive operation cost.

  13. Energy-aware virtual network embedding in flexi-grid networks.

    PubMed

    Lin, Rongping; Luo, Shan; Wang, Haoran; Wang, Sheng

    2017-11-27

    Network virtualization technology has been proposed to allow multiple heterogeneous virtual networks (VNs) to coexist on a shared substrate network, which increases the utilization of the substrate network. Efficiently mapping VNs on the substrate network is a major challenge on account of the VN embedding (VNE) problem. Meanwhile, energy efficiency has been widely considered in the network design in terms of operation expenses and the ecological awareness. In this paper, we aim to solve the energy-aware VNE problem in flexi-grid optical networks. We provide an integer linear programming (ILP) formulation to minimize the electricity cost of each arriving VN request. We also propose a polynomial-time heuristic algorithm where virtual links are embedded sequentially to keep a reasonable acceptance ratio and maintain a low electricity cost. Numerical results show that the heuristic algorithm performs closely to the ILP for a small size network, and we also demonstrate its applicability to larger networks.

  14. Optimizing Wellfield Operation in a Variable Power Price Regime.

    PubMed

    Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus

    2016-01-01

    Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m³). However, power systems in most countries are moving in the direction of deregulated markets, and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells, rather than minimizing energy use itself. We estimated the EFP of pumped water as a function of wellfield pumping rate (the EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a Stochastic Dynamic Programming (SDP) framework to minimize the total cost of operating the combined wellfield-storage-demand system over a 2-year planning period, based on a time series of observed prices on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between the scenario and a constant-rate pumping benchmark. Minor savings up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings up to 40%. Key factors determining the potential cost savings obtained by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate and the capacity of available storage facilities. © 2015 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
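
    The core of such a price-driven pumping schedule can be sketched as a small dynamic program over discretized storage levels. All numbers below are hypothetical, and the sketch assumes a constant energy footprint per unit of water (the simplest scenario in the abstract), whereas the paper's SDP also treats prices stochastically and uses the full EFP-Q relationship:

```python
# Choose an hourly pumping rate so that demand is always met from storage,
# minimizing total electricity cost under a time-varying power price.
price = [30, 10, 50, 20]      # power price per MWh in each hour (hypothetical)
demand = [2, 2, 2, 2]         # water demand per hour (units)
max_rate = 4                  # maximum pumping per hour
capacity = 6                  # storage capacity
efp = 1.0                     # MWh per unit of water pumped (constant EFP)

INF = float("inf")
# best[s] = minimal cost to reach the current hour with storage level s
best = {0: 0.0}
for h in range(len(price)):
    nxt = {}
    for s, cost in best.items():
        for pump in range(max_rate + 1):
            s2 = s + pump - demand[h]          # storage after this hour
            if 0 <= s2 <= capacity:            # demand met, capacity respected
                c2 = cost + price[h] * efp * pump
                if c2 < nxt.get(s2, INF):
                    nxt[s2] = c2
        # (infeasible pump choices are simply skipped)
    best = nxt

print(min(best.values()))
```

    The optimal policy here shifts pumping into the cheap hours and coasts on storage through the expensive ones, which is exactly the flexibility the paper values against a constant-rate benchmark.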

  15. Linear Matrix Inequality Method for a Quadratic Performance Index Minimization Problem with a class of Bilinear Matrix Inequality Conditions

    NASA Astrophysics Data System (ADS)

    Tanemura, M.; Chida, Y.

    2016-09-01

    Many control system design problems are expressed as the minimization of a performance index under BMI conditions. A minimization problem expressed with LMIs, however, can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied transforming a variety of control design problems into convex minimization problems expressed as LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes design problems of state-feedback gains for switched systems, among others. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.

  16. [Methodologies for estimating the indirect costs of traffic accidents].

    PubMed

    Carozzi, Soledad; Elorza, María Eugenia; Moscoso, Nebel Silvana; Ripari, Nadia Vanina

    2017-01-01

    Traffic accidents generate multiple costs to society, including those associated with the loss of productivity. However, there is no consensus about the most appropriate methodology for estimating those costs. The aim of this study was to review methods for estimating indirect costs applied in crash cost studies. A thematic review of the literature between 1995 and 2012 was carried out in PubMed with the terms cost of illness, indirect cost, road traffic injuries, and productivity loss. For the assessment of costs we used the human capital method, on the basis of the wage income lost during the time of treatment and recovery of patients and caregivers. In the case of premature death or total disability, a discount rate was applied to obtain the present value of lost future earnings. The number of lost years was computed by subtracting the average age of those affected from the life expectancy at birth, excluding the years before entry into economically active life. The interest in minimizing the problem is reflected in the evolution of the implemented methodologies. We expect this review to be useful for efficiently estimating the real indirect costs of traffic accidents.
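
    The human-capital calculation described above discounts each future year's lost wage back to the present; a minimal sketch with a hypothetical wage, horizon, and discount rate:

```python
def pv_lost_earnings(annual_wage, years, discount_rate):
    """Human-capital method: present value of future earnings lost to
    premature death or total disability, discounted year by year."""
    return sum(
        annual_wage / (1 + discount_rate) ** t for t in range(1, years + 1)
    )

# Hypothetical case: 30 years of a 20,000 annual wage, discounted at 3%.
print(round(pv_lost_earnings(20000, 30, 0.03), 2))
```

    The choice of discount rate drives the result heavily, which is one reason the reviewed studies disagree on indirect cost estimates.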

  17. Optimisation des trajectoires d'un systeme de gestion de vol d'avions pour la reduction des couts de vol

    NASA Astrophysics Data System (ADS)

    Sidibe, Souleymane

    The implementation and monitoring of operational flight plans is a major task for the crew of a commercial flight. Its purpose is to set the vertical and lateral trajectories followed by the airplane during the phases of flight: climb, cruise, descent, etc. These trajectories are subject to conflicting economic constraints, the minimization of flight time and of fuel consumed, as well as environmental constraints. In its mission planning task, the crew is assisted by the Flight Management System (FMS), which is used to construct the path to follow and to predict the behaviour of the aircraft along the flight plan. The FMS considered in our research includes an optimization model that only calculates the optimal speed profile minimizing the overall flight cost, synthesized by a cost index criterion, at a steady cruising altitude. However, a model based solely on optimization of the speed profile is not sufficient. It is necessary to expand the current optimization to simultaneous optimization of speed and altitude, in order to determine an optimum cruise altitude that minimizes the overall cost when the path is flown with the optimal speed profile. A new program was therefore developed, based on the dynamic programming method invented by Bellman for solving optimal path problems. In addition, the improvement involves researching new trajectory patterns that integrate ascending cruise segments and use the lateral plane with weather effects: wind and temperature. Finally, for better optimization, the program takes into account the flight envelope constraints of the aircraft that use the FMS.

  18. A similarity score-based two-phase heuristic approach to solve the dynamic cellular facility layout for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Singh, Surya Prakash

    2017-11-01

    The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that the efficient design of a DCFLP reduces the manufacturing cost of products by maintaining the minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, being NP-hard, the DCFLP is very difficult to solve optimally in reasonable time. Therefore, this article proposes a novel similarity score-based two-phase heuristic approach to solve the DCFLP optimally, considering multiple products manufactured over multiple periods in the layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase, which minimizes inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can optimally solve the DCFLP in reasonable time.
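
    The flavor of the first phase, clustering machines into cells by similarity score, can be sketched with a Jaccard score over the parts each machine processes and a greedy threshold rule. The routings, the threshold, and the grouping rule below are all illustrative assumptions, not the paper's exact scoring:

```python
def jaccard(a, b):
    """Similarity score between two machines: fraction of shared parts."""
    return len(a & b) / len(a | b)

# Hypothetical part routings: which parts visit which machine.
machines = {
    "M1": {"p1", "p2", "p3"},
    "M2": {"p1", "p2"},
    "M3": {"p4", "p5"},
    "M4": {"p4", "p5", "p6"},
}

# Greedy sketch: place a machine in the first cell containing a machine
# whose similarity score exceeds the threshold; otherwise open a new cell.
threshold = 0.5
cells = []
for name, parts in sorted(machines.items()):
    for cell in cells:
        if any(jaccard(parts, machines[m]) > threshold for m in cell):
            cell.append(name)
            break
    else:
        cells.append([name])

print(cells)
```

    Machines sharing most of their parts end up in the same cell, which is what keeps intercell material flow, and hence handling cost, low in the second phase.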

  19. MIP models for connected facility location: A theoretical and computational study

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366

  20. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that appears in many settings nowadays. Solving it becomes more challenging as the complexity of preferences, the presence of real-world constraints, and the problem size increase. This study focuses on solving a chambering student-case assignment problem, classified under the project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. Law graduates must read in chambers before they are qualified to become legal counsel, so assigning the chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. A minimum-cost greedy heuristic is employed to construct a feasible initial solution, and the search then proceeds with a simulated annealing algorithm to further improve solution quality. Analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
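
    A minimal version of the simulated annealing search can be sketched as follows. The completion-time matrix is hypothetical, and for brevity the initial solution is the identity permutation rather than the paper's minimum-cost greedy construction:

```python
import math
import random

# Hypothetical completion times: time[s][c] = hours student s needs on case c.
time = [
    [4, 2, 8],
    [4, 3, 7],
    [3, 1, 6],
]

def total_time(assign):  # assign[s] = case handled by student s
    return sum(time[s][c] for s, c in enumerate(assign))

def simulated_annealing(seed=0, temp=10.0, cooling=0.95, steps=500):
    rng = random.Random(seed)
    current = list(range(3))      # identity permutation as initial solution
    best = current[:]
    for _ in range(steps):
        i, j = rng.sample(range(3), 2)
        neighbor = current[:]
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]  # swap two cases
        delta = total_time(neighbor) - total_time(current)
        # accept improvements always, worsenings with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = neighbor
        if total_time(current) < total_time(best):
            best = current[:]
        temp *= cooling           # cool down: fewer uphill moves over time
    return best, total_time(best)

print(simulated_annealing())
```

    The occasional acceptance of worsening swaps at high temperature is what lets the search escape the local optima that trap a pure greedy improvement loop.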

  1. Optimal assignment of workers to supporting services in a hospital

    NASA Astrophysics Data System (ADS)

    Sawik, Bartosz; Mikulik, Jerzy

    2008-01-01

    Supporting services play an important role in health care institutions such as hospitals. This paper presents an application of an operations research model for the optimal allocation of workers among supporting services in a public hospital. The services include logistics, inventory management, financial management, operations management, medical analysis, etc. The optimality criterion is to minimize the operating costs of supporting services subject to specific constraints, which represent conditions for resource allocation in a hospital. The overall problem is formulated as an integer program, known in the literature as the assignment problem, where the decision variables represent the assignment of people to various jobs. The results of computational experiments modeled on real data from a selected Polish hospital are reported.
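
    For small instances, the assignment problem the abstract describes can be solved by exhaustive search over permutations; the cost matrix below is hypothetical, and hospital-scale instances would instead use the Hungarian algorithm or an integer-programming solver:

```python
from itertools import permutations

# Hypothetical cost matrix: cost[w][j] is the cost of assigning worker w
# to supporting-service job j.
cost = [
    [9, 2, 7],
    [6, 4, 3],
    [5, 8, 1],
]

def cheapest_assignment(cost):
    """Exhaustive solution of the assignment problem: try every one-to-one
    mapping of workers to jobs and keep the cheapest (fine for small n)."""
    n = len(cost)
    return min(
        (sum(cost[w][j] for w, j in enumerate(p)), p)
        for p in permutations(range(n))
    )

print(cheapest_assignment(cost))
```

    The returned tuple is (total cost, assignment), where position w of the assignment gives the job of worker w.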

  2. Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization.

    PubMed

    Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar

    2017-03-01

    Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO2 emission. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO2 emission by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Reliability and cost analysis methods

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.

    1991-01-01

    In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. We should not consider either the cost of the subsystem or the expected cost due to subsystem failure separately but should minimize the total of the two costs, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
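
    The trade-off the report describes, minimizing subsystem cost plus expected cost due to subsystem failure, can be illustrated numerically (all figures below are hypothetical, not from the CARRAC models):

```python
def total_expected_cost(subsystem_cost, reliability, failure_cost):
    """Total cost = purchase cost + expected cost due to subsystem failure."""
    return subsystem_cost + (1 - reliability) * failure_cost

# Hypothetical choice mirroring the abstract's question: is the costlier,
# more reliable subsystem worth it when a failure costs 2,000,000?
options = {
    "A (.990)": total_expected_cost(10000, 0.990, 2_000_000),
    "B (.995)": total_expected_cost(14000, 0.995, 2_000_000),
}
print(options)
```

    Here the extra 4,000 for option B buys a 10,000 reduction in expected failure cost, so the costlier subsystem minimizes the total; with a smaller failure cost the ranking would flip, which is the report's point about not treating reliability as an end in itself.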

  4. Hierarchical Control Using Networks Trained with Higher-Level Forward Models

    PubMed Central

    Wayne, Greg; Abbott, L.F.

    2015-01-01

    We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower-levels. PMID:25058706

  5. Screening test recommendations for methicillin-resistant Staphylococcus aureus surveillance practices: A cost-minimization analysis.

    PubMed

    Whittington, Melanie D; Curtis, Donna J; Atherly, Adam J; Bradley, Cathy J; Lindrooth, Richard C; Campbell, Jonathan D

    2017-07-01

    To mitigate methicillin-resistant Staphylococcus aureus (MRSA) infections, intensive care units (ICUs) conduct surveillance through screening patients upon admission followed by adhering to isolation precautions. Two surveillance approaches commonly implemented are universal preemptive isolation and targeted isolation of only MRSA-positive patients. Decision analysis was used to calculate the total cost of universal preemptive isolation and targeted isolation. The screening test used as part of the surveillance practice was varied to identify which screening test minimized inappropriate and total costs. A probabilistic sensitivity analysis was conducted to evaluate the range of total costs resulting from variation in inputs. The total cost of the universal preemptive isolation surveillance practice was minimized when a polymerase chain reaction screening test was used ($82.51 per patient). Costs were $207.60 more per patient when a conventional culture was used due to the longer turnaround time and thus higher isolation costs. The total cost of the targeted isolation surveillance practice was minimized when chromogenic agar 24-hour testing was used ($8.54 per patient). Costs were $22.41 more per patient when polymerase chain reaction was used. For ICUs that preemptively isolate all patients, the use of a polymerase chain reaction screening test is recommended because it can minimize total costs by reducing inappropriate isolation costs. For ICUs that only isolate MRSA-positive patients, the use of chromogenic agar 24-hour testing is recommended to minimize total costs. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
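
    The structure of the cost-minimization comparison can be sketched as follows (hypothetical costs and turnaround times, not the study's inputs): under universal preemptive isolation, every patient is isolated until the screen returns, so a faster test trades a higher test price for fewer isolation-days.

```python
# Sketch of the decision-analysis structure only; the test costs,
# isolation cost, and turnaround times below are hypothetical.

def universal_cost(test_cost, isolation_cost_per_day, turnaround_days):
    """Per-patient cost when every admission is isolated until results return."""
    return test_cost + isolation_cost_per_day * turnaround_days

pcr     = universal_cost(test_cost=40.0, isolation_cost_per_day=30.0, turnaround_days=1.0)
culture = universal_cost(test_cost=10.0, isolation_cost_per_day=30.0, turnaround_days=3.0)

# The slower, cheaper culture loses once isolation-days dominate.
print(pcr, culture)   # 70.0 100.0
```

    This mirrors the study's finding for the universal strategy: the expensive fast test minimizes total cost because turnaround time drives the isolation cost.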

  6. Robust guaranteed cost tracking control of quadrotor UAV with uncertainties.

    PubMed

    Xu, Zhiwei; Nian, Xiaohong; Wang, Haibo; Chen, Yinsheng

    2017-07-01

    In this paper, a robust guaranteed cost controller (RGCC) is proposed for a quadrotor UAV system with uncertainties to address the set-point tracking problem. A sufficient condition for the existence of the RGCC is derived via the Lyapunov stability theorem. The designed RGCC not only guarantees that the whole closed-loop system is asymptotically stable but also ensures that the quadratic performance index of the closed-loop system has an upper bound irrespective of all admissible parameter uncertainties. Then, an optimal robust guaranteed cost controller is developed to minimize this upper bound on the performance level. Simulation results verify that the presented control algorithms achieve small overshoot and short settling time, with which the quadrotor is able to perform the set-point tracking task well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Adjoint shape optimization for fluid-structure interaction of ducted flows

    NASA Astrophysics Data System (ADS)

    Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.

    2018-03-01

    Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by the consequent use of the formal Lagrange calculus. Solutions of both primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.
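
    The adjoint idea behind the surface sensitivity can be illustrated on a one-dimensional stand-in (a sketch under stated assumptions, not the coupled FSI equations): for a scalar state equation and a quadratic cost, the adjoint variable supplies the gradient used by the steepest-descent loop.

```python
# One-dimensional adjoint sketch (hypothetical toy problem, not the paper's
# fluid-structure system): state equation A*x = u, cost
# J = (x - x_t)^2 + r*u^2. The adjoint gives dJ/du for steepest descent.

A, X_TARGET, R = 2.0, 3.0, 0.05

def solve_state(u):
    """Primal problem: solve A x = u for the state x."""
    return u / A

def gradient(u):
    x = solve_state(u)
    lam = 2.0 * (x - X_TARGET) / A      # adjoint problem: A* lam = dJ/dx
    return lam + 2.0 * R * u            # total derivative dJ/du

u = 0.0
for _ in range(200):                    # steepest descent on the control u
    u -= 0.5 * gradient(u)

print(round(u, 3), round(solve_state(u), 3))   # → 5.0 2.5
```

    The converged values match the analytic optimum (set 0.6u - 3 = 0); in the paper the same pattern is applied with PDE solves in place of the scalar state and adjoint equations.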

  8. [Mathematical model of technical equipment of a clinical-diagnostic laboratory].

    PubMed

    Bukin, S I; Busygin, D V; Tilevich, M E

    1990-01-01

    The paper is concerned with the problems of technical equipment of standard clinical-diagnostic laboratories (CDL) in this country. The authors suggest a mathematical model that minimizes expenditures on laboratory studies. The model enables the following problems to be solved: to issue scientifically based recommendations for the technical equipment of CDL; to validate the medico-technical requirements for newly devised items; to select the optimum types of uniform items; to define optimal technical decisions at the design stage; to determine the laboratory assistant's labour productivity and the cost of individual investigations; and to compute the medical laboratory engineering requirements for treatment and prophylactic institutions of this country.

  9. Hybrid Pareto artificial bee colony algorithm for multi-objective single machine group scheduling problem with sequence-dependent setup times and learning effects.

    PubMed

    Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao

    2016-01-01

    Group scheduling is significant for an efficient and cost-effective production system. However, setup times exist between groups, and these should be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed; however, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, the current research considers a single machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC), incorporating some steps of a genetic algorithm, is proposed to obtain Pareto solutions for this problem. Furthermore, five sizes of test problems (small, small-medium, medium, large-medium, large) are solved using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGA-II) and particle swarm optimization (PSO). Results indicate that HPABC outperforms SPEA2, NSGA-II and PSO, giving better Pareto optimal solutions in terms of diversity and quality for almost all instances of the different problem sizes.
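
    The Pareto-dominance filter at the heart of any such multi-objective method can be sketched in a few lines (the objective values below are hypothetical, not from the paper's instances):

```python
# Minimal Pareto-dominance filter for two minimized objectives, e.g.
# (makespan, total weighted tardiness). Solution values are hypothetical.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (makespan, weighted tardiness) for some hypothetical schedules:
sols = [(90, 40), (80, 55), (95, 35), (85, 60), (80, 50)]
print(pareto_front(sols))   # [(90, 40), (95, 35), (80, 50)]
```

    Algorithms such as HPABC, SPEA2 and NSGA-II maintain exactly this kind of non-dominated set, differing in how new candidate solutions are generated and how diversity along the front is preserved.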

  10. A joint economic lot-sizing problem with fuzzy demand, defective items and environmental impacts

    NASA Astrophysics Data System (ADS)

    Jauhari, W. A.; Laksono, P. W.

    2017-11-01

    In this paper, a joint economic lot-sizing problem involving a vendor and a buyer is proposed. The buyer orders products from the vendor to fulfill end-customer demand; the vendor produces a batch of products and delivers it to the buyer. The vendor's production process is imperfect and yields a number of defective products. The production rate is assumed to be adjustable in order to control the vendor's output. The buyer adopts a continuous review policy to manage his inventory level. In addition, the average annual demand is considered fuzzy rather than constant. The proposed model contributes to the current inventory literature by allowing the inclusion of fuzzy annual demand, imperfect production, emission cost, and an adjustable production rate. The model also considers the carbon emission cost resulting from transportation activity. A mathematical model is developed to obtain the optimal order quantity, safety factor and number of deliveries such that the joint total cost is minimized. Furthermore, an iterative procedure is suggested to determine the optimal solutions.

  11. Logistical management and private sector involvement in reducing the cost of municipal solid waste collection service in the Tubas area of the West Bank.

    PubMed

    El-Hamouz, Amer M

    2008-01-01

    This paper addresses the problems of the municipal solid waste (MSW) collection system in the Tubas district of Palestine. More specifically, it addresses the often-voiced concerns pertaining to low efficiency as well as environmental problems. This was carried out through a systematic methodological approach. The paper illustrates how a private company applied a logistical management strategy, by rescheduling the MSW collection system, reallocating street solid waste containers and minimizing vehicle routing. The way in which the MSW collection timetable was rescheduled decreased the operating expenses and thus reduced MSW collection costs. All data needed to reschedule the collection timetable and optimize vehicle routing were based on actual field measurements. The new MSW collection timetable introduced by a private company was monitored for a period of a month. The new system resulted in an improvement in the MSW collection system by reducing the collection cost to a level that is socially acceptable (US dollars 3.75/family/month), as well as economically and environmentally sound.

  12. Multi-objective reverse logistics model for integrated computer waste management.

    PubMed

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2006-12-01

    This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
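
    The Monte Carlo treatment of uncertain waste quantities can be sketched as follows (the distributions and unit cost are hypothetical placeholders, not the Delhi data): sample the source quantities repeatedly, evaluate the cost of a fixed allocation, and report the resulting spread.

```python
# Sketch of Monte Carlo propagation of uncertain waste quantities into
# cost. The tonnages, spread, and unit cost are hypothetical.

import random
import statistics

random.seed(1)

COST_PER_TONNE = 120.0   # hypothetical processing cost per tonne

def simulate(n_trials=10_000):
    costs = []
    for _ in range(n_trials):
        # three waste sources, each varying ±20% around a nominal tonnage
        qty = sum(random.uniform(0.8, 1.2) * nominal for nominal in (50, 30, 20))
        costs.append(qty * COST_PER_TONNE)
    return statistics.mean(costs), statistics.stdev(costs)

mean_cost, sd_cost = simulate()
print(round(mean_cost), round(sd_cost))
```

    In the paper this sampling wraps an integer linear program, so each trial re-optimizes the facility allocation rather than pricing a fixed one; the sketch shows only the uncertainty-propagation step.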

  13. Municipal solid waste transportation optimisation with vehicle routing approach: case study of Pontianak City, West Kalimantan

    NASA Astrophysics Data System (ADS)

    Kamal, M. A.; Youlla, D.

    2018-03-01

    Municipal solid waste (MSW) transportation in Pontianak City has become an issue that needs to be tackled by the relevant agencies. The MSW transportation service in Pontianak City currently requires very large resources, especially in vehicle usage. Increasing the number of fleets has not been able to raise service levels, while garbage volume grows every year along with population growth. In this research, a vehicle routing optimization approach was used to find optimal and efficient vehicle routes that minimize the cost of transporting garbage from several Temporary Garbage Dumps (TGD) to the Final Garbage Dump (FGD). One of the problems of MSW transportation is that there is a TGD which exceeds the vehicle capacity and must be visited more than once. The optimal computation results suggest that the municipal authorities use only 3 of the 5 vehicles provided, with a total minimum cost of IDR 778,870. The computation time to search for the optimal route and minimal cost is very long; this is influenced by the number of constraints and by the decision variables being integer-valued.

  14. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip; Garg, Sanjay; Holowecky, Brian

    1992-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  15. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Garg, Sanjay; Holowecky, Brian R.

    1993-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  16. Discrete Optimization Model for Vehicle Routing Problem with Scheduling Side Constraints

    NASA Astrophysics Data System (ADS)

    Juliandri, Dedy; Mawengkang, Herman; Bu'ulolo, F.

    2018-01-01

    The Vehicle Routing Problem (VRP) is an important element of many logistic systems which involve the routing and scheduling of vehicles from a depot to a set of customer nodes. This is a hard combinatorial optimization problem whose objective is to find an optimal set of routes used by a fleet of vehicles to serve the demands of a set of customers; the vehicles are required to return to the depot after serving the customers' demand. The problem incorporates time windows, fleet and driver scheduling, and pick-up and delivery over the planning horizon. The goal is to determine the fleet and driver schedules and the routing policies of the vehicles, with the objective of minimizing the overall cost of all routes over the planning horizon. We model the problem as a mixed-integer linear program and develop a combination of heuristics and an exact method for solving the model.
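
    The paper solves the model as a mixed-integer program; for intuition only, a brute-force search over a toy single-vehicle instance (hypothetical distances) shows the route-cost objective being minimized:

```python
# Toy routing instance: depot 0, three customers, symmetric hypothetical
# distances. Real VRPs use MIP solvers or heuristics; brute force works
# here only because the instance is tiny.

from itertools import permutations

DIST = {
    (0, 1): 4, (0, 2): 5, (0, 3): 3,
    (1, 2): 2, (1, 3): 6, (2, 3): 4,
}

def d(a, b):
    return 0 if a == b else DIST[(min(a, b), max(a, b))]

def route_cost(order):
    """Cost of leaving the depot, visiting customers in order, returning."""
    stops = (0, *order, 0)
    return sum(d(a, b) for a, b in zip(stops, stops[1:]))

best = min(permutations([1, 2, 3]), key=route_cost)
print(best, route_cost(best))   # → (1, 2, 3) 13
```

    The time windows, fleet assignment and driver scheduling in the paper add side constraints on top of this basic route-cost objective, which is what makes the mixed-integer formulation necessary.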

  17. Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles

    NASA Astrophysics Data System (ADS)

    Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi

    2012-09-01

    In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index, to be minimized, was defined as the cost function. The resulting problem was a two-point boundary value problem (TPBVP). Genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. Applying the Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving some energy sources, was discussed, and near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.
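
    A minimal genetic-algorithm loop of the kind applied to the TPBVP can be sketched as follows; the six-dimensional vehicle model is omitted, and a toy two-parameter cost stands in for the energy performance index:

```python
# Minimal GA sketch: selection, averaging crossover, Gaussian mutation.
# The cost function is a hypothetical stand-in, not the vehicle's energy
# performance index.

import random

random.seed(0)

def cost(x):  # toy stand-in; minimum at x = [1.0, -2.0]
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def ga(pop_size=40, generations=60, mut=0.3):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]                          # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            child = [c + random.gauss(0, mut) for c in child] # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print([round(v, 2) for v in best])   # near [1.0, -2.0]
```

    PSO and ACO replace the crossover/mutation step with velocity updates and pheromone-guided construction respectively, but all three search the same decision space for a minimum-cost solution.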

  18. Feedback control for unsteady flow and its application to the stochastic Burgers equation

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon; Temam, Roger; Moin, Parviz; Kim, John

    1993-01-01

    The study applies mathematical methods of control theory to the problem of controlling fluid flow, with the long-range objective of developing effective methods for the control of turbulent flows. Model problems are employed, through the formalism and language of control theory, to show how to cast the problem of controlling turbulence as a problem in optimal control theory. Methods of the calculus of variations, via the adjoint state and gradient algorithms, are used to present a suboptimal control and feedback procedure for stationary and time-dependent problems. Two types of controls are investigated: distributed and boundary controls. Several cases of both controls are numerically simulated to investigate the performance of the control algorithm. Most cases considered show significant reductions of the costs to be minimized. The dependence of the control algorithm on the time-discretization method is discussed.

  19. Make or buy decision model with multi-stage manufacturing process and supplier imperfect quality

    NASA Astrophysics Data System (ADS)

    Pratama, Mega Aria; Rosyidi, Cucuk Nur

    2017-11-01

    This research develops a make-or-buy decision model that considers supplier imperfect quality. The model can help companies make the right make-or-buy decision for a component, with the best quality and the least cost, in a multistage manufacturing process. Imperfect quality is one of the cost components to be minimized in this model. A component with imperfect quality is not necessarily defective; it can still be reworked and used in assembly. This research also provides a numerical example and a sensitivity analysis to show how the model works. We use simulation, aided by Crystal Ball, to solve the numerical problem. The sensitivity analysis shows that the percentage of imperfect items generally does not affect the model significantly, and the model is not sensitive to changes in these parameters; this is because the imperfect-quality costs are small compared with the overall total cost components.

  20. A holistic framework for design of cost-effective minimum water utilization network.

    PubMed

    Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N

    2008-07-01

    Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for synthesis of MWR network, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process changes options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion.

  1. Thermal Stability of Al2O3/Silicone Composites as High-Temperature Encapsulants

    NASA Astrophysics Data System (ADS)

    Yao, Yiying

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  2. Optimal Control of Distributed Energy Resources using Model Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.

    2012-07-22

    In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in the power output of diesel generators, minimizing costs associated with low battery life of energy storage, and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem, and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
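
    The receding-horizon mechanics of MPC can be sketched on a drastically reduced model (one battery state, hypothetical net-load forecast and cost weights, nothing from the paper's microgrid): at each step, search over short-horizon action sequences, apply only the first action, and re-plan.

```python
# Toy receding-horizon (MPC) loop: diesel covers net load plus battery
# charging; cost = quadratic fuel cost + battery-wear penalty. All data
# are hypothetical. Brute force over a discrete action set replaces the
# numerical optimizer a real MPC would use.

from itertools import product

FORECAST = [3.0, 4.0, 2.0, 5.0, 1.0, 3.0]   # hypothetical net-load forecast
ACTIONS  = [-2.0, -1.0, 0.0, 1.0, 2.0]      # battery power (+ = charging)

def stage_cost(load, action):
    diesel = load + action                   # diesel meets load + charging
    return diesel ** 2 + 0.1 * action ** 2   # fuel cost + wear penalty

def mpc_step(soc0, t, horizon=3):
    """Plan over a short horizon, return only the first action."""
    window = FORECAST[t:t + horizon]
    best_plan, best_cost = None, float("inf")
    for plan in product(ACTIONS, repeat=len(window)):
        soc, total, feasible = soc0, 0.0, True
        for load, a in zip(window, plan):
            soc += a
            if not 0.0 <= soc <= 4.0:        # battery energy limits
                feasible = False
                break
            total += stage_cost(load, a)
        if feasible and total < best_cost:
            best_plan, best_cost = plan, total
    return best_plan[0]

soc, schedule = 2.0, []
for t in range(len(FORECAST)):               # closed loop: apply, re-plan
    a = mpc_step(soc, t)
    soc += a
    schedule.append(a)
print(schedule)
```

    The closed-loop structure, re-solving a short-horizon problem at every step, is what lets MPC absorb forecast errors that would degrade an open-loop look-ahead dispatch.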

  3. Productivity improvement using industrial engineering tools

    NASA Astrophysics Data System (ADS)

    Salaam, H. A.; How, S. B.; Faisae, M. F.

    2012-09-01

    Minimizing the number of defects is important to any company, since defects influence outputs and profits. The aim of this paper is to study the implementation of industrial engineering tools in a company manufacturing recycled-paper boxes. The study starts with reading the standard operating procedures and analyzing the process flow to understand how the paper boxes are manufactured. At the same time, observations were made at the production line to identify problems occurring there. Using a check sheet, defect data from each station were collected and then analyzed using a Pareto chart. From the chart, the glue workstation shows the highest number of defects. Based on observation at the glue workstation, the existing gluing method was inappropriate because the operator used too much glue. Then, using a cause-and-effect diagram, the root cause of the problem was identified and solutions were proposed. Three solutions were suggested to overcome this problem. The cost reduction for each solution was calculated, and the best solution is to use three hair driers to dry the sticky glue, which produces only 6.4 defects per hour at a cost of RM 0.0224.
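
    The Pareto-chart computation used to rank defect sources can be sketched as follows (the defect counts per workstation are hypothetical, not the company's data):

```python
# Pareto analysis: rank defect sources by count and accumulate the
# percentage of total defects. Counts below are hypothetical.

defects = {"glue": 48, "cutting": 12, "printing": 7, "folding": 5, "packing": 3}

total = sum(defects.values())
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for station, count in ranked:
    cumulative += count
    print(f"{station:9s} {count:3d}  {100 * cumulative / total:5.1f}%")
```

    The chart's point is the "vital few": in this made-up data the top station alone accounts for well over half the defects, which is why the improvement effort focuses there first.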

  4. Optimal transfers between libration-point orbits in the elliptic restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Hiday, Lisa Ann

    1992-09-01

    A strategy is formulated to design optimal impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L(1) libration point of the Sun-Earth/Moon barycenter system. Two methods of constructing nominal transfers, for which the fuel cost is to be minimized, are developed; both inferior and superior transfers between two halo orbits are considered. The necessary conditions for an optimal transfer trajectory are stated in terms of the primer vector. The adjoint equation relating reference and perturbed trajectories in this formulation of the elliptic restricted three-body problem is shown to be distinctly different from that obtained in the analysis of trajectories in the two-body problem. Criteria are established whereby the cost on a nominal transfer can be improved by the addition of an interior impulse or by the implementation of coasting arcs in the initial and final orbits. The necessary conditions for the local optimality of a time-fixed transfer trajectory possessing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. The optimality of a time-free transfer containing coasting arcs is surmised by examination of the slopes at the endpoints of a plot of the magnitude of the primer vector over the duration of the transfer path. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The position and timing of each interior impulse applied to a time-fixed transfer, as well as the direction and length of coasting periods implemented on a time-free transfer, are specified by the unconstrained minimization of the appropriate variation in cost utilizing a multivariable search technique.
    Although optimal solutions in some instances are elusive, the time-fixed and time-free optimization algorithms prove to be very successful in diminishing costs on nominal transfer trajectories. The inclusion of coasting arcs on time-free superior and inferior transfers results in significant modification of the transfer time of flight, caused by shifts in departure and arrival locations on the halo orbits.

  5. Energy aware path planning in complex four dimensional environments

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Anjan

    This dissertation addresses the problem of energy-aware path planning for small autonomous vehicles. While small autonomous vehicles can perform missions that are too risky (or infeasible) for larger vehicles, the missions are limited by the amount of energy that can be carried on board the vehicle. Path planning techniques that either minimize energy consumption or exploit energy available in the environment can thus increase range and endurance. Path planning is complicated by significant spatial (and potentially temporal) variations in the environment. While the main focus is on autonomous aircraft, this research also addresses autonomous ground vehicles. Range and endurance of small unmanned aerial vehicles (UAVs) can be greatly improved by utilizing energy from the atmosphere. Wind can be exploited to minimize energy consumption of a small UAV. But wind, like any other atmospheric component, is a space- and time-varying phenomenon. To effectively use wind for long-range missions, both exploration and exploitation of wind are critical. This research presents a kinematics-based tree algorithm which efficiently handles the four-dimensional (three spatial and time) path planning problem. The Kinematic Tree algorithm provides a sequence of waypoints, airspeeds, heading and bank angle commands for each segment of the path. The planner is shown to be resolution-complete and computationally efficient. Global optimality of the cost function cannot be claimed, as energy is gained from the atmosphere, making the cost function inadmissible. However, the Kinematic Tree is shown to be optimal up to resolution if the cost function is admissible. Simulation results show the efficacy of this planning method for a glider in complex real wind data and verify that the planner is able to extract energy from the atmosphere, enabling long-range missions.
    The Kinematic Tree planning framework, developed to minimize energy consumption of UAVs, is also applied to path planning for ground robots. In the traditional path planning problem the focus is on obstacle avoidance and navigation. The optimal variant of the algorithm, named Kinematic Tree*, is shown to find optimal paths that reach the destination while avoiding obstacles. A more challenging path planning scenario arises in complex terrain; this research shows how the Kinematic Tree* algorithm can be extended to find minimum-energy paths for a ground vehicle in difficult mountainous terrain.

  6. The school bus routing and scheduling problem with transfers

    PubMed Central

    Doerner, Karl F.; Parragh, Sophie N.

    2015-01-01

    In this article, we study the school bus routing and scheduling problem with transfers arising in the field of nonperiodic public transportation systems. It deals with the transportation of pupils from home to school in the morning, taking into account the possibility that pupils may change buses. Allowing transfers has several consequences. On the one hand, it allows more flexibility in the bus network structure and can, therefore, help to reduce operating costs. On the other hand, transfers have an impact on the service level: the perceived service quality is lower due to the existence of transfers; however, at the same time, user ride times may be reduced and, thus, transfers may also have a positive impact on service quality. The main objective is the minimization of the total operating costs. We develop a heuristic solution framework to solve this problem and compare it with two solution concepts that do not consider transfers. The impact of transfers on the service level in terms of time loss (or user ride time) and the number of transfers is analyzed. Our results show that allowing transfers reduces total operating costs significantly while average and maximum user ride times are comparable to solutions without transfers. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 65(2), 180–203 2015 PMID:28163329

  7. Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Deyu

    2012-03-01

    In this paper a hybrid process of modeling and optimization, which integrates a support vector machine (SVM) and a genetic algorithm (GA), is introduced to reduce the high time cost of structural optimization of ships. The SVM, which is rooted in statistical learning theory and is an approximate implementation of structural risk minimization, provides good generalization performance in metamodeling the input-output relationship of real problems and consequently cuts down the high time cost of analyses such as FEM runs. The GA, as a powerful optimization technique, has remarkable advantages for problems that can hardly be optimized with common gradient-based methods, which makes it suitable for optimizing models built by the SVM. Based on the SVM-GA strategy, optimization of structural scantlings in the midship of a very large crude carrier (VLCC) was carried out according to the direct strength assessment method in the common structural rules (CSR), demonstrating the high efficiency of SVM-GA in optimizing ship structural scantlings under heavy computational complexity. The time cost of this optimization with SVM-GA was sharply reduced: many more loops were processed within a small amount of time and the design was improved remarkably.
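The metamodel-plus-GA strategy can be illustrated roughly: sample the design space, fit a cheap surrogate to a handful of "expensive" evaluations, then minimize the surrogate with a small genetic algorithm. For self-containment, a Gaussian RBF interpolant stands in for the paper's SVM regressor, and `expensive_fem` is a hypothetical stand-in for an FEM run; none of this is the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_fem(x):
    """Toy objective standing in for a costly FEM evaluation."""
    return np.sum((x - 0.3) ** 2, axis=-1)

# 1. Sample the design space and evaluate the "expensive" model once.
X = rng.uniform(0, 1, size=(40, 2))
y = expensive_fem(X)

# 2. Fit a Gaussian RBF surrogate (ridge-regularized interpolation).
def fit_rbf(X, y, gamma=10.0, lam=1e-6):
    K = np.exp(-gamma * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
    w = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Z: np.exp(
        -gamma * np.sum((Z[:, None] - X[None]) ** 2, axis=-1)) @ w

surrogate = fit_rbf(X, y)

# 3. Minimize the cheap surrogate with a tiny genetic algorithm.
def ga_minimize(f, dim=2, pop=60, gens=80, sigma=0.1):
    P = rng.uniform(0, 1, size=(pop, dim))
    for _ in range(gens):
        fit = f(P)
        elite = P[np.argsort(fit)[: pop // 2]]                   # selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        alpha = rng.uniform(size=(pop, 1))
        P = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
        P = np.clip(P + rng.normal(0, sigma, P.shape), 0, 1)     # mutation
    return P[np.argmin(f(P))]

best = ga_minimize(surrogate)
```

The design point: every GA generation queries only the surrogate, so thousands of candidate designs are screened at the cost of a few dozen real analyses.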

  8. A cost-minimization analysis in minimally invasive spine surgery using a national cost scale method.

    PubMed

    Maillard, Nicolas; Buffenoir-Billet, Kevin; Hamel, Olivier; Lefranc, Benoit; Sellal, Olivier; Surer, Nathalie; Bord, Eric; Grimandi, Gael; Clouet, Johann

    2015-03-01

    The last decade has seen the emergence of minimally invasive spine surgery. However, there is still no consensus on whether percutaneous osteosynthesis (PO) or open surgery (OS) is more cost-effective in the treatment of traumatic fractures and degenerative lesions. The objective of this study is to compare the clinical results and hospitalization costs of OS and PO for degenerative lesions and thoraco-lumbar fractures. This cost-minimization study was performed in patients undergoing OS or PO over a 36-month period. Patient data, surgical and clinical results, as well as cost data were collected and analyzed. The financial costs were calculated based on diagnosis-related group reimbursement and the French national cost scale, enabling the evaluation of charges for each hospital stay. 46 patients were included in this cost analysis: 24 patients underwent OS and 22 underwent PO. No significant difference was found between surgical groups in terms of patients' clinical features and outcomes during hospitalization. The use of PO was significantly associated with a decreased length of stay (LOS). The cost-minimization analysis revealed that PO is associated with decreased hospital charges and shorter LOS, with clinical outcomes and medical device costs similar to OS. This medico-economic study has led to the preferential use of minimally invasive surgical techniques. The study also illustrates the discrepancy between national health system reimbursement and real hospital charges. The medico-economic dimension is becoming critical in the current context of sustainable health resource allocation. Copyright © 2015 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  9. Electricity system expansion studies to consider uncertainties and interactions in restructured markets

    NASA Astrophysics Data System (ADS)

    Jin, Shan

    This dissertation concerns power system expansion planning under different market mechanisms. The thesis follows a three-paper format, in which each paper emphasizes a different perspective. The first paper investigates the impact of market uncertainties on a long-term centralized generation expansion planning problem. The problem is modeled as a two-stage stochastic program with uncertain fuel prices and demands, which are represented as probabilistic scenario paths in a multi-period tree. Two measures, expected cost (EC) and Conditional Value-at-Risk (CVaR), are used to minimize, respectively, the total expected cost among scenarios and the risk of incurring high costs in unfavorable scenarios. We sample paths from the scenario tree to reduce the problem scale and determine a sufficient number of scenarios by computing confidence intervals on the objective values. The second paper studies an integrated electricity supply system including generation, transmission, and fuel transportation with a restructured wholesale electricity market. This integrated system expansion problem is modeled as a bi-level program in which a centralized system expansion decision is made in the upper level and the operational decisions of multiple market participants are made in the lower level. The difficulty of solving a bi-level programming problem to global optimality is discussed and three problem relaxations obtained by reformulation are explored. The third paper solves a more realistic market-based generation and transmission expansion problem. It focuses on interactions between a centralized transmission expansion decision and decentralized generation expansion decisions. It allows each generator to make its own strategic investment and operational decisions both in response to a transmission expansion decision and in anticipation of a market price settled by an Independent System Operator (ISO) market clearing problem.
The model poses a complicated tri-level structure including an equilibrium problem with equilibrium constraints (EPEC) sub-problem. A hybrid iterative algorithm is proposed to solve the problem efficiently and reliably.
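The two risk measures used in the first paper can be illustrated directly on sampled scenario costs. The sketch below computes the expected cost and the Conditional Value-at-Risk of a discrete scenario set; in the dissertation's setting the costs themselves would come from solving each second-stage problem.

```python
import numpy as np

def expected_cost(costs, probs):
    """Probability-weighted average cost over scenarios."""
    return float(np.dot(costs, probs))

def cvar(costs, probs, alpha=0.95):
    """Conditional Value-at-Risk: expected cost in the worst (1 - alpha) tail."""
    order = np.argsort(costs)[::-1]            # worst scenarios first
    c, p = np.asarray(costs, float)[order], np.asarray(probs, float)[order]
    tail = 1.0 - alpha
    acc, val = 0.0, 0.0
    for ci, pi in zip(c, p):
        take = min(pi, tail - acc)             # probability mass absorbed from this scenario
        val += ci * take
        acc += take
        if acc >= tail - 1e-12:
            break
    return val / tail
```

With costs [10, 20, 100] and probabilities [0.5, 0.45, 0.05], the expected cost is 19 while CVaR at alpha = 0.95 is 100: minimizing CVaR instead of EC steers the expansion plan away from the rare but expensive scenario.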

  10. The cost-constrained traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
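To make the "selection and sequencing" combination concrete, here is a hypothetical value-greedy insertion heuristic for the CCTSP, illustrative only and not the report's algorithm: repeatedly insert the node with the best value-to-added-cost ratio (over all insertion positions) while the tour cost stays within the budget.

```python
def tour_cost(tour, dist):
    """Cost of the closed tour visiting the nodes in order."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def greedy_cctsp(values, dist, budget, start=0):
    """Greedy insertion: maximize collected value under a tour-cost budget."""
    tour = [start]
    remaining = set(range(len(values))) - {start}
    while remaining:
        best, best_ratio, best_tour = None, -1.0, None
        for v in remaining:
            for i in range(len(tour)):           # try every insertion position
                t = tour[:i + 1] + [v] + tour[i + 1:]
                if tour_cost(t, dist) <= budget:
                    added = tour_cost(t, dist) - tour_cost(tour, dist)
                    ratio = values[v] / (added + 1e-9)
                    if ratio > best_ratio:
                        best, best_ratio, best_tour = v, ratio, t
        if best is None:                         # no feasible insertion left
            break
        tour = best_tour
        remaining.discard(best)
    return tour, sum(values[v] for v in tour)
```

Scoring by value per unit of added cost, rather than raw value, loosely echoes the report's observation that a cheaply reachable low-value node can beat an isolated high-value one; the randomized repetitions described above would rerun this construction with perturbed choices.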

  11. Collective neurodynamic optimization for economic emission dispatch problem considering valve point effect in microgrid.

    PubMed

    Wang, Tiancai; He, Xing; Huang, Tingwen; Li, Chuandong; Zhang, Wei

    2017-09-01

    The economic emission dispatch (EED) problem aims to control generation cost and reduce the impact of waste gases on the environment; it has multiple constraints and nonconvex objectives. To solve it, a collective neurodynamic optimization (CNO) method, which combines a heuristic approach with a projection neural network (PNN), is applied to optimize the scheduling of an electrical microgrid with ten thermal generators and minimize the sum of generation and emission costs. Because the objective function has nondifferentiable points when the valve point effect (VPE) is considered, a differential inclusion approach is employed in the PNN model to deal with them. Under certain conditions, the local optimality and convergence of the dynamic model for the optimization problem are analyzed. The capability of the algorithm is verified in a complicated situation where transmission loss and prohibited operating zones are considered. In addition, the dynamic variation of load power at the demand side is considered and the optimal scheduling of generators within 24 h is described. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Environment-Aware Production Scheduling for Paint Shops in Automobile Manufacturing: A Multi-Objective Optimization Approach

    PubMed Central

    Zhang, Rui

    2017-01-01

    The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in literature on large instances with up to 200 cars. PMID:29295603

  13. Environment-Aware Production Scheduling for Paint Shops in Automobile Manufacturing: A Multi-Objective Optimization Approach.

    PubMed

    Zhang, Rui

    2017-12-25

    The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in literature on large instances with up to 200 cars.

  14. Costs and benefits of different methods of esophagectomy for esophageal cancer.

    PubMed

    Yanasoot, Alongkorn; Yolsuriyanwong, Kamtorn; Ruangsin, Sakchai; Laohawiriyakamol, Supparerk; Sunpaweravong, Somkiat

    2017-01-01

    Background: A minimally invasive approach to esophagectomy is being used increasingly, but concerns remain regarding its feasibility, safety, cost, and outcomes. We performed an analysis of the costs and benefits of minimally invasive, hybrid, and open esophagectomy approaches for esophageal cancer surgery. Methods: The data of 83 consecutive patients who underwent a McKeown's esophagectomy at Prince of Songkla University Hospital between January 2008 and December 2014 were analyzed. Open esophagectomy was performed in 54 patients, minimally invasive esophagectomy in 13, and hybrid esophagectomy in 16. There were no differences in patient characteristics among the 3 groups. Minimally invasive esophagectomy was undertaken via a thoracoscopic-laparoscopic approach, hybrid esophagectomy via a thoracoscopic-laparotomy approach, and open esophagectomy by a thoracotomy-laparotomy approach. Results: Minimally invasive esophagectomy required a longer operative time than hybrid or open esophagectomy (p = 0.02), but these patients reported less postoperative pain (p = 0.01). There were no significant differences in blood loss, intensive care unit stay, hospital stay, or postoperative complications among the 3 groups. Minimally invasive esophagectomy incurred higher operative and surgical material costs than hybrid or open esophagectomy (p = 0.01), but there were no significant differences in inpatient care and total hospital costs. Conclusion: Minimally invasive esophagectomy resulted in the least postoperative pain but the greatest operative cost and longest operative time. Open esophagectomy was associated with the lowest operative cost and shortest operative time but the most postoperative pain. Hybrid esophagectomy had a shorter learning curve while sharing the advantages of minimally invasive esophagectomy.

  15. Inexpensive cross-linked polymeric separators made from water soluble polymers

    NASA Technical Reports Server (NTRS)

    Hsu, L. C.; Sheibley, D. W.

    1979-01-01

    Polyvinyl alcohol (PVA) crosslinked chemically with aldehyde reagents produces membranes which demonstrate oxidation resistance, dimensional stability, low ionic resistivity, low zincate diffusivity, and low zinc dendrite penetration rate which make them suitable for use as alkaline battery separators. They are intrinsically low in cost and environmental health and safety problems associated with commercial production appear minimal. Preparation, property measurements, and cell test results in Ni/Zn and Ag/Zn cells are described and discussed.

  16. Finite difference schemes for long-time integration

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1993-01-01

    Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times four or more times longer than those of similar previously studied schemes. A similar approach was used to obtain improved integration schemes.

  17. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  18. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, we propose a motion-compensated total variation regularization approach that exploits the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We then construct a cost function for the reconstruction and minimize it using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that introducing additional temporal correlation along the phase direction improves 4D-CBCT image quality.
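The total-variation term at the heart of this regularizer can be illustrated in one dimension: gradient descent on a least-squares data fit plus a smoothed TV penalty, sum |u[i+1] - u[i]|. The parameters and the smoothing are illustrative assumptions; the paper's variable-splitting algorithm on motion-compensated 4D volumes is far more involved.

```python
import numpy as np

def tv_denoise(f, lam=0.3, step=0.02, iters=2000, eps=1e-3):
    """Minimize ||u - f||^2 + lam * sum sqrt(diff^2 + eps) by gradient descent."""
    u = np.asarray(f, float).copy()
    for _ in range(iters):
        diff = np.diff(u)
        s = diff / np.sqrt(diff**2 + eps)   # derivative of the smoothed |diff|
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= s                   # each difference pulls on both endpoints
        tv_grad[1:] += s
        u -= step * (2.0 * (u - f) + lam * tv_grad)
    return u
```

On a noisy piecewise-constant signal this suppresses small oscillations while mostly preserving jumps, which is exactly why TV (rather than quadratic) smoothing is used against the streaks from few-projection phases.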

  19. Contributions of metabolic and temporal costs to human gait selection.

    PubMed

    Summerside, Erik M; Kram, Rodger; Ahmed, Alaa A

    2018-06-01

    Humans naturally select several parameters within a gait that correspond with minimizing metabolic cost. Much less is understood about the role of metabolic cost in selecting between gaits. Here, we asked participants to decide between walking or running out and back to different gait specific markers. The distance of the walking marker was adjusted after each decision to identify relative distances where individuals switched gait preferences. We found that neither minimizing solely metabolic energy nor minimizing solely movement time could predict how the group decided between gaits. Of our twenty participants, six behaved in a way that tended towards minimizing metabolic energy, while eight favoured strategies that tended more towards minimizing movement time. The remaining six participants could not be explained by minimizing a single cost. We provide evidence that humans consider not just a single movement cost, but instead a weighted combination of these conflicting costs with their relative contributions varying across participants. Individuals who placed a higher relative value on time ran faster than individuals who placed a higher relative value on metabolic energy. Sensitivity to temporal costs also explained variability in an individual's preferred velocity as a function of increasing running distance. Interestingly, these differences in velocity both within and across participants were absent in walking, possibly due to a steeper metabolic cost of transport curve. We conclude that metabolic cost plays an essential, but not exclusive role in gait decisions. © 2018 The Author(s).

  20. Environmental liability and the onshore oil and gas prospector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, J.A.; Davis, P.

    Environmental liability can be transferred to the oil or gas prospector along with the conveyance of an oil or gas lease. Should soil or groundwater contamination be discovered on a lease, an innocent owner or operator could be liable under state and federal environmental laws for court-ordered remediation costs if potentially responsible parties were unavailable or insolvent. Potential environmental liabilities can be minimized, however, by a preconveyance survey. Existing storage tanks, wells, pipelines, and other anthropogenic features on site should be inspected and photographically documented, as should evidence of previous spills or leaks such as discolored soil and distressed vegetation. Land use and ownership history can be documented from historical maps, aerial photographs, tax records, and even interviews with knowledgeable sources. Contaminated groundwater from offsite sources even miles away may migrate onto potential drill sites. Offsite reconnaissance and a review of the Environmental Protection Agency, state, and local environmental agency lists of contaminated sites in the area of the prospective lease provide information to help the potential lessee evaluate this risk. The cost to research and document potential environmental problems on or in the vicinity of a lease is a fraction of the cost required to develop an oil or gas prospect. Performing a preconveyance environmental survey may be the best way to minimize environmental liability and subsequent costs of cleanup and damages in court-ordered remediation.

  1. Constrained binary classification using ensemble learning: an application to cost-efficient targeted PrEP strategies.

    PubMed

    Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya

    2018-01-30

    Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus (HIV) prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof-of-concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
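The threshold-selection step of such a constrained classifier can be shown in isolation: given risk scores from any (ensemble) scoring algorithm, pick the cutoff that minimizes the rate of positive predictions subject to a minimum sensitivity. This is a brute-force sketch of that single step, not the Super Learner ensemble itself.

```python
import numpy as np

def choose_threshold(scores, labels, min_sens=0.9):
    """Smallest positive-prediction rate achievable at sensitivity >= min_sens.

    Monotonicity in the threshold (as in the article's problem class) means
    scanning the observed score values is sufficient.
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    best_t, best_rate = None, np.inf
    for t in np.unique(scores):
        pred = scores >= t
        sens = pred[labels == 1].mean()   # fraction of true positives flagged
        rate = pred.mean()                # fraction of population flagged
        if sens >= min_sens and rate < best_rate:
            best_t, best_rate = t, rate
    return best_t, best_rate
```

In the PrEP application, `rate` corresponds to the share of the population offered PrEP, so minimizing it at fixed sensitivity directly targets the program's resource constraint.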

  2. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    NASA Astrophysics Data System (ADS)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors that require larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand of well protection under transient conditions. The main goal of this study is to present a novel management framework that dynamically optimizes pumping schemes in order to minimize the impact of transient conditions on WHPA delineation. For optimizing pumping schemes, we consider three objectives: 1) to minimize the risk of pumping water from outside a given WHPA, 2) to maximize the groundwater supply, and 3) to minimize the involved operating costs. We solve transient groundwater flow with an available transient groundwater and Lagrangian particle tracking model. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: the first aims for single-objective optimization under objective (1) only; the second performs multiobjective optimization under all three objectives, with compromise pumping rates selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  3. Minimally invasive mitral valve surgery is associated with equivalent cost and shorter hospital stay when compared with traditional sternotomy.

    PubMed

    Atluri, Pavan; Stetson, Robert L; Hung, George; Gaffey, Ann C; Szeto, Wilson Y; Acker, Michael A; Hargrove, W Clark

    2016-02-01

    Mitral valve surgery is increasingly performed through minimally invasive approaches. There are limited data regarding the cost of minimally invasive mitral valve surgery. Moreover, there are no data on the specific costs associated with mitral valve surgery. We undertook this study to compare the costs (total and subcomponent) of minimally invasive mitral valve surgery relative to traditional sternotomy. All isolated mitral valve repairs performed in our health system from March 2012 through September 2013 were analyzed. To ensure like sets of patients, only those patients who underwent isolated mitral valve repairs with preoperative Society of Thoracic Surgeons scores of less than 4 were included in this study. A total of 159 patients were identified (sternotomy, 68; mini, 91). Total incurred direct cost was obtained from hospital financial records. Analysis demonstrated no difference in total cost (operative and postoperative) of mitral valve repair between mini and sternotomy ($25,515 ± $7598 vs $26,049 ± $11,737; P = .74). Operative costs were higher for the mini cohort, whereas postoperative costs were significantly lower. Postoperative intensive care unit and total hospital stays were both significantly shorter for the mini cohort. There were no differences in postoperative complications or survival between groups. Minimally invasive mitral valve surgery can be performed with overall equivalent cost and shorter hospital stay relative to traditional sternotomy. There is greater operative cost associated with minimally invasive mitral valve surgery that is offset by shorter intensive care unit and hospital stays. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  4. Climate targets and cost-effective climate stabilization pathways

    NASA Astrophysics Data System (ADS)

    Held, H.

    2015-08-01

    Climate economics has developed two main tools to derive an economically adequate response to the climate problem. Cost-benefit analysis weighs all available information on mitigation costs and benefits and thereby derives an "optimal" global mean temperature. In contrast, cost-effectiveness analysis derives the costs of potential policy targets and the corresponding cost-minimizing investment paths. The article highlights pros and cons of both approaches and then focuses on the implications of a policy that strives to limit global warming to 2 °C compared to pre-industrial values. The related mitigation costs and changes in the energy sector are summarized according to the IPCC report of 2014. The article then points to conceptual difficulties when internalizing uncertainty in these types of analyses and suggests pragmatic solutions. Key statements on mitigation economics remain valid under uncertainty when given the adequate interpretation. Furthermore, the expected economic value of perfect climate information is found to be on the order of hundreds of billions of euros per year if a 2 °C policy were requested. Finally, the prospects of climate policy are sketched.

  5. Load Balancing Unstructured Adaptive Grids for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1996-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.

  6. Sociology of the growth/no-growth debate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphrey, C.R.; Buttel, F.H.

    The properties of conservative, liberal, and radical patterns in social science are analyzed and applied to the growth/no-growth debate in environmental policy literature. The fact that conservatives work with an evolutionary model of society suggests that environmental problems are imperfections to be remedied by science, technology, and the free market. Liberals recognize the benefits and costs of growth, and they articulate ways to minimize the costs through state regulation and planning. Radicals argue for state ownership of the means of production and new cultural values about growth as the only effective environmental policies. This analysis closes with a discussion of the future of the growth debate in terms of these paradigms. 40 references.

  7. A Stochastic Total Least Squares Solution of Adaptive Filtering Problem

    PubMed Central

    Ahmad, Noor Atinah

    2014-01-01

    An efficient and computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than the other TLS algorithms and demonstrates better performance than the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
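    For contrast with the TLMS algorithm above, a minimal sketch of the baseline LMS recursion it is compared against, applied to a noise-free system-identification toy problem. All names and parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.3])   # unknown 2-tap system to identify
w = np.zeros(2)                  # adaptive filter weights
mu = 0.05                        # step-size parameter

for _ in range(5000):
    x = rng.standard_normal(2)   # input regressor
    d = true_w @ x               # desired response (noise-free for clarity)
    e = d - w @ x                # a-priori estimation error
    w += mu * e * x              # LMS weight update

print(np.round(w, 3))            # converges toward true_w
```

TLMS differs by accounting for noise on the input as well, which the plain LMS update above does not.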

  8. Comparing cost-effectiveness of X-Stop with minimally invasive decompression in lumbar spinal stenosis: a randomized controlled trial.

    PubMed

    Lønne, Greger; Johnsen, Lars Gunnar; Aas, Eline; Lydersen, Stian; Andresen, Hege; Rønning, Roar; Nygaard, Øystein P

    2015-04-15

    Randomized clinical trial with 2-year follow-up. To compare the cost-effectiveness of X-Stop with minimally invasive decompression in patients with symptomatic lumbar spinal stenosis. Lumbar spinal stenosis is the most common indication for operative treatment in the elderly. Although surgery is more costly than nonoperative treatment, health outcomes beyond 2 years were shown to be significantly better. Surgical treatment with minimally invasive decompression is widely used. X-Stop has been introduced as another minimally invasive technique showing good results compared with nonoperative treatment. We enrolled 96 patients aged 50 to 85 years, with symptoms of neurogenic intermittent claudication within a 250-m walking distance and 1- or 2-level lumbar spinal stenosis, randomized to either minimally invasive decompression or X-Stop. Quality-adjusted life-years were based on the EuroQol EQ-5D. The hospital unit costs were estimated by means of the top-down approach: each cost unit was converted into a monetary value by dividing the overall cost by the number of cost units produced. The analysis of costs and health outcomes is presented by the incremental cost-effectiveness ratio. The study was terminated after a midway interim analysis because of a significantly higher reoperation rate in the X-Stop group (33%). The incremental cost for X-Stop compared with minimally invasive decompression was €2832 (95% confidence interval: 1886-3778), whereas the incremental health gain was 0.11 quality-adjusted life-years (95% confidence interval: -0.01 to 0.23). Based on the incremental cost and effect, the incremental cost-effectiveness ratio was €25,700. The majority of the bootstrap samples fell in the northeast corner of the cost-effectiveness plane, giving a 50% likelihood that X-Stop is cost-effective at the extra cost of €25,700 (incremental cost-effectiveness ratio) per quality-adjusted life-year. The significantly higher cost of X-Stop is mainly due to implant cost and the significantly higher reoperation rate. Level of evidence: 2.

  9. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  10. [A future image of clinical inspection from health economics].

    PubMed

    Kakihara, Hiroaki

    2006-06-01

    Should medical costs be allowed to increase in proportion to the growth rate of GDP? This is the way of thinking of the Council on Economic and Fiscal Policy. Should some services be excluded from public medical insurance? If a treatment is effective, the absolute sum is not the problem. If there were no insurance and individuals paid the total amount themselves, there would be no cost problem, but this is impossible, and economic development would cease without insurance. As medical personnel, we should offer good medical care at an appropriate cost; an appeal to the nation is necessary, as is economic evaluation to identify an inexpensive method for each clinical inspection. Does medical insurance run a deficit? I. The Japanese health insurance system. (1) Health insurance unions: contributions originally total 2,479,800,000,000 yen, with premium income and a profit of 45%. (2) Government-managed health insurance: contributions originally total 2,163,300,000,000 yen, with premium income and a profit of 36%. (1) + (2) Employee insurance total. (3) Mutual aid. (4) National Health Insurance. II. Clinical economic methods. III. The expense of medical care and its effect. A. Expense. B. Methods of medical economic evaluation: 1. Cost-effectiveness analysis (CEA). 2. Cost-utility analysis (CUA). 3. Cost-benefit analysis (CBA). 4. Cost-minimization analysis.

  11. Vaccination and treatment as control interventions in an infectious disease model with their cost optimization

    NASA Astrophysics Data System (ADS)

    Kumar, Anuj; Srivastava, Prashant K.

    2017-03-01

    In this work, an optimal control problem with vaccination and treatment as control policies is proposed and analysed for an SVIR model. We choose vaccination and treatment as control policies because both interventions have practical advantages and are easy to implement, and because they are widely applied to control or curtail a disease. The corresponding total cost incurred is considered as a weighted combination of the costs due to opportunity loss from infected individuals and the costs incurred in providing vaccination and treatment. The existence of optimal control paths for the problem is established. Further, these optimal paths are obtained analytically using Pontryagin's Maximum Principle. We analyse our results numerically to compare three important strategies for the proposed controls: vaccination only; both treatment and vaccination; and treatment only. We note that the first strategy (vaccination only) is less effective as well as expensive, although for a highly effective vaccine, vaccination alone may also work well in comparison with the treatment-only strategy. Among all the strategies, we observe that implementing both treatment and vaccination is the most effective and least expensive; moreover, in this case the infective population remains relatively very low. Thus, we conclude that the combined effect of vaccination and treatment not only minimizes the cost burden due to opportunity loss and the applied control policies but also keeps a tab on the infective population.

  12. From Data to Images:. a Shape Based Approach for Fluorescence Tomography

    NASA Astrophysics Data System (ADS)

    Dorn, O.; Prieto, K. E.

    2012-12-01

    Fluorescence tomography is treated as a shape reconstruction problem for a coupled system of two linear transport equations in 2D. The shape evolution is designed in order to minimize the least squares data misfit cost functional either in the excitation frequency or in the emission frequency. Furthermore, a level set technique is employed for numerically modelling the evolving shapes. Numerical results are presented which demonstrate the performance of this novel technique in the situation of noisy simulated data in 2D.

  13. The high energy astronomy observatories

    NASA Technical Reports Server (NTRS)

    Neighbors, A. K.; Doolittle, R. F.; Halpers, R. E.

    1977-01-01

    The forthcoming NASA project of orbiting High Energy Astronomy Observatories (HEAO's) designed to probe the universe by tracing celestial radiations and particles is outlined. Solutions to engineering problems concerning the HEAO's, which are integrated yet built to function independently, are discussed, including the onboard digital processor, mirror assembly, and the thermal shield. The principle of maximal efficiency with minimal cost and the potential capability of the project to provide explanations of black holes, pulsars, and gamma-ray bursts are also stressed. The first satellite is scheduled for launch in April 1977.

  14. Augmenting white cane reliability using smart glove for visually impaired people.

    PubMed

    Bernieri, Giuseppe; Faramondi, Luca; Pascucci, Federica

    2015-08-01

    The independent mobility problem of visually impaired people has been an active research topic in biomedical engineering: although many smart tools have been proposed, traditional tools (e.g., the white cane) continue to play a prominent role. In this paper a low-cost smart glove is presented: the key idea is to minimize the impact of using it by combining traditional tools with a technological device able to improve the movement performance of visually impaired people.

  15. Experimental and Theoretical Results in Output Trajectory Redesign for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Dewey, J. S.; Leang, K.; Devasia, S.

    1998-01-01

    In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures because the input-state trajectories that achieve tracking of the required output may cause excessive vibrations in the structure. We pose and solve this problem, in the context of linear systems, as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.

  16. Technical standards for micro sensors in surgery and minimally invasive therapy.

    PubMed

    Neuder; Dehm

    2004-04-01

    The development of medical applications is fuelled by steadily growing needs and the requirement to lower overall costs. Micro systems will have an extremely important impact on medical technology in the future. The great challenges for the wider use of micro structures in health applications are biocompatibility and mass production. Small and medium-sized enterprises (SMEs) especially need help to overcome these problems through free access to knowledge, the availability of standards, and contacts to partners.

  17. A heuristic for suffix solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilgory, A.; Gajski, D.D.

    1986-01-01

    The suffix problem has appeared in solutions of recurrence systems for parallel and pipelined machines and, more recently, in the design of gate and silicon compilers. In this paper the authors present two algorithms. The first algorithm generates parallel suffix solutions with minimum cost for a given length, time delay, availability of initial values, and fanout; it generates a minimal solution for any length N and depth range log2 N to N. The second algorithm reduces the size of the solutions generated by the first algorithm.
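    The suffix computations the record refers to can be illustrated with a recursive-doubling schedule: log2 N rounds, each consisting of independent applications of the operator that could run in parallel. This is a generic suffix scan for an associative operator, a sketch only, not the paper's minimum-cost construction:

```python
import operator

def suffix_scan_doubling(xs, op=operator.add):
    """Inclusive suffix scan: out[i] = xs[i] op xs[i+1] op ... op xs[-1].
    Recursive doubling: ceil(log2(n)) rounds; within a round every update
    reads only the previous round's values, so the ops are data-parallel."""
    out = list(xs)
    n, step = len(out), 1
    while step < n:
        prev = out[:]  # snapshot so the whole round reads consistent values
        for i in range(n - step):
            out[i] = op(prev[i], prev[i + step])
        step *= 2
    return out

print(suffix_scan_doubling([1, 2, 3, 4]))  # [10, 9, 7, 4]
```

With n elements this schedule uses O(n log n) operator applications in O(log n) depth; the paper's algorithms trade off exactly this kind of cost against depth and fanout.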

  18. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
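    The flavour of an ant-colony heuristic can be sketched on a toy version of the production subproblem: ants assign jobs to parallel machines guided by pheromone trails, and good assignments reinforce their choices. Everything below (the makespan objective, parameter values, and names) is an illustrative simplification, not the article's algorithm:

```python
import random

def aco_assign(proc_times, n_machines, ants=20, iters=100,
               rho=0.1, Q=1.0, seed=1):
    """Tiny ant-colony sketch: assign jobs to parallel machines to
    minimise the makespan (max machine load)."""
    random.seed(seed)
    n_jobs = len(proc_times)
    tau = [[1.0] * n_machines for _ in range(n_jobs)]  # pheromone trails
    best_cost, best_assign = float("inf"), None
    for _ in range(iters):
        for _ in range(ants):
            loads = [0.0] * n_machines
            assign = []
            for j in range(n_jobs):
                # Desirability: pheromone times a greedy bias toward light machines.
                weights = [tau[j][m] / (1.0 + loads[m]) for m in range(n_machines)]
                m = random.choices(range(n_machines), weights=weights)[0]
                loads[m] += proc_times[j]
                assign.append(m)
            cost = max(loads)  # makespan of this ant's assignment
            if cost < best_cost:
                best_cost, best_assign = cost, assign
            # Evaporate and deposit pheromone proportional to solution quality.
            for j, m in enumerate(assign):
                tau[j][m] = (1 - rho) * tau[j][m] + Q / cost
    return best_cost, best_assign

cost, assign = aco_assign([4, 3, 3, 2, 2], n_machines=2)
print(cost)  # 7.0, the optimal makespan for this instance
```

The integrated problem in the article additionally couples the schedule to vehicle routing; the pheromone-construct-reinforce loop above is the common core.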

  19. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different: while the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape, down to the smallest scale allowed by the design space, can develop in the direction perpendicular to the flow, implying that a regularization is required; a natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite-dimensional approximations for the Hessians are constructed and preconditioners for gradient-based methods are derived from these approximate Hessians.

  20. Reducing air pollutant emissions at airports by controlling aircraft ground operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelinas, C.G.; Fan, H.S.L.

    1979-02-01

    Average-day carbon monoxide, total hydrocarbon, and NO/sub x/ aircraft emissions and fuel use estimates (apportioned to takeoff, taxi, idle, and landing) for departure and arrival at Los Angeles and San Francisco International Airports were compared with emissions level and fuel use estimates for four emission reduction strategies (tow aircraft between runways and gates, shutdown one engine during taxiing, control departure time, and assign runways to minimize taxiing distance). The best strategy, the shutdown of one engine while taxiing, produces substantial emission reductions, cost benefits owing to fuel savings, and no apparent safety problems; aircraft towing reduced emissions significantly, but introduced a number of safety problems.

  1. Economic aspects of agricultural and food biosecurity.

    PubMed

    Hennessy, David A

    2008-03-01

    Concerns about biosecurity in the food system raise a variety of issues about how the system is presently organized, why it might be vulnerable, what we could reasonably do to better secure it, and the costs of doing so. Emphasizing the role of incentives in efficient resource allocation, this article considers economic dimensions of three aspects of the general problem. One is the global problem, or the way biosecurity measures can affect how countries relate to each other and the global consequences that result. Another is how to best manage the immediate aftermath of a realized threat in order to minimize damage. The third is how to seek to prevent realization of the threat. Some policy alternatives are presented.

  2. On the Impact of Local Taxes in a Set Cover Game

    NASA Astrophysics Data System (ADS)

    Escoffier, Bruno; Gourvès, Laurent; Monnot, Jérôme

    Given a collection C of weighted subsets of a ground set E, the set cover problem is to find a minimum-weight subcollection of C which covers all elements of E. We study a strategic game defined upon this classical optimization problem. Every element of E is a player which chooses one set of C in which it appears. Following a public tax function, every player is charged a fraction of the weight of the set that it has selected. Our motivation is to design a tax function with the following features: it can be implemented in a distributed manner, existence of an equilibrium is guaranteed, and the social cost of these equilibria is minimized.
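    The underlying centralized optimization problem (without the game) is commonly approximated by the classic greedy heuristic: repeatedly pick the set with the best weight per newly covered element. A minimal sketch with hypothetical data:

```python
def greedy_set_cover(universe, subsets):
    """Greedy heuristic for weighted set cover.
    subsets: list of (elements, weight) pairs; the instance is assumed
    coverable. Repeatedly choose the set minimising weight per newly
    covered element."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (s for s in subsets if s[0] & uncovered),
            key=lambda s: s[1] / len(s[0] & uncovered),
        )
        chosen.append(best)
        uncovered -= best[0]
    return chosen

sets = [({1, 2, 3}, 3.0), ({3, 4}, 1.0), ({1, 2}, 1.5)]
picked = greedy_set_cover({1, 2, 3, 4}, sets)
print(sum(wt for _, wt in picked))  # 2.5: picks {3,4} then {1,2}
```

This greedy rule achieves the well-known logarithmic approximation guarantee; the record's game-theoretic question is how close distributed, tax-driven equilibria can come to such a centralized solution.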

  3. Restoration ecology: two-sex dynamics and cost minimization.

    PubMed

    Molnár, Ferenc; Caragine, Christina; Caraco, Thomas; Korniss, Gyorgy

    2013-01-01

    We model a spatially detailed, two-sex population dynamics, to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.
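    The simulated-annealing step of the study can be illustrated generically: propose a perturbed configuration, always accept improvements, and accept cost increases with a temperature-dependent probability. A toy 1-D stand-in; the cost function and all parameters are hypothetical, not the paper's restoration model:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.999, steps=5000, seed=0):
    """Generic simulated-annealing minimiser with Metropolis acceptance."""
    random.seed(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        cy = cost(y)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cy < c or random.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling  # geometric cooling schedule
    return best_x, best_c

# Toy stand-in for "cost of a candidate configuration": minimum at x = 2.
f = lambda x: (x - 2.0) ** 2
x_best, c_best = simulated_annealing(f, lambda x: x + random.gauss(0, 0.3), x0=-2.0)
print(round(x_best, 2))
```

In the paper the "configuration" is a spatial distribution of introduced females and males and the cost is the number of individuals; the accept/cool loop is the same.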

  4. Restoration Ecology: Two-Sex Dynamics and Cost Minimization

    PubMed Central

    Molnár, Ferenc; Caragine, Christina; Caraco, Thomas; Korniss, Gyorgy

    2013-01-01

    We model a spatially detailed, two-sex population dynamics, to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model’s equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost. PMID:24204810

  5. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
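    The structure of such a PWLS objective can be illustrated in 1-D: a weighted data-fidelity term plus a quadratic neighbor penalty, minimized by Gauss-Seidel sweeps in which each coordinate update is exact. A small sketch, not the paper's sinogram-domain implementation; names and data are illustrative:

```python
import numpy as np

def pwls_denoise(y, weights, beta=1.0, iters=200):
    """Minimise  sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_i - x_{i+1})^2
    by Gauss-Seidel: each update is the exact minimiser of the quadratic
    cost in x_i with all other entries held fixed."""
    x = np.asarray(y, dtype=float).copy()
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            nb = [x[j] for j in (i - 1, i + 1) if 0 <= j < n]
            x[i] = (weights[i] * y[i] + beta * sum(nb)) / (weights[i] + beta * len(nb))
    return x

y = [1.0, 5.0, 1.2, 0.9, 1.1]   # one noisy spike at index 1
w = [1.0, 0.1, 1.0, 1.0, 1.0]   # low weight = high noise variance, trust it less
print(np.round(pwls_denoise(y, w, beta=0.5), 2))
```

The low weight on the spiky sample lets the quadratic penalty pull it toward its neighbors while the well-measured samples barely move, which is exactly the noise-weighted behavior the sinogram methods exploit.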

  6. Remediation System Design Optimization: Field Demonstration at the Umatilla Army Depot

    NASA Astrophysics Data System (ADS)

    Zheng, C.; Wang, P. P.

    2002-05-01

    Since the early 1980s, many researchers have shown that the simulation-optimization (S/O) approach is superior to the traditional trial-and-error method for designing cost-effective groundwater pump-and-treat systems. However, the application of the S/O approach to real field problems has remained limited. This paper describes the application of a new general simulation-optimization code to optimize an existing pump-and-treat system at the Umatilla Army Depot in Oregon, as part of a field demonstration project supported by the Environmental Security Technology Certification Program (ESTCP). Two optimization formulations were developed to minimize the total capital and operational costs under the current and possibly expanded treatment plant capacities. A third formulation was developed to minimize the total contaminant mass of RDX and TNT remaining in the shallow aquifer by the end of the project duration. For the first two formulations, this study produced an optimal pumping strategy that would achieve the cleanup goal in 4 years with a total cost of 1.66 million US dollars in net present value. For comparison, the existing design in operation was calculated to require 17 years for cleanup with a total cost of 3.83 million US dollars in net present value. Thus, the optimal pumping strategy represents a reduction of 13 years in cleanup time and a reduction of 56.6 percent in the expected total expenditure. For the third formulation, this study identified an optimal dynamic pumping strategy that would reduce the total mass remaining in the shallow aquifer by 89.5 percent compared with that calculated for the existing design. This study shows that, in spite of their intensive computational requirements, global optimization techniques including tabu search and genetic algorithms can be applied successfully to large-scale field problems involving multiple contaminants and complex hydrogeological conditions.

  7. Solving large scale unit dilemma in electricity system by applying commutative law

    NASA Astrophysics Data System (ADS)

    Legino, Supriadi; Arianto, Rakhmat

    2018-03-01

    The conventional system pools resources, with large centralized power plants interconnected as a network, which provides many advantages over isolated systems, including optimized efficiency and reliability. However, such large plants need huge capital, and further problems hinder the construction of big power plants and their associated transmission lines. By applying the commutative law of multiplication, ab = ba for all a, b ∈ ℝ, the problems associated with the conventional system depicted above can be reduced. The idea of having many small power plants, namely "Listrik Kerakyatan" (LK), provides both social and environmental benefits that could be capitalized on under proper assumptions. This study compares the costs and benefits of LK with those of the conventional system, using a simulation method to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyse whether the LK system, as an eco-friendly form of distributed generation, can solve various problems associated with a large-scale conventional system. The simulation results show that LK provides more value if its plants operate less than 11 hours per day as peaker or load-follower plants to improve the load-curve balance of the power system, and indicate that the investment cost of LK plants should be optimized in order to minimize plant investment cost. This study indicates that the benefit of the economies-of-scale principle does not always apply to every condition, particularly if the portion of intangible costs and benefits is relatively high.

  8. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
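    The max-min-Hamming selection idea can be sketched greedily: grow the validation set one item at a time, each time adding the candidate that maximizes the smallest pairwise disagreement between the algorithms' prediction vectors on the chosen items. A hypothetical simplification for illustration; the published VDA procedure may differ:

```python
from itertools import combinations

def select_validation_items(preds, k):
    """preds: dict algo -> list of 0/1 predictions over candidate items.
    Greedily pick k items maximising the minimum pairwise Hamming
    distance between algorithms restricted to the picked items."""
    algos = list(preds)
    n_items = len(next(iter(preds.values())))
    chosen = []
    for _ in range(k):
        def min_pairwise(items):
            return min(
                sum(preds[a][i] != preds[b][i] for i in items)
                for a, b in combinations(algos, 2))
        best = max((i for i in range(n_items) if i not in chosen),
                   key=lambda i: min_pairwise(chosen + [i]))
        chosen.append(best)
    return sorted(chosen)

preds = {"alg1": [0, 0, 1, 1, 0],
         "alg2": [0, 1, 1, 0, 0],
         "alg3": [1, 0, 0, 1, 0]}
print(select_validation_items(preds, 2))
```

Items on which every algorithm agrees (like the last column here) can never separate the methods, so the greedy rule avoids spending validation budget on them.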

  9. One-dimensional Gromov minimal filling problem

    NASA Astrophysics Data System (ADS)

    Ivanov, Alexandr O.; Tuzhilin, Alexey A.

    2012-05-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  10. AESOPS: a randomised controlled trial of the clinical effectiveness and cost-effectiveness of opportunistic screening and stepped care interventions for older hazardous alcohol users in primary care.

    PubMed

    Watson, J M; Crosby, H; Dale, V M; Tober, G; Wu, Q; Lang, J; McGovern, R; Newbury-Birch, D; Parrott, S; Bland, J M; Drummond, C; Godfrey, C; Kaner, E; Coulton, S

    2013-06-01

    There is clear evidence of the detrimental impact of hazardous alcohol consumption on the physical and mental health of the population. Estimates suggest that hazardous alcohol consumption annually accounts for 150,000 hospital admissions and between 15,000 and 22,000 deaths in the UK. In the older population, hazardous alcohol consumption is associated with a wide range of physical, psychological and social problems. There is evidence of an association between increased alcohol consumption and increased risk of coronary heart disease, hypertension and haemorrhagic and ischaemic stroke, increased rates of alcohol-related liver disease and increased risk of a range of cancers. Alcohol is identified as one of the three main risk factors for falls. Excessive alcohol consumption in older age can also contribute to the onset of dementia and other age-related cognitive deficits and is implicated in one-third of all suicides in the older population. To compare the clinical effectiveness and cost-effectiveness of a stepped care intervention against a minimal intervention in the treatment of older hazardous alcohol users in primary care. A multicentre, pragmatic, two-armed randomised controlled trial with an economic evaluation. General practices in primary care in England and Scotland between April 2008 and October 2010. Adults aged ≥ 55 years scoring ≥ 8 on the Alcohol Use Disorders Identification Test (10-item) (AUDIT) were eligible. In total, 529 patients were randomised in the study. The minimal intervention group received a 5-minute brief advice intervention with the practice or research nurse involving feedback of the screening results and discussion regarding the health consequences of continued hazardous alcohol consumption. Those in the stepped care arm initially received a 20-minute session of behavioural change counselling, with referral to step 2 (motivational enhancement therapy) and step 3 (local specialist alcohol services) if indicated. 
Sessions were recorded and rated to ensure treatment fidelity. The primary outcome was average drinks per day (ADD) derived from extended AUDIT-Consumption (3-item) (AUDIT-C) at 12 months. Secondary outcomes were AUDIT-C score at 6 and 12 months; alcohol-related problems assessed using the Drinking Problems Index (DPI) at 6 and 12 months; health-related quality of life assessed using the Short Form Questionnaire-12 items (SF-12) at 6 and 12 months; ADD at 6 months; quality-adjusted life-years (QALYs) (for cost-utility analysis derived from European Quality of Life-5 Dimensions); and health and social care resource use associated with the two groups. Both groups reduced alcohol consumption between baseline and 12 months. The difference between groups in log-transformed ADD at 12 months was very small, at 0.025 [95% confidence interval (CI) -0.060 to 0.119], and not statistically significant. At month 6 the stepped care group had a lower ADD, but again the difference was not statistically significant. At months 6 and 12, the stepped care group had a lower DPI score, but this difference was not statistically significant at the 5% level. The stepped care group had a lower SF-12 mental component score and lower physical component score at month 6 and month 12, but these differences were not statistically significant at the 5% level. The overall average cost per patient, taking into account health and social care resource use, was £488 [standard deviation (SD) £826] in the stepped care group and £482 (SD £826) in the minimal intervention group at month 6. The mean QALY gains were slightly greater in the stepped care group than in the minimal intervention group, with a mean difference of 0.0058 (95% CI -0.0018 to 0.0133), generating an incremental cost-effectiveness ratio (ICER) of £1100 per QALY gained.
At month 12, participants in the stepped care group incurred fewer costs, with a mean difference of -£194 (95% CI -£585 to £198), and had gained 0.0117 more QALYs (95% CI -0.0084 to 0.0318) than the control group. Therefore, from an economic perspective the minimal intervention was dominated by stepped care but, as would be expected given the effectiveness results, the difference was small and not statistically significant. Stepped care does not confer an advantage over minimal intervention in terms of reduction in alcohol consumption at 12 months post intervention when compared with a 5-minute brief (minimal) intervention. This trial is registered as ISRCTN52557360. This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 17, No. 25. See the HTA programme website for further project information.

  11. Vehicle-Routing Optimization for Municipal Solid Waste Collection Using Genetic Algorithm: The Case of Southern Nablus City

    NASA Astrophysics Data System (ADS)

    Assaf, Ramiz; Saleh, Yahya

    2017-09-01

    Municipalities are responsible for solid waste collection for environmental, social and economic purposes. Municipal practices should be effective and efficient, with the objectives of reducing the total incurred costs in the solid waste collection network while concurrently achieving the highest service level. This study aims at finding the best routes for the solid waste collection network in Nablus city, Palestine. More specifically, the study seeks the optimal route that minimizes the total distance travelled by the trucks and hence the resulting costs. The current situation is evaluated and the problem is modelled as a Vehicle Routing Problem (VRP), which is then optimized via a genetic algorithm. Compared to the current situation, the trucks' total travelled distance was reduced by 66%, and the collection time was reduced from 7 hours per truck-trip to 2.3 hours. The findings of this study are useful for all municipal policy makers responsible for solid waste collection.
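    As a brief illustration of the VRP-via-genetic-algorithm approach described above, the sketch below evolves a single-vehicle collection route using a permutation encoding, ordered crossover, and swap mutation. The distance matrix, depot convention, and GA parameters are illustrative assumptions, not values from the Nablus study.

```python
import random

def route_length(route, dist):
    # total distance of a closed tour that starts and ends at the depot (node 0)
    tour = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def ordered_crossover(p1, p2):
    # OX: copy a slice from p1, fill the remaining stops in p2's order
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [g for g in p2 if g not in middle]
    return rest[:a] + middle + rest[a:]

def genetic_route(dist, pop_size=40, generations=100, mut_rate=0.2):
    stops = list(range(1, len(dist)))                    # all nodes except the depot
    pop = [random.sample(stops, len(stops)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        elite = pop[: pop_size // 5]                     # keep the best 20%
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = ordered_crossover(p1, p2)
            if random.random() < mut_rate:               # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    best = min(pop, key=lambda r: route_length(r, dist))
    return best, route_length(best, dist)
```

    Ordered crossover is the usual choice here because, unlike naive one-point crossover, it always produces a valid permutation of the collection stops.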

  12. Multi-Criteria Optimization of the Deployment of a Grid for Rural Electrification Based on a Heuristic Method

    NASA Astrophysics Data System (ADS)

    Ortiz-Matos, L.; Aguila-Tellez, A.; Hincapié-Reyes, R. C.; González-Sanchez, J. W.

    2017-07-01

    To design electrification systems, recent mathematical models solve the problem of locating electrification components, selecting their types, and designing possible distribution microgrids. However, as the number of points to be electrified increases, solving these models requires high computational times, making them impractical. This study proposes a new heuristic method for the electrification of rural areas to address this problem. The heuristic algorithm plans the deployment of rural electrification microgrids by finding routes for the optimal placement of lines and transformers in transmission and distribution microgrids. The challenge is to obtain a deployment with equity in losses, considering the capacity constraints of the devices and the topology of the terrain, at minimal economic cost. An optimal scenario ensures the electrification of all neighbourhoods at minimum investment cost in terms of the distance between electric conductors and the number of transformation devices.

  13. A distributed algorithm for demand-side management: Selling back to the grid.

    PubMed

    Latifi, Milad; Khalili, Azam; Rastegarnia, Amir; Zandi, Sajad; Bazzi, Wael M

    2017-11-01

    Demand side energy consumption scheduling is a well-known issue in the smart grid research area. However, there is a lack of a comprehensive method to manage the demand side and consumer behavior in order to obtain an optimum solution. The method needs to address several aspects, including the scale-free requirement and distributed nature of the problem, consideration of renewable resources, allowing consumers to sell electricity back to the main grid, and adaptivity to a local change in the solution point. In addition, the model should allow compensation to consumers and ensure certain satisfaction levels. To tackle these issues, this paper proposes a novel autonomous demand side management technique which minimizes consumer utility costs and maximizes consumer comfort levels in a fully distributed manner. The technique uses a new logarithmic cost function and allows consumers to sell excess electricity (e.g. from renewable resources) back to the grid in order to reduce their electric utility bill. To develop the proposed scheme, we first formulate the problem as a constrained convex minimization problem. Then, it is converted to an unconstrained version using the segmentation-based penalty method. At each consumer location, we deploy an adaptive diffusion approach to obtain the solution in a distributed fashion. The use of adaptive diffusion makes it possible for consumers to find the optimum energy consumption schedule with a small number of information exchanges. Moreover, the proposed method is able to track drifts resulting from changes in the price parameters and consumer preferences. Simulations and numerical results show that our framework can reduce the total load demand peaks, lower the consumer utility bill, and improve the consumer comfort level.
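    The core reformulation described above, a constrained convex minimization converted to an unconstrained one via a penalty, can be sketched on a toy single-consumer schedule. The logarithmic comfort term, prices, and quadratic penalty below are illustrative stand-ins for the paper's cost function and segmentation-based penalty; no selling-back or diffusion step is modeled.

```python
def schedule(prices, total, w=1.0, rho=25.0, lr=0.002, steps=30000):
    """Toy schedule: minimize sum_t (prices[t]*x[t] - w*log(1 + x[t]))
    subject to sum_t x[t] = total and x[t] >= 0, handled with a quadratic
    penalty rho*(sum_t x[t] - total)**2 and projected gradient descent."""
    T = len(prices)
    x = [total / T] * T                        # start from a uniform schedule
    for _ in range(steps):
        gap = sum(x) - total                   # constraint violation this sweep
        for t in range(T):
            grad = prices[t] - w / (1.0 + x[t]) + 2.0 * rho * gap
            x[t] = max(0.0, x[t] - lr * grad)  # project onto x[t] >= 0
    return x
```

    With a cheap hour and an expensive hour, the scheduler shifts essentially all consumption into the cheap hour while keeping the total close to the constraint (the penalty leaves a small residual, as penalty methods do).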

  14. New Polyazine-Bridged RuII,RhIII and RuII,RhI Supramolecular Photocatalysts for Water Reduction to Hydrogen Applicable for Solar Energy Conversion and Mechanistic Investigation of the Photocatalytic Cycle

    NASA Astrophysics Data System (ADS)

    Zhou, Rongwei

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  15. Texture mapping via optimal mass transport.

    PubMed

    Dominitz, Ayelet; Tannenbaum, Allen

    2010-01-01

    In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the "earth-mover's metric"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.

  16. A knowledge-based approach to configuration layout, justification, and documentation

    NASA Technical Reports Server (NTRS)

    Craig, F. G.; Cutts, D. E.; Fennel, T. R.; Case, C.; Palmer, J. R.

    1990-01-01

    The design, development, and implementation of a prototype expert system which could aid designers and system engineers in the placement of racks aboard modules on Space Station Freedom are described. This type of problem is relevant to any program with multiple constraints and requirements demanding solutions which minimize usage of limited resources. This process is generally performed by a single, highly experienced engineer who integrates all the diverse mission requirements and limitations, and develops an overall technical solution which meets program and system requirements with minimal cost, weight, volume, power, etc. This system architect performs an intellectual integration process in which the underlying design rationale is often not fully documented. This is a situation which lends itself to an expert system solution for enhanced consistency, thoroughness, documentation, and change assessment capabilities.

  17. A Knowledge-Based Approach to Configuration Layout, Justification, and Documentation

    NASA Technical Reports Server (NTRS)

    Craig, F. G.; Cutts, D. E.; Fennel, T. R.; Case, C. M.; Palmer, J. R.

    1991-01-01

    The design, development, and implementation of a prototype expert system which could aid designers and system engineers in the placement of racks aboard modules on the Space Station Freedom are described. This type of problem is relevant to any program with multiple constraints and requirements demanding solutions which minimize usage of limited resources. This process is generally performed by a single, highly experienced engineer who integrates all the diverse mission requirements and limitations, and develops an overall technical solution which meets program and system requirements with minimal cost, weight, volume, power, etc. This system architect performs an intellectual integration process in which the underlying design rationale is often not fully documented. This is a situation which lends itself to an expert system solution for enhanced consistency, thoroughness, documentation, and change assessment capabilities.

  18. A Space-Time Network-Based Modeling Framework for Dynamic Unmanned Aerial Vehicle Routing in Traffic Incident Monitoring Applications

    PubMed Central

    Zhang, Jisheng; Jia, Limin; Niu, Shuyun; Zhang, Fan; Tong, Lu; Zhou, Xuesong

    2015-01-01

    It is essential for transportation management centers to equip and manage a network of fixed and mobile sensors in order to quickly detect traffic incidents and further monitor the related impact areas, especially for high-impact accidents with dramatic traffic congestion propagation. As emerging small Unmanned Aerial Vehicles (UAVs) start to operate under a more flexible regulatory environment, it is critically important to fully explore the potential of using UAVs for monitoring recurring and non-recurring traffic conditions and special events on transportation networks. This paper presents a space-time network-based modeling framework for integrated fixed and mobile sensor networks, in order to provide a rapid and systematic road traffic monitoring mechanism. By constructing a discretized space-time network to characterize not only the speed for UAVs but also the time-sensitive impact areas of traffic congestion, we formulate the problem as a linear integer programming model to minimize the detection delay cost and operational cost, subject to feasible flying route constraints. A Lagrangian relaxation solution framework is developed to decompose the original complex problem into a series of computationally efficient time-dependent and least cost path finding sub-problems. Several examples are used to demonstrate the results of the proposed models in UAVs’ route planning for small and medium-scale networks. PMID:26076404
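    The time-dependent least-cost path sub-problems that arise in the Lagrangian decomposition above can be solved by label-setting search on the discretized space-time network. The sketch below is a minimal version under stated assumptions: states are (node, time) pairs, arcs and costs are supplied by the caller (waiting arcs included), and no UAV dynamics or incident-coverage rewards are modeled.

```python
import heapq

def time_expanded_shortest_path(arcs, source, target, horizon):
    """Least-cost path on a space-time network.
    arcs: dict mapping a state (node, t) to a list of ((next_node, t2), cost).
    Returns (cost, path of (node, time) states), or (inf, []) if unreachable."""
    start = (source, 0)
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, state = heapq.heappop(pq)
        if d > dist.get(state, float("inf")):
            continue                     # stale heap entry
        node, t = state
        if node == target:               # first settled copy of target is optimal
            path = [state]
            while state in prev:
                state = prev[state]
                path.append(state)
            return d, path[::-1]
        for nxt, c in arcs.get(state, []):
            if nxt[1] <= horizon and d + c < dist.get(nxt, float("inf")):
                dist[nxt] = d + c
                prev[nxt] = state
                heapq.heappush(pq, (d + c, nxt))
    return float("inf"), []
```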

  19. Graph edit distance from spectral seriation.

    PubMed

    Robles-Kelly, Antonio; Hancock, Edwin R

    2005-03-01

    This paper is concerned with computing graph edit distance. One of the criticisms that can be leveled at existing methods for computing graph edit distance is that they lack some of the formality and rigor of the computation of string edit distance. Hence, our aim is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph-matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations which minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrix and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.

  20. Column generation algorithms for virtual network embedding in flexi-grid optical networks.

    PubMed

    Lin, Rongping; Luo, Shan; Zhou, Jingwei; Wang, Sheng; Chen, Bin; Zhang, Xiaoning; Cai, Anliang; Zhong, Wen-De; Zukerman, Moshe

    2018-04-16

    Network virtualization provides means for efficient management of network resources by embedding multiple virtual networks (VNs) to share efficiently the same substrate network. Such virtual network embedding (VNE) gives rise to a challenging problem of how to optimize resource allocation to VNs and to guarantee their performance requirements. In this paper, we provide VNE algorithms for efficient management of flexi-grid optical networks. We provide an exact algorithm aiming to minimize the total embedding cost in terms of spectrum cost and computation cost for a single VN request. Then, to achieve scalability, we also develop a heuristic algorithm for the same problem. We apply these two algorithms to a dynamic traffic scenario where many VN requests arrive one by one. We first demonstrate by simulations for the case of a six-node network that the heuristic algorithm obtains blocking probabilities very close to those of the exact algorithm (about 0.2% higher). Then, for a network of realistic size (namely, USnet) we demonstrate that the blocking probability of our new heuristic algorithm is about one order of magnitude lower than that of a simpler heuristic algorithm, which was a component of an earlier published algorithm.

  1. An Extended EPQ-Based Problem with a Discontinuous Delivery Policy, Scrap Rate, and Random Breakdown

    PubMed Central

    Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P.

    2015-01-01

    In real supply chain environments, the discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work using an economic production quantity- (EPQ-) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown by incorporating a multiple delivery policy into the model to replace the continuous policy and investigates the effect on the optimal run time decision for this specific EPQ model. Next, we further expand the scope of the problem to combine the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs comprising costs incurred in production units, transportation, and retail stores are derived, for both models. Numerical examples are provided to demonstrate the applicability of our research results. PMID:25821853

  2. An extended EPQ-based problem with a discontinuous delivery policy, scrap rate, and random breakdown.

    PubMed

    Chiu, Singa Wang; Lin, Hong-Dar; Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P

    2015-01-01

    In real supply chain environments, the discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work using an economic production quantity- (EPQ-) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown by incorporating a multiple delivery policy into the model to replace the continuous policy and investigates the effect on the optimal run time decision for this specific EPQ model. Next, we further expand the scope of the problem to combine the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs comprising costs incurred in production units, transportation, and retail stores are derived, for both models. Numerical examples are provided to demonstrate the applicability of our research results.
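    For context, both records above extend the classic economic production quantity model. The textbook EPQ lot size (without the papers' scrap, breakdown, multidelivery, and retail stock-holding extensions) is Q* = sqrt(2KD / (h(1 - D/P))); the sketch below uses illustrative parameter names and is not the papers' extended model.

```python
import math

def epq(demand, production_rate, setup_cost, holding_cost):
    """Classic EPQ lot size: Q* = sqrt(2*K*D / (h*(1 - D/P))),
    valid only when the production rate P exceeds the demand rate D."""
    if production_rate <= demand:
        raise ValueError("production rate must exceed demand rate")
    return math.sqrt(2.0 * setup_cost * demand /
                     (holding_cost * (1.0 - demand / production_rate)))
```

    As P grows much larger than D the factor (1 - D/P) approaches 1 and the formula reduces to the familiar EOQ lot size.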

  3. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    PubMed

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  4. Finding the optimal lengths for three branches at a junction.

    PubMed

    Woldenberg, M J; Horsfield, K

    1983-09-21

    This paper presents an exact analytical solution to the problem of locating the junction point between three branches so that the sum of the total costs of the branches is minimized. When the cost per unit length of each branch is known the angles between each pair of branches can be deduced following reasoning first introduced to biology by Murray. Assuming the outer ends of each branch are fixed, the location of the junction and the length of each branch are then deduced using plane geometry and trigonometry. The model has applications in determining the optimal cost of a branch or branches at a junction. Comparing the optimal to the actual cost of a junction is a new way to compare cost models for goodness of fit to actual junction geometry. It is an unambiguous measure and is superior to comparing observed and optimal angles between each daughter and the parent branch. We present data for 199 junctions in the pulmonary arteries of two human lungs. For the branches at each junction we calculated the best fitting value of x from the relationship flow ∝ (radius)^x. We found that the value of x determined whether a junction was best fitted by a surface, volume, drag or power minimization model. While economy of explanation casts doubt that four models operate simultaneously, we found that optimality may still operate, since the angle to the major daughter is less than the angle to the minor daughter. Perhaps optimality combined with a space filling branching pattern governs the branching geometry of the pulmonary artery.
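    The junction location minimizing the weighted branch cost sum_i w_i * L_i (with w_i the cost per unit length and L_i the branch length) is a weighted Fermat point. The paper solves it analytically with plane geometry and trigonometry; as a numerical cross-check under the same assumption of fixed branch endpoints in the plane, one can use Weiszfeld iteration:

```python
def weighted_fermat_point(points, weights, iters=200):
    """Weiszfeld iteration: locate the junction J minimizing
    sum_i weights[i] * |J - points[i]| for fixed branch endpoints."""
    x = sum(p[0] for p in points) / len(points)   # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < 1e-12:                          # landed on an endpoint: done
                return x, y
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return x, y
```

    For equal weights and an equilateral triangle of endpoints the junction sits at the centroid, where the three branches meet at 120 degrees, matching Murray-style angle conditions.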

  5. Space station preliminary design report

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The results of a 3-month preliminary design and analysis effort are presented. The configuration that emerged consists of a very stiff deployable truss structure with an overall triangular cross section having universal modules attached at the apexes. Sufficient analysis was performed to show feasibility of the configuration. An evaluation of the structure shows that desirable attributes of the configuration are: (1) the solar cells, radiators, and antennas will be mounted to stiff structure to minimize control problems during orbit maintenance and correction, docking, and attitude control; (2) large flat areas are available for mounting and servicing of equipment; (3) large mass items can be mounted near the center of gravity of the system to minimize gravity gradient torques; (4) the trusses are lightweight structures and can be transported into orbit in one Shuttle flight; (5) the trusses are expandable and will require a minimum of EVA; and (6) the modules are anticipated to be structurally identical except for internal equipment to minimize cost.

  6. New mathematical modeling for a location-routing-inventory problem in a multi-period closed-loop supply chain in a car industry

    NASA Astrophysics Data System (ADS)

    Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.

    2017-11-01

    This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, and recovery and recycling centers. In this supply chain, the centers span multiple levels, a price increase factor is applied to operational costs at the centers, inventory and shortages (including lost sales and backlogs) are allowed at production centers, and the arrival and departure times of each plant's vehicles at its dedicated distribution centers are considered, such that the sum of system costs and the sum of the maximum times at each level are minimized. The aforementioned problem is formulated in the form of a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely, non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large-sized instances. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely, the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.

  7. Gene Architectures that Minimize Cost of Gene Expression.

    PubMed

    Frumkin, Idan; Schirman, Dvir; Rotman, Aviv; Li, Fangfei; Zahavi, Liron; Mordret, Ernest; Asraf, Omer; Wu, Song; Levy, Sasha F; Pilpel, Yitzhak

    2017-01-05

    Gene expression burdens cells by consuming resources and energy. While numerous studies have investigated regulation of expression level, little is known about gene design elements that govern expression costs. Here, we ask how cells minimize production costs while maintaining a given protein expression level and whether there are gene architectures that optimize this process. We measured fitness of ∼14,000 E. coli strains, each expressing a reporter gene with a unique 5' architecture. By comparing cost-effective and ineffective architectures, we found that cost per protein molecule could be minimized by lowering transcription levels, regulating translation speeds, and utilizing amino acids that are cheap to synthesize and that are less hydrophobic. We then examined natural E. coli genes and found that highly expressed genes have evolved more forcefully to minimize costs associated with their expression. Our study thus elucidates gene design elements that improve the economy of protein expression in natural and heterologous systems. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Array distribution in data-parallel programs

    NASA Technical Reports Server (NTRS)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Sheffler, Thomas J.

    1994-01-01

    We consider distribution at compile time of the array data in a distributed-memory implementation of a data-parallel program written in a language like Fortran 90. We allow dynamic redistribution of data and define a heuristic algorithmic framework that chooses distribution parameters to minimize an estimate of program completion time. We represent the program as an alignment-distribution graph. We propose a divide-and-conquer algorithm for distribution that initially assigns a common distribution to each node of the graph and successively refines this assignment, taking computation, realignment, and redistribution costs into account. We explain how to estimate the effect of distribution on computation cost and how to choose a candidate set of distributions. We present the results of an implementation of our algorithms on several test problems.

  9. A methodology for commonality analysis, with applications to selected space station systems

    NASA Technical Reports Server (NTRS)

    Thomas, Lawrence Dale

    1989-01-01

    The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.

  10. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).

  11. Traffic-engineering-aware shortest-path routing and its application in IP-over-WDM networks [Invited

    NASA Astrophysics Data System (ADS)

    Lee, Youngseok; Mukherjee, Biswanath

    2004-03-01

    Single shortest-path routing is known to perform poorly for Internet traffic engineering (TE) where the typical optimization objective is to minimize the maximum link load. Splitting traffic uniformly over equal-cost multiple shortest paths in open shortest path first and intermediate system-intermediate system protocols does not always minimize the maximum link load when multiple paths are not carefully selected for the global traffic demand matrix. However, a TE-aware shortest path among all the equal-cost multiple shortest paths between each ingress-egress pair can be selected such that the maximum link load is significantly reduced. IP routers can use the globally optimal TE-aware shortest path without any change to existing routing protocols and without any serious configuration overhead. While calculating TE-aware shortest paths, the destination-based forwarding constraint at a node should be satisfied, because an IP router will forward a packet to the next hop toward the destination by looking up the destination prefix. We present a mathematical problem formulation for finding a set of TE-aware shortest paths for the given network as an integer linear program, and we propose a simple heuristic for solving large instances of the problem. Then we explore the usage of our proposed algorithm for the integrated TE method in IP-over-WDM networks. The proposed algorithm is evaluated through simulations in IP networks as well as in IP-over-WDM networks.
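    A minimal sketch of the TE-aware idea: among precomputed equal-cost candidate paths for each demand, greedily pick the one that keeps the maximum link load smallest. This greedy stand-in is an illustrative assumption only; the paper instead formulates the selection as an integer linear program with a destination-based forwarding constraint, neither of which is modeled here.

```python
def te_aware_assignment(demands):
    """demands: list of (volume, candidate_paths); each path is a list of link ids,
    and all candidates for a demand are assumed to be equal-cost shortest paths.
    Greedily routes each demand on the candidate minimizing the max link load."""
    load = {}
    choice = []
    for volume, paths in demands:
        def loaded(path):
            # max load on this path's links if the demand were routed over it
            return max(load.get(link, 0.0) + volume for link in path)
        best = min(paths, key=loaded)
        for link in best:
            load[link] = load.get(link, 0.0) + volume
        choice.append(best)
    return choice, max(load.values())
```

    With two unit demands that each have two disjoint equal-cost paths, the greedy rule spreads them apart, halving the maximum link load relative to stacking both on one path.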

  12. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors, each with only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization techniques and a screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the implementation of the recommended technique in planning 2 × 2 factorial designs.
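    The "existing method" quoted above can be written down directly: group sizes proportional to sd_i / sqrt(cost_i), scaled to exhaust a sampling budget. The numbers below are invented; the paper's point is that this rule of thumb is generally not optimal once Welch-Satterthwaite power is evaluated exactly, which is what its screening search corrects.

```python
import math

def rule_of_thumb_allocation(sds, unit_costs, budget):
    """Heuristic allocation the paper improves on (not its optimal search):
    n_i proportional to sd_i / sqrt(c_i), scaled so sum(c_i * n_i) = budget."""
    weights = [s / math.sqrt(c) for s, c in zip(sds, unit_costs)]
    spend_per_unit = sum(c * w for c, w in zip(unit_costs, weights))
    scale = budget / spend_per_unit
    return [w * scale for w in weights]

# invented example: group 1 is noisier, group 2 is 4x costlier to sample
sizes = rule_of_thumb_allocation(sds=[4.0, 2.0], unit_costs=[1.0, 4.0],
                                 budget=100.0)
```

    The resulting ratio n1/n2 = (4/2) / sqrt(1/4) = 4 matches the quoted rule, and the allocation spends exactly the budget; in practice the sizes would still need rounding to integers and a power check.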

  13. Using return on investment to maximize conservation effectiveness in Argentine grasslands.

    PubMed

    Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James

    2010-12-07

    The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land ("minimize cost"), maximizing conservation benefit regardless of cost ("maximize benefit"), and maximizing conservation benefit per dollar ("return on investment"). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy.
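    The three strategies can be contrasted with a toy greedy selection over invented parcels of (name, benefit, cost); a real application would need the paper's benefit function with conversion risk, spatial clumping, and diminishing returns.

```python
def greedy_select(parcels, budget, rank):
    """Add parcels in rank order while they still fit the budget."""
    total_benefit, spent = 0.0, 0.0
    for name, benefit, cost in sorted(parcels, key=rank):
        if spent + cost <= budget:
            spent += cost
            total_benefit += benefit
    return total_benefit

parcels = [("junk", 0.5, 1.0),      # cheap but nearly worthless
           ("good-roi", 8.0, 4.0),
           ("flagship", 9.0, 6.0),  # big benefit, poor benefit per dollar
           ("filler", 3.0, 2.0)]
budget = 6.0

minimize_cost = greedy_select(parcels, budget, rank=lambda p: p[2])
maximize_benefit = greedy_select(parcels, budget, rank=lambda p: -p[1])
return_on_investment = greedy_select(parcels, budget, rank=lambda p: -p[1] / p[2])
```

    In this instance return on investment (11.0) beats maximize-benefit (9.0, which blows the budget on the flagship parcel) and minimize-cost (3.5, which accumulates cheap, low-value land). Which of the latter two comes second depends on the instance; the abstract reports budgets where minimize-cost outperformed maximize-benefit.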

  14. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger, and the availability of and need to process SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. It is therefore important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid version of a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a serial Gauss-Seidel-type scheme. Our algorithm optimizes the same cost function but, unlike the original work, belongs to the parallel Jacobi class with alternating minimizations. This strategy is known as the chessboard scheme: red pixels are mutually independent and can be updated in parallel within the same iteration, and black pixels likewise in the alternating iteration. We present parallel implementations of our algorithm for different architectures: multicore CPU, Xeon Phi coprocessor, and Nvidia graphics processing unit. In all cases our parallel algorithm outperforms the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
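    The chessboard (red-black) schedule itself is generic and easy to sketch: cells of one color share no neighbors, so each half-sweep could be executed fully in parallel. The toy below relaxes Laplace's equation on a small grid rather than the authors' residual-map cost function, which it does not reproduce.

```python
def red_black_sweep(phi, color):
    """Update all interior cells of one chessboard color in place.
    Same-color cells share no neighbors, so on a GPU or multicore CPU
    every cell of this color could be updated concurrently."""
    n, m = len(phi), len(phi[0])
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            if (i + j) % 2 == color:
                phi[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                                    + phi[i][j - 1] + phi[i][j + 1])

# toy problem: fixed boundary (top row = 1), relax the interior
phi = [[1.0 if i == 0 else 0.0 for j in range(5)] for i in range(5)]
for sweep in range(200):
    red_black_sweep(phi, 0)   # "red" half-sweep
    red_black_sweep(phi, 1)   # "black" half-sweep
```

    After the sweeps the interior holds a discrete harmonic function: values lie strictly between the boundary extremes and inherit the left-right symmetry of the boundary data.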

  15. Robustness of mission plans for unmanned aircraft

    NASA Astrophysics Data System (ADS)

    Niendorf, Moritz

    This thesis studies the robustness of optimal mission plans for unmanned aircraft. Mission planning typically involves tactical planning and path planning. Tactical planning refers to task scheduling and, in multi-aircraft scenarios, also includes establishing a communication topology. Path planning refers to computing a feasible and collision-free trajectory. For a prototypical mission planning problem, the traveling salesman problem on a weighted graph, the robustness of an optimal tour is analyzed with respect to changes to the edge costs. Specifically, the stability region of an optimal tour is obtained, i.e., the set of all edge cost perturbations for which that tour is optimal. The exact stability region of solutions to variants of the traveling salesman problem is obtained from a linear programming relaxation of an auxiliary problem. Edge cost tolerances and edge criticalities are derived from the stability region. For Euclidean traveling salesman problems, robustness with respect to perturbations of vertex locations is considered, and safe radii and vertex criticalities are introduced. For weighted-sum multi-objective problems, stability regions with respect to changes in the objectives, the weights, and simultaneous changes are given. Most critical weight perturbations are derived. Computing exact stability regions is intractable for large instances, so tractable approximations are desirable. The stability regions of solutions to relaxations of the traveling salesman problem yield under-approximations, and sets of tours yield over-approximations. The application of these results to the two-neighborhood and the minimum 1-tree relaxation is discussed. Bounds on edge cost tolerances and approximate criticalities are obtainable likewise. A minimum spanning tree is an optimal communication topology for minimizing the cumulative transmission power in multi-aircraft missions. 
The stability region of a minimum spanning tree is given and tolerances, stability balls, and criticalities are derived. This analysis is extended to Euclidean minimum spanning trees. This thesis aims at enabling increased mission performance by providing means of assessing the robustness and optimality of a mission and methods for identifying critical elements. Examples of the application to mission planning in contested environments, cargo aircraft mission planning, multi-objective mission planning, and planning optimal communication topologies for teams of unmanned aircraft are given.
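    For the minimum spanning tree case, the edge cost tolerance has a concrete form: a tree edge stays in some minimum spanning tree until its weight exceeds that of the cheapest non-tree edge crossing the cut it defines. The sketch below computes these tolerances on an invented graph; edges with small tolerance are the "critical" ones in the sense used above.

```python
def kruskal(n, edges):
    """Minimum spanning tree by Kruskal. edges: list of (w, u, v)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def edge_tolerances(n, edges, mst):
    """For each MST edge: how much its cost may grow before the tree is no
    longer minimal, i.e. cheapest replacement edge across the cut minus its
    own cost (infinite for a bridge)."""
    tol = {}
    for w, u, v in mst:
        adj = {}                       # tree with (u, v) removed
        for w2, a, b in mst:
            if (w2, a, b) != (w, u, v):
                adj.setdefault(a, []).append(b)
                adj.setdefault(b, []).append(a)
        comp, stack = {u}, [u]         # component containing u
        while stack:
            x = stack.pop()
            for y in adj.get(x, []):
                if y not in comp:
                    comp.add(y)
                    stack.append(y)
        best = min((w2 for w2, a, b in edges
                    if (w2, a, b) != (w, u, v) and (a in comp) != (b in comp)),
                   default=float("inf"))
        tol[(u, v)] = best - w
    return tol

# invented 4-node graph: path 0-1-2-3 is the MST, edges (0,3) and (0,2) are spares
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]
mst = kruskal(4, edges)
tol = edge_tolerances(4, edges, mst)
```

    Here edge (2, 3) tolerates only a cost increase of 1 before the replacement edge (0, 3) takes over, making it the most critical link of this topology.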

  16. Optimal ordering quantities for substitutable deteriorating items under joint replenishment with cost of substitution

    NASA Astrophysics Data System (ADS)

    Mishra, Vinod Kumar

    2017-09-01

    In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this model the inventory level of both items is depleted by demand and deterioration, and when an item is out of stock its demand is partially fulfilled by the other item, with all unsatisfied demand lost. Each substituted unit incurs a cost of substitution, and demand and deterioration are taken to be deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of the different parameters on the model. The key observation from the numerical analysis is that the optimal total cost improves substantially with substitution compared to without substitution.
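    The solution procedure amounts to searching the two order quantities jointly. The sketch below uses an invented EOQ-style stand-in for the total cost (ordering plus holding only); the paper's actual cost additionally carries deterioration and substitution terms, but the search skeleton is the same.

```python
def toy_total_cost(q1, q2, d1=30.0, d2=20.0, k=100.0, h1=0.6, h2=0.4):
    """Invented stand-in: per-item ordering + holding cost only.
    The paper's model adds deterioration and substitution-cost terms."""
    return k * d1 / q1 + h1 * q1 / 2 + k * d2 / q2 + h2 * q2 / 2

def best_quantities(cost, grid):
    """Joint grid search for the cost-minimizing pair (q1, q2)."""
    return min((cost(q1, q2), q1, q2) for q1 in grid for q2 in grid)

c, q1, q2 = best_quantities(toy_total_cost, range(10, 201, 10))
```

    The invented parameters were chosen so the classical EOQ optimum sqrt(2*k*d_i/h_i) = 100 for both items lies exactly on the grid, which the search recovers; with substitution terms the cost surface is coupled in q1 and q2 and the same joint search (or the paper's analytical procedure) still applies.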

  17. Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver

    2015-01-01

    Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. Very few open-access tools are available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and thereby select the optimal design. This paper presents a software tool that can generate multiple DSM architectures based on pre-defined design variable ranges and size those architectures in terms of predefined science and cost metrics. The tool will help a user select Pareto-optimal DSM designs based on design-of-experiments techniques. It will be applied to some earth observation examples to demonstrate its applicability in making key decisions between different performance and cost metrics early in the design lifecycle.

  18. Cost optimization of reinforced concrete cantilever retaining walls under seismic loading using a biogeography-based optimization algorithm with Levy flights

    NASA Astrophysics Data System (ADS)

    Aydogdu, Ibrahim

    2017-03-01

    In this article, a new version of a biogeography-based optimization algorithm with Levy flight distribution (LFBBO) is introduced and used for the optimum design of reinforced concrete cantilever retaining walls under seismic loading. The cost of the wall is taken as the objective function, which is minimized under the constraints imposed by the American Concrete Institute (ACI 318-05) design code and geometric limitations. The influence of peak ground acceleration (PGA) on optimal cost is also investigated. The solution of the problem is attained by the LFBBO algorithm, which is developed by adding Levy flight distribution to the mutation part of the biogeography-based optimization (BBO) algorithm. Five design examples, two of which are taken from the literature, are optimized in the study. The results are compared to test the performance of the LFBBO and BBO algorithms and to determine the influence of the seismic load and PGA on the optimal cost of the wall.

  19. Robust optimization for nonlinear time-delay dynamical system of dha regulon with cost sensitivity constraint in batch culture

    NASA Astrophysics Data System (ADS)

    Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong

    2016-09-01

    Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha regulon with unknown time-delays in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time-delays are decision variables and the cost function is to minimize the biological robustness. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements and a cost sensitivity constraint for ensuring that an acceptable level of the system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high. Thus, a parallel algorithm is proposed to solve these nonlinear programming sub-problems based on the filled function method. 
Finally, it is observed that the obtained optimal estimates for the time-delays are highly satisfactory via numerical simulations.

  20. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure thus leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.

  1. Reliable Adaptive Data Aggregation Route Strategy for a Trade-off between Energy and Lifetime in WSNs

    PubMed Central

    Guo, Wenzhong; Hong, Wei; Zhang, Bin; Chen, Yuzhong; Xiong, Naixue

    2014-01-01

    Mobile security is one of the most fundamental problems in Wireless Sensor Networks (WSNs). The data transmission path can be compromised when nodes are disabled. To construct a secure and reliable network, it is important to design an adaptive route strategy that balances the energy consumption and network lifetime of aggregation. In this paper, we address the reliable data aggregation route problem for WSNs. First, to ensure nodes work properly, we propose a data aggregation route algorithm that improves energy efficiency in the WSN; its construction process, achieved through discrete particle swarm optimization (DPSO), saves node energy. Then, to balance the network load and establish a reliable network, an adaptive route algorithm with minimal energy and maximum lifetime is proposed. Since this is a non-linear constrained multi-objective optimization problem, we propose a DPSO whose multi-objective fitness function combines a phenotype sharing function and a penalty function to find available routes. Experimental results show that, compared with other tree routing algorithms, our algorithm effectively reduces energy consumption and trades off energy consumption against network lifetime. PMID:25215944

  2. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory study of a potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and a large volume of data. Quantum computers open a very different approach, both in conceptual programming and in hardware architecture, to solving optimization problems. To explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case as a quadratic programming optimization problem and find a transformation that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) is applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem will be tested by solving QUBO instances defined on Chimera graphs of the quantum computer.
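    A QUBO instance asks for the binary vector minimizing x^T Q x. For intuition, here is a brute-force classical solver on a tiny invented instance; real assimilation problems are far beyond enumeration, which is precisely the motivation for annealing hardware.

```python
from itertools import product

def solve_qubo(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.
    Q: dict {(i, j): coefficient}; linear terms sit on the diagonal.
    Feasible only for small n -- quantum annealers target large instances."""
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(c * x[i] * x[j] for (i, j), c in Q.items())
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# invented 2-variable instance: each variable is individually rewarded,
# but the coupling penalizes switching both on
Q = {(0, 0): -1.0, (1, 1): -2.0, (0, 1): 3.0}
best_x, energy = solve_qubo(Q)
```

    The minimum selects x1 alone (energy -2), since activating both variables incurs the +3 coupling penalty. Mapping a real cost function into such diagonal and coupling coefficients is the transformation step the abstract describes.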

  3. A Reserve-based Method for Mitigating the Impact of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Krad, Ibrahim

    The fundamental operating paradigm of today's power systems is undergoing a significant shift. This is partially motivated by the increased desire for incorporating variable renewable energy resources into generation portfolios. While these generating technologies offer clean energy at zero marginal cost, i.e. no fuel costs, they also offer unique operating challenges for system operators. Perhaps the biggest operating challenge these resources introduce is accommodating their intermittent fuel source availability. For this reason, these generators increase the system-wide variability and uncertainty. As a result, system operators are revisiting traditional operating strategies to more efficiently incorporate these generation resources to maximize the benefit they provide while minimizing the challenges they introduce. One way system operators have accounted for system variability and uncertainty is through the use of operating reserves. Operating reserves can be simplified as excess capacity kept online during real time operations to help accommodate unforeseen fluctuations in demand. With new generation resources, a new class of operating reserves has emerged that is generally known as flexibility, or ramping, reserves. This new reserve class is meant to better position systems to mitigate severe ramping in the net load profile. The best way to define this new requirement is still under investigation. Typical requirement definitions focus on the additional uncertainty introduced by variable generation and there is room for improvement regarding explicit consideration for the variability they introduce. An exogenous reserve modification method is introduced in this report that can improve system reliability with minimal impacts on total system wide production costs. Another potential solution to this problem is to formulate the problem as a stochastic programming problem. 
The unit commitment and economic dispatch problems are typically formulated as deterministic problems due to fast solution times and the solutions being sufficient for operations. Improvements in technical computing hardware have reignited interest in stochastic modeling. The variability of wind and solar naturally lends itself to stochastic modeling. The use of explicit reserve requirements in stochastic models is an area of interest for power system researchers. This report introduces a new reserve modification implementation based on previous results to be used in a stochastic modeling framework. With technological improvements in distributed generation technologies, microgrids are currently being researched and implemented. Microgrids are small power systems that have the ability to serve their demand with their own generation resources and may have a connection to a larger power system. As battery technologies improve, they are becoming a more viable option in these distributed power systems and research is necessary to determine the most efficient way to utilize them. This report will investigate several unique operating strategies for batteries in small power systems and analyze their benefits. These new operating strategies will help reduce operating costs and improve system reliability.

  4. Calculating Optimum sowing factor: A tool to evaluate sowing strategies and minimize seedling production cost

    Treesearch

    Eric van Steenis

    2013-01-01

    This paper illustrates how to use an excel spreadsheet as a decision-making tool to determine optimum sowing factor to minimize seedling production cost. Factors incorporated into the spreadsheet calculations include germination percentage, seeder accuracy, cost per seed, cavities per block, costs of handling, thinning, and transplanting labor, and more. In addition to...
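    The trade-off such a spreadsheet captures can be sketched with a simplified expected-cost model: sowing more seeds per cavity lowers the chance of an empty cavity (which must later be filled by transplanting) but raises seed and thinning costs. The sketch assumes independent germination per seed and a perfect seeder, and every cost figure below is invented.

```python
def expected_cost(k, germ, seed_cost, fill_cost, thin_cost):
    """Expected cost per cavity when sowing k seeds.
    Simplification: independent (binomial) germination, perfect seeder."""
    p_empty = (1 - germ) ** k
    extra_seedlings = k * germ - (1 - p_empty)   # germinants beyond the one kept
    return k * seed_cost + p_empty * fill_cost + extra_seedlings * thin_cost

def optimum_sowing_factor(germ, seed_cost, fill_cost, thin_cost, k_max=6):
    """Smallest-cost number of seeds per cavity."""
    return min(range(1, k_max + 1),
               key=lambda k: expected_cost(k, germ, seed_cost,
                                           fill_cost, thin_cost))

# invented inputs: 80% germination; transplanting an empty cavity costs far
# more than a seed, thinning a double costs a little labor
k_best = optimum_sowing_factor(germ=0.8, seed_cost=0.01,
                               fill_cost=0.25, thin_cost=0.02)
```

    With these numbers, two seeds per cavity beats both single sowing (too many empties) and triple sowing (too much seed and thinning labor), illustrating why the optimum shifts with germination percentage and the cost inputs the paper lists.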

  5. On the relationship between parallel computation and graph embedding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, A.K.

    1989-01-01

    The problem of efficiently simulating an algorithm designed for an n-processor parallel machine G on an m-processor parallel machine H with n > m arises when parallel algorithms designed for an ideal size machine are simulated on existing machines which are of a fixed size. The author studies this problem when every processor of H takes over the function of a number of processors in G, and he phrases the simulation problem as a graph embedding problem. New embeddings presented address relevant issues arising from the parallel computation environment. The main focus centers around embedding complete binary trees into smaller-sized binary trees, butterflies, and hypercubes. He also considers simultaneous embeddings of r source machines into a single hypercube. Constant factors play a crucial role in his embeddings since they are not only important in practice but also lead to interesting theoretical problems. All of his embeddings minimize dilation and load, which are the conventional cost measures in graph embeddings and determine the maximum amount of time required to simulate one step of G on H. His embeddings also optimize a new cost measure called ({alpha},{beta})-utilization which characterizes how evenly the processors of H are used by the processors of G. Ideally, the utilization should be balanced (i.e., every processor of H simulates at most (n/m) processors of G) and the ({alpha},{beta})-utilization measures how far off from a balanced utilization the embedding is. He presents embeddings for the situation when some processors of G have different capabilities (e.g. memory or I/O) than others and the processors with different capabilities are to be distributed uniformly among the processors of H. Placing such conditions on an embedding results in an increase in some of the cost measures.

  6. Minimal investment risk of a portfolio optimization problem with budget and investment concentration constraints

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-02-01

    In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach failed to identify accurately the minimal investment risk of the portfolio optimization problem.

  7. Efficient distribution of toy products using ant colony optimization algorithm

    NASA Astrophysics Data System (ADS)

    Hidayat, S.; Nurpraja, C. A.

    2017-12-01

    CV Atham Toys (CVAT) produces wooden toys and furniture and comprises 13 small and medium industries. CVAT always attempts to deliver customer orders on time, but delivery costs are high: the infrastructure is inadequate, delivery routes are long, car maintenance costs are high, and the government fuel subsidy is only temporary. This study seeks to minimize the cost of product distribution based on the shortest route, using one of five Ant Colony Optimization (ACO) algorithms to solve the Vehicle Routing Problem (VRP), and concludes that the best of the five is the Ant Colony System (ACS) algorithm. The best route in the 1st week gave a total distance of 124.11 km at a cost of Rp 66,703.75; the 2nd week, 132.27 km at Rp 71,095.13; the 3rd week, 122.70 km at Rp 65,951.25; and the 4th week, 132.27 km at Rp 74,083.63. Prior to this study there was no effort to calculate these figures.
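    An Ant Colony System-style sketch on a small symmetric TSP shows the mechanics the study relies on: tours built edge by edge under combined pheromone and inverse-distance heuristics, with evaporation and best-tour reinforcement. All parameters and the instance are invented, and a real VRP additionally carries vehicle-capacity constraints not modeled here.

```python
import math
import random

def aco_tsp(points, n_ants=10, n_iters=50, alpha=1.0, beta=3.0,
            rho=0.5, q0=0.9, seed=1):
    """ACS-flavored sketch for tiny TSP instances: ants pick edges by
    pheromone (alpha) and inverse distance (beta); the best-so-far tour
    reinforces its edges after evaporation each iteration."""
    random.seed(seed)
    n = len(points)
    dist = [[math.dist(a, b) for b in points] for a in points]
    tau = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ant in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                scored = [(tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta, j)
                          for j in unvisited]
                if random.random() < q0:          # ACS rule: exploit best edge
                    j = max(scored)[1]
                else:                             # else explore proportionally
                    total = sum(s for s, _ in scored)
                    r, acc = random.random() * total, 0.0
                    for s, cand in scored:
                        acc += s
                        if acc >= r:
                            j = cand
                            break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                        # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for k in range(n):                        # reinforce best-so-far tour
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += 1.0 / best_len
            tau[j][i] += 1.0 / best_len
    return best_tour, best_len

# invented instance: four corners of the unit square, optimal tour length 4
corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tour, length = aco_tsp(corners)
```

    On this trivial instance the colony reliably recovers the perimeter tour; the value of the metaheuristic shows only at realistic sizes, where exhaustive search is impossible.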

  8. Enabling Controlling Complex Networks with Local Topological Information.

    PubMed

    Li, Guoqi; Deng, Lei; Xiao, Gaoxi; Tang, Pei; Wen, Changyun; Hu, Wuhua; Pei, Jing; Shi, Luping; Stanley, H Eugene

    2018-03-15

    Complex networks characterize the nature of internal/external interactions in real-world systems including social, economic, biological, ecological, and technological networks. Two issues remain obstacles to achieving control of large-scale networks: structural controllability, which describes the ability to guide a dynamical system from any initial state to any desired final state in finite time with a suitable choice of inputs; and optimal control, which is a typical control approach to minimize the cost of driving the network to a predefined state with a given number of control inputs. For large complex networks without global information of network topology, both problems remain essentially open. Here we combine graph theory and control theory to tackle the two problems in one go, using only local network topology information. For the structural controllability problem, a distributed local-game matching method is proposed, where every node plays a simple Bayesian game with local information and local interactions with adjacent nodes, ensuring a suboptimal solution at linear complexity. Starting from any structural controllability solution, a minimizing longest control path method can efficiently reach a good solution for the optimal control in large networks. Our results provide solutions for distributed complex network control and demonstrate a way to link structural controllability and optimal control together.
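    The structural-controllability side has a classical global benchmark: by the maximum-matching principle, the minimum number of driver (input) nodes of a directed network is N minus the size of a maximum matching between out- and in-copies of the nodes. The sketch below computes that benchmark with a standard augmenting-path matching; the paper's contribution is a distributed, local-game approximation of this quantity, which is not reproduced here.

```python
def max_matching(n, edges):
    """Maximum matching of the bipartite graph whose sides are the out- and
    in-copies of the nodes of a directed network. Nodes whose in-copy is
    unmatched must be driven directly by an external input."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
    match_r = {}                      # in-copy -> matched out-copy
    def augment(u, seen):
        for v in succ.get(u, []):
            if v in seen:
                continue
            seen.add(v)
            if v not in match_r or augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False
    size = sum(augment(u, set()) for u in range(n))
    return size, match_r

def n_drivers(n, edges):
    """Minimum number of driver nodes: max(N - |maximum matching|, 1)."""
    size, _ = max_matching(n, edges)
    return max(n - size, 1)
```

    A directed chain 0 -> 1 -> 2 -> 3 needs a single driver (the head), while a star 0 -> {1, 2, 3} needs three, since one hub cannot independently steer three leaves.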

  9. Optimal design of groundwater remediation system using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun

    2014-11-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.

  10. The Costs of Legislated Minimal Competency Requirements. A background paper prepared for the Minimal Competency Workshops sponsored by the Education Commission of the States and the National Institute of Education.

    ERIC Educational Resources Information Center

    Anderson, Barry D.

    Little is known about the costs of setting up and implementing legislated minimal competency testing (MCT). To estimate the financial obstacles which lie between the idea and its implementation, MCT requirements are viewed from two perspectives. The first, government regulation, views legislated minimal competency requirements as an attempt by the…

  11. Electrical Resistivity Tomography using a finite element based BFGS algorithm with algebraic multigrid preconditioning

    NASA Astrophysics Data System (ADS)

    Codd, A. L.; Gross, L.

    2018-03-01

    We present a new inversion method for Electrical Resistivity Tomography which, in contrast to established approaches, minimizes the cost function prior to finite element discretization for the unknown electric conductivity and electric potential. Minimization is performed with the Broyden-Fletcher-Goldfarb-Shanno method (BFGS) in an appropriate function space. BFGS is self-preconditioning and avoids construction of the dense Hessian, which is the major obstacle to solving large 3-D problems on parallel computers. In addition to the forward problem predicting the measurement from the injected current, the so-called adjoint problem also needs to be solved. For this problem a virtual current is injected through the measurement electrodes and an adjoint electric potential is obtained. The magnitude of the injected virtual current is equal to the misfit at the measurement electrodes. This new approach has the advantage that the solution process of the optimization problem remains independent of the meshes used for discretization and allows for mesh adaptation during inversion. Computation time is reduced by using superposition of pole loads for the forward and adjoint problems. A smoothed aggregation algebraic multigrid (AMG) preconditioned conjugate gradient is applied to construct the potentials for a given electric conductivity estimate and to construct a first-level BFGS preconditioner. Through the additional reuse of AMG operators and coarse grid solvers, inversion time for large 3-D problems can be reduced further. We apply our new inversion method to synthetic survey data created from a resistivity profile representing the characteristics of subsurface fluid injection. We further test it on data obtained from a 2-D surface electrode survey on Heron Island, a small tropical island off the east coast of central Queensland, Australia.
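    The optimizer itself is standard and small enough to sketch. Below is a textbook dense BFGS with a crude backtracking step, run on an invented 2-D quadratic misfit; the paper's version operates matrix-free in function space with AMG-preconditioned solves supplying the first-level preconditioner, none of which is reproduced here.

```python
def bfgs(f, grad, x0, iters=50):
    """Textbook BFGS with inverse-Hessian approximation H and a simple
    halving line search. Dense and dimension-explicit, unlike the paper's
    function-space, matrix-free formulation."""
    n = len(x0)
    H = [[float(i == j) for j in range(n)] for i in range(n)]
    x, g = list(x0), grad(x0)
    for _ in range(iters):
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        t, fx = 1.0, f(x)                      # backtrack until f decreases
        while f([x[i] + t * p[i] for i in range(n)]) > fx and t > 1e-12:
            t *= 0.5
        x_new = [x[i] + t * p[i] for i in range(n)]
        g_new = grad(x_new)
        s = [x_new[i] - x[i] for i in range(n)]
        y = [g_new[i] - g[i] for i in range(n)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if abs(sy) > 1e-12:                    # curvature-guarded update:
            rho = 1.0 / sy                     # H <- (I-rho s y^T) H (I-rho y s^T) + rho s s^T
            A = [[(i == j) - rho * s[i] * y[j] for j in range(n)]
                 for i in range(n)]
            HA = [[sum(A[i][k] * H[k][j] for k in range(n)) for j in range(n)]
                  for i in range(n)]
            H = [[sum(HA[i][k] * A[j][k] for k in range(n)) + rho * s[i] * s[j]
                  for j in range(n)] for i in range(n)]
        x, g = x_new, g_new
    return x

# invented quadratic "misfit" with minimum at (3, -1)
f = lambda x: (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2
grad = lambda x: [2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)]
xmin = bfgs(f, grad, [0.0, 0.0])
```

    The appeal visible even in this toy is that only gradients are needed: H is built from successive (s, y) pairs, which is what makes the method attractive when the true Hessian is too large or expensive to form.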

  12. A cost-function approach to rival penalized competitive learning (RPCL).

    PubMed

    Ma, Jinwen; Wang, Taijun

    2006-08-01

    Rival penalized competitive learning (RPCL) has been shown to be a useful tool for clustering a set of sample data in which the number of clusters is unknown. However, the RPCL algorithm was proposed heuristically and still lacks a mathematical theory describing its convergence behavior. To address this convergence problem, we investigate it via a cost-function approach. By theoretical analysis, we prove that a general form of RPCL, called distance-sensitive RPCL (DSRPCL), is associated with the minimization of a cost function on the weight vectors of a competitive learning network. As a DSRPCL process decreases the cost to a local minimum, a number of weight vectors eventually fall into a hypersphere surrounding the sample data, while the other weight vectors diverge to infinity. Moreover, it is shown by theoretical analysis and simulation experiments that if the cost reaches the global minimum, the correct number of weight vectors is automatically selected and located around the centers of the actual clusters. Finally, we apply the DSRPCL algorithms to unsupervised color image segmentation and classification of the wine data.
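
The winner/rival mechanic at the heart of RPCL is compact enough to sketch. In this illustrative single step (the learning rate `eta` and de-learning rate `gamma` are arbitrary choices, not the paper's), the closest weight vector moves toward the sample while the second-closest is pushed away:

```python
def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def rpcl_step(x, weights, eta=0.05, gamma=0.02):
    """One RPCL update: the winner learns, the rival is de-learned.
    Mutates `weights` in place; returns (winner_index, rival_index)."""
    order = sorted(range(len(weights)), key=lambda i: dist2(weights[i], x))
    w, r = order[0], order[1]
    weights[w] = [wi + eta * (xi - wi) for wi, xi in zip(weights[w], x)]   # attract
    weights[r] = [ri - gamma * (xi - ri) for ri, xi in zip(weights[r], x)]  # repel
    return w, r
```

Iterating such steps over a data set is what drives extra weight vectors away from the data, the divergence behavior the cost-function analysis explains.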

  13. Economic impact of minimally invasive lumbar surgery.

    PubMed

    Hofstetter, Christoph P; Hofer, Anna S; Wang, Michael Y

    2015-03-18

    Cost effectiveness has been demonstrated for traditional lumbar discectomy and lumbar laminectomy as well as for instrumented and noninstrumented arthrodesis. While emerging evidence suggests that minimally invasive spine surgery reduces morbidity and duration of hospitalization and accelerates return to activities of daily living, data regarding the cost effectiveness of these novel techniques are limited. The current study analyzes all available data on minimally invasive techniques for lumbar discectomy, decompression, short-segment fusion and deformity surgery. In general, minimally invasive spine procedures appear to hold promise of quicker patient recovery times and earlier return to work. Thus, minimally invasive lumbar spine surgery appears to have the potential to be a cost-effective intervention. Moreover, novel less invasive procedures are less destabilizing and may therefore be utilized for certain indications that traditionally required arthrodesis procedures. However, there is a lack of studies analyzing the economic impact of minimally invasive spine surgery. Future studies are necessary to confirm the durability and further define indications for minimally invasive lumbar spine procedures.

  14. Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ku, R. T.

    1972-01-01

    The problem of optimal control of a linear discrete-time stochastic dynamical system with unknown and, possibly, stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via the Monte Carlo method show that, in terms of the performance measure, for stable systems the open loop feedback optimal control system is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.
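
The estimation core of such a scheme can be illustrated with the scalar linear Kalman filter, the building block that the extended filter in the thesis linearizes around when state and parameters are estimated jointly. All constants below are illustrative, not taken from the thesis:

```python
def kalman_scalar(zs, a=1.0, c=1.0, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = a x_k + w_k,  z_k = c x_k + v_k,
    with process noise variance q and measurement noise variance r."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        x, p = a * x, a * a * p + q          # predict
        k = p * c / (c * c * p + r)          # Kalman gain
        x = x + k * (z - c * x)              # update with the innovation
        p = (1.0 - k * c) * p
        estimates.append(x)
    return estimates
```

In the enforced-separation scheme discussed above, estimates of this kind are simply substituted into the certainty-equivalent quadratic-cost controller.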

  15. Optimization of location routing inventory problem with transshipment

    NASA Astrophysics Data System (ADS)

    Ghani, Nor Edayu Abd; Shariff, S. Sarifah Radiah; Zahari, Siti Meriam

    2015-05-01

    The Location Routing Inventory Problem (LRIP) integrates three components of the supply chain: location-allocation, vehicle routing and inventory management. The aim of the study is to minimize the total system cost in the supply chain. Transshipment is introduced in order to allow products to be shipped to a customer who experiences a shortage, either directly from the supplier or from another customer. In this study, LRIP is extended with transshipment (LRIPT), and customers act as the transshipment points. We select the transshipment points using the p-center method and present the results for two groups of cases. Based on the analysis, the results indicate that LRIPT performed well compared to LRIP.

  16. Optimal Diet Planning for Eczema Patient Using Integer Programming

    NASA Astrophysics Data System (ADS)

    Zhen Sheng, Low; Sufahani, Suliadi

    2018-04-01

    Human diet planning is conducted by choosing appropriate food items that fulfill the nutritional requirements of the diet formulation. This paper discusses the application of integer programming to build a mathematical model of diet planning for eczema patients. The model developed is used to solve the diet problem of eczema patients from the young age group. Integer programming is a scientific approach to selecting suitable food items that seeks to minimize cost under the conditions of meeting desired nutrient quantities, avoiding food allergens and including certain foods that bring relief to eczema conditions. This paper illustrates that the integer programming approach is able to produce an optimal and feasible solution to the diet problem of eczema patients.
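
On a toy instance the structure of such an integer program can even be exposed by exhaustive search. The food data, nutrient minimums and allergen flags below are invented for illustration; a realistic formulation would hand the same model to an ILP solver rather than enumerate:

```python
from itertools import product

# hypothetical food data: name -> (cost $, protein g, vitamin C mg, contains allergen)
FOODS = {
    "oats":    (0.50, 5, 0,  False),
    "egg":     (0.30, 6, 0,  True),    # excluded for this patient
    "spinach": (0.80, 3, 30, False),
    "salmon":  (2.50, 20, 0, False),
}

def plan_diet(foods, min_protein=25, min_vit_c=30, max_servings=3):
    """Cheapest integer servings meeting nutrient minimums; allergens excluded."""
    items = [(name, d) for name, d in foods.items() if not d[3]]
    best = None
    for servings in product(range(max_servings + 1), repeat=len(items)):
        protein = sum(s * d[1] for s, (_, d) in zip(servings, items))
        vit_c = sum(s * d[2] for s, (_, d) in zip(servings, items))
        if protein < min_protein or vit_c < min_vit_c:
            continue   # infeasible: a nutrient requirement is not met
        cost = sum(s * d[0] for s, (_, d) in zip(servings, items))
        if best is None or cost < best[0]:
            best = (cost, {name: s for (name, _), s in zip(items, servings)})
    return best
```

The decision variables (integer serving counts), the cost objective and the nutrient constraints map one-to-one onto the integer program described in the abstract.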

  17. A variable-gain output feedback control design approach

    NASA Technical Reports Server (NTRS)

    Haylo, Nesim

    1989-01-01

    A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighting the performance at many operating points. The solution is obtained by embedding the problem into the Multi-Configuration Control (MCC) problem, a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.

  18. Epidemic spreading on random surfer networks with optimal interaction radius

    NASA Astrophysics Data System (ADS)

    Feng, Yun; Ding, Li; Hu, Ping

    2018-03-01

    In this paper, the optimal control problem of epidemic spreading on random surfer heterogeneous networks is considered. An epidemic spreading model is established according to the classification of individuals' initial interaction radii. Then, a control strategy is proposed based on adjusting individuals' interaction radii. The global stability of the disease-free and endemic equilibria of the model is investigated. We prove that an optimal solution exists for the optimal control problem and present its explicit form. Numerical simulations are conducted to verify the correctness of the theoretical results. It is shown that the optimal control strategy is effective in minimizing the density of infected individuals and the cost associated with the adjustment of interaction radii.
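
The uncontrolled dynamics behind such models can be sketched with the mean-field SIS equation di/dt = beta*i*(1 - i) - gamma*i, integrated by forward Euler. The rates below are arbitrary; the paper's network model with heterogeneous interaction radii is richer than this scalar caricature:

```python
def sis_trajectory(beta, gamma, i0=0.01, dt=0.01, steps=5000):
    """Forward-Euler integration of the mean-field SIS model
    di/dt = beta * i * (1 - i) - gamma * i."""
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1.0 - i) - gamma * i)
    return i
```

For beta > gamma the infected density approaches the endemic level 1 - gamma/beta; otherwise it decays to the disease-free equilibrium, which is the regime a radius-adjustment control strategy tries to reach.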

  19. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration.
Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA-II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.

  20. Route optimization as an instrument to improve animal welfare and economics in pre-slaughter logistics.

    PubMed

    Frisk, Mikael; Jonsson, Annie; Sellman, Stefan; Flisberg, Patrik; Rönnqvist, Mikael; Wennergren, Uno

    2018-01-01

    Each year, more than three million animals are transported from farms to abattoirs in Sweden. Animal transport entails economic and environmental costs and a negative impact on animal welfare. Time and the number of pick-up stops between farms and abattoirs are two key parameters for animal welfare. Both are highly dependent on efficient, high-quality transportation planning, which may be difficult if done manually. We have examined the benefits of using route optimization in cattle transportation planning. To simulate the effects of various planning time windows, transportation time regulations and numbers of pick-up stops along each route, we have used data that represent one year of cattle transport. Our optimization model is a development of a model used in forestry transport that solves a general pick-up and delivery vehicle routing problem. The objective is to minimize transportation costs. We have shown that the length of the planning time window has a significant impact on the animal transport time, the total driving time and the total distance driven; these parameters affect not only animal welfare but also the economy and environment in the pre-slaughter logistic chain. In addition, we have shown that changes in animal transportation regulations, such as minimizing the number of allowed pick-up stops on each route or minimizing animal transportation time, will have positive effects on animal welfare measured in transportation hours and number of pick-up stops. However, this leads to an increase in working time and driven distances, leading to higher transportation costs and a negative environmental impact.

  2. Low Cost Missions Operations on NASA Deep Space Missions

    NASA Astrophysics Data System (ADS)

    Barnes, R. J.; Kusnierkiewicz, D. J.; Bowman, A.; Harvey, R.; Ossing, D.; Eichstedt, J.

    2014-12-01

    The ability to lower mission operations costs on any long duration mission depends on a number of factors: the opportunities for science, the flight trajectory, and the cruise phase environment, among others. Many deep space missions employ long cruises to their final destination with minimal science activities along the way; others may perform science observations on a near-continuous basis. This paper discusses approaches employed by two NASA missions implemented by the Johns Hopkins University Applied Physics Laboratory (JHU/APL) to minimize mission operations costs without compromising mission success: the New Horizons mission to Pluto, and the Solar Terrestrial Relations Observatories (STEREO). The New Horizons spacecraft launched in January 2006 for an encounter with the Pluto system. The spacecraft trajectory required no deterministic on-board delta-V, and so the mission ops team settled in for the rest of its 9.5-year cruise. The spacecraft has spent much of its cruise phase in a "hibernation" mode, which has enabled the spacecraft to be maintained with a small operations team, and minimized the contact time required from the NASA Deep Space Network. The STEREO mission comprises two three-axis stabilized sun-staring spacecraft in heliocentric orbit at a distance of 1 AU from the sun. The spacecraft were launched in October 2006. The STEREO instruments operate in a "decoupled" mode from the spacecraft, and from each other. Since STEREO operations are largely routine, unattended ground station contact operations were implemented early in the mission. Commands flow from the MOC to be uplinked, and the data recorded on-board is downlinked and relayed back to the MOC. Tools run in the MOC to assess the health and performance of ground system components. Alerts are generated and personnel are notified of any problems. Spacecraft telemetry is similarly monitored and alarmed, thus ensuring safe, reliable, low cost operations.

  3. Using random forest for reliable classification and cost-sensitive learning for medical diagnosis.

    PubMed

    Yang, Fan; Wang, Hua-zhen; Mi, Hong; Lin, Cheng-de; Cai, Wei-wen

    2009-01-30

    Most machine-learning classifiers output label predictions for new instances without indicating how reliable the predictions are. The applicability of these classifiers is limited in critical domains where incorrect predictions have serious consequences, such as medical diagnosis. Further, the default assumption of equal misclassification costs is most likely violated in medical diagnosis. In this paper, we present a modified random forest classifier which is incorporated into the conformal predictor scheme. A conformal predictor is a transductive learning scheme that uses Kolmogorov complexity to test the randomness of a particular sample with respect to the training sets. Our method exhibits a well-calibrated property: the performance can be set prior to classification, and the accuracy rate is exactly equal to the predefined confidence level. Further, to address the cost-sensitive problem, we extend our method to a label-conditional predictor which takes into account different costs for misclassifications in different classes and allows a different confidence level to be specified for each class. Intensive experiments on benchmark datasets and real-world applications show the resultant classifier is well-calibrated and able to control the specific risk of each class. The method of using the RF outlier measure to design a nonconformity measure benefits the resultant predictor. Further, a label-conditional classifier is developed and turns out to be an alternative approach to the cost-sensitive learning problem that relies on label-wise predefined confidence levels. The target of minimizing the risk of misclassification is achieved by specifying a different confidence level for each class.
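
The calibration arithmetic of a label-conditional conformal predictor is compact enough to sketch. The nonconformity scores below are generic placeholders, not the paper's random-forest-based measure:

```python
def conformal_pvalue(cal_scores, new_score):
    """Fraction of calibration scores at least as nonconforming (+1 smoothing)."""
    ge = sum(1 for s in cal_scores if s >= new_score)
    return (ge + 1) / (len(cal_scores) + 1)

def predict_set(cal_by_label, score_fn, x, alpha=0.1):
    """Label-conditional conformal set: keep every label whose p-value exceeds alpha."""
    return [y for y, scores in cal_by_label.items()
            if conformal_pvalue(scores, score_fn(x, y)) > alpha]
```

Per class, the prediction set contains the true label with probability at least 1 - alpha, which is the label-wise calibration property the abstract exploits for class-specific risk control.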

  4. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the location of the components in a wind farm to maximize the energy output for a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe its behavior and to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, to have a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the impact of the wake effect of the wind turbines upon one another to describe and evaluate the reduction in the power production capacity of the system depending on the layout distribution of the wind turbines.

  5. Control algorithms for dynamic attenuators.

    PubMed

    Hsieh, Scott S; Pelc, Norbert J

    2014-06-01

    The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. 
The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.

  6. Minimization of municipal solid waste transportation route in West Jakarta using Tabu Search method

    NASA Astrophysics Data System (ADS)

    Chaerul, M.; Mulananda, A. M.

    2018-04-01

    Indonesia still adopts the collect-haul-dispose concept for municipal solid waste handling, which leads to queues of waste trucks at the final disposal site (TPA). The study aims to minimize the total distance of the waste transportation system by applying a transshipment model. In this case, the transshipment point is a compaction facility (SPA). Small-capacity trucks collect the waste from temporary collection points (TPS) and deliver it to the compaction facility, which is located near the waste generators. After compaction, the waste is transported using large-capacity trucks to the final disposal site, which is located far from the city. The waste transportation problem can be formulated as a Vehicle Routing Problem (VRP). In this study, the shortest route distances from truck pool to TPS, TPS to SPA, and SPA to TPA were determined using a meta-heuristic method, namely two-phase Tabu Search. The TPS studied are of the container type, with 43 units throughout West Jakarta City served by 38 Armroll trucks with a capacity of 10 m3 each. The result determines the assignment of each truck from the pool to the selected TPS, SPA and TPA with a total minimum distance of 2,675.3 km. Minimizing the distance also minimizes the total waste transportation cost to be borne by the government.
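
A miniature tabu search over 2-opt moves shows the mechanics used for such routing problems. The tenure, iteration budget and aspiration rule below are illustrative choices, not the study's exact two-phase configuration:

```python
def tour_len(tour, d):
    """Length of a closed tour under distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search(d, iters=200, tenure=5):
    n = len(d)
    current = list(range(n))
    best = current[:]
    tabu = {}                       # move -> last iteration at which it is tabu
    for it in range(iters):
        best_move, best_cand = None, None
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = current[:]
                cand[i:j + 1] = reversed(cand[i:j + 1])    # 2-opt move
                # tabu unless it beats the global best (aspiration criterion)
                if tabu.get((i, j), -1) >= it and tour_len(cand, d) >= tour_len(best, d):
                    continue
                if best_cand is None or tour_len(cand, d) < tour_len(best_cand, d):
                    best_move, best_cand = (i, j), cand
        if best_cand is None:
            break                   # every move is tabu and none aspirates
        current = best_cand
        tabu[best_move] = it + tenure
        if tour_len(current, d) < tour_len(best, d):
            best = current[:]
    return best, tour_len(best, d)
```

Accepting the best non-tabu neighbor even when it worsens the tour is what lets the search escape local minima that plain hill climbing would get stuck in.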

  7. Adaptive Transmission Planning: Implementing a New Paradigm for Managing Economic Risks in Grid Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Benjamin F.; Xu, Qingyu; Ho, Jonathan

    The problem of whether, where, when, and what types of transmission facilities to build so as to minimize costs and maximize net economic benefits has challenged the power industry from the beginning, ever since Thomas Edison debated whether to build longer dc distribution lines (with their high losses) or new power stations when expanding his urban markets. Today's planning decisions are far more complex, as grids cover the continent and new transmission, generation, and demand-side technologies emerge.

  8. Fly-by-Wireless Update

    NASA Technical Reports Server (NTRS)

    Studor, George

    2010-01-01

    The presentation reviews what is meant by the term 'fly-by-wireless', common problems and motivation, provides recent examples, and examines NASA's future and basis for collaboration. The vision is to minimize cables and connectors and increase functionality across the aerospace industry by providing reliable, lower cost, modular, and higher performance alternatives to wired data connectivity to benefit the entire vehicle/program life-cycle. Focus areas are system engineering and integration methods to reduce cables and connectors, vehicle provisions for modularity and accessibility, and a 'tool box' of alternatives to wired connectivity.

  9. Experimental and Theoretical Results in Output-Trajectory Redesign for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Dewey, J. S.; Devasia, Santosh

    1996-01-01

    In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures because the input-state trajectories that achieve the required output may cause excessive vibrations in the structure. A trade-off is then required between tracking and vibration reduction. We pose and solve this problem as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.
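
The trade-off can be seen in closed form on a one-dimensional toy: with a static "plant" y = k*u, minimizing q*(y - y_ref)**2 + r*u**2 over the input u balances tracking error against input effort (a stand-in for vibration-inducing excitation). The plant gain and weights below are illustrative, not the paper's flexible-structure model:

```python
def tradeoff_input(y_ref, k=2.0, q=1.0, r=0.5):
    """Minimize q*(k*u - y_ref)**2 + r*u**2 over u.
    Setting the derivative to zero: 2*q*k*(k*u - y_ref) + 2*r*u = 0,
    hence u* = q*k*y_ref / (q*k*k + r)."""
    return q * k * y_ref / (q * k * k + r)
```

As r grows the optimal input shrinks toward zero (less excitation, worse tracking); at r = 0 the input tracks exactly with u = y_ref / k.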

  10. Error minimizing algorithms for nearest neighbor classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Hush, Don; Zimmer, G. Beate

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous-domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost-sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  11. Improved parallel data partitioning by nested dissection with applications to information retrieval.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar

    The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show that partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
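
The objective being minimized, total communication volume for sparse matrix-vector multiplication, can be computed directly from a candidate distribution. A minimal sketch with 1-D owner arrays and coordinate-format nonzeros (the data layout and names are illustrative, not the paper's 2D scheme):

```python
def comm_volume(row_owner, vec_owner, nonzeros):
    """Communication volume of y = A x for a given distribution:
    x_j must be sent to every process that owns a row i with a_ij != 0
    and does not already own x_j."""
    needed_by = {}
    for i, j in nonzeros:
        needed_by.setdefault(j, set()).add(row_owner[i])
    return sum(len(procs - {vec_owner[j]}) for j, procs in needed_by.items())
```

Partitioners such as the nested dissection method search over `row_owner`/`vec_owner` assignments to drive this count down while keeping the nonzeros balanced.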

  12. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting production cost and processing time, the cutting parameters also affect the environment. An optimization model is needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using Eco-indicator 99. A numerical example is given to show the implementation of the model, solved using the OptQuest tool of Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize both the production cost and the environmental impact.
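
One common way to trace such a bi-objective trade-off is weighted-sum scalarization over a finite candidate set. The cost and impact functions below are invented stand-ins, not the paper's machining models:

```python
def weighted_sum_front(f1, f2, candidates, weights):
    """Minimize w*f1 + (1-w)*f2 for each weight; collect the distinct minimizers."""
    front = []
    for w in weights:
        best = min(candidates, key=lambda x: w * f1(x) + (1.0 - w) * f2(x))
        if best not in front:
            front.append(best)
    return front

# invented stand-ins: production cost falls with cutting speed,
# environmental impact rises with it
cost = lambda v: 100.0 / v
impact = lambda v: v * v / 100.0
speeds = [50, 100, 150, 200, 250, 300]
```

Sweeping the weight from 0 to 1 moves the selected cutting speed between the impact-optimal and cost-optimal extremes, approximating points on the Pareto front.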

  13. Efficient constraint handling in electromagnetism-like algorithm for traveling salesman problem with time windows.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints are associated with the corresponding time windows of customers; this strategy is combined with a penalty approach to eliminate infeasibilities arising from time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing them to those of state-of-the-art metaheuristics on several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times than the test algorithms.
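
The penalty treatment of time-window violations can be sketched as follows: route cost is travel cost plus a large multiplier times the total lateness, while early arrivals simply wait. The multiplier, service time and data shapes below are illustrative, not the paper's exact scheme:

```python
def route_cost_with_penalty(route, travel, windows, service=0.0, mu=1000.0):
    """Travel cost plus mu times total time-window violation (lateness).
    Arriving early means waiting until the window opens."""
    t, cost, lateness = 0.0, 0.0, 0.0
    prev = route[0]
    for node in route[1:]:
        cost += travel[prev][node]
        t += travel[prev][node]
        early, late = windows[node]
        if t < early:
            t = early                 # wait for the window to open
        elif t > late:
            lateness += t - late      # violation, penalized below
        t += service
        prev = node
    return cost + mu * lateness
```

Because `mu` dwarfs any travel saving, minimizing this penalized cost pushes a metaheuristic toward feasible routes first and cheap routes second.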

  14. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    PubMed Central

    Yurtkuran, Alkın

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints associated with the corresponding time windows of customers are introduced and combined with a penalty approach to eliminate infeasibilities arising from time-window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing them to those of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms. PMID:24723834

  15. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted to compare with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods-support vector regression (SVR) and Kriging (KRG)-were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.

  16. A fast and low-cost microfabrication approach for six types of thermoplastic substrates with reduced feature size and minimized bulges using sacrificial layer assisted laser engraving.

    PubMed

    Gu, Longjun; Yu, Guodong; Li, Cheuk-Wing

    2018-01-02

    Since polydimethylsiloxane (PDMS) is notorious for its severe sorption of biological compounds and even nanoparticles, thermoplastics have become a promising substrate for microdevices. Although CO2 laser engraving is an efficient method for thermoplastic device fabrication, it is accompanied by poor bonding issues due to severe bulging, and its feature size is limited by the diameter of the laser beam. In this study, a low-cost microfabrication method is proposed in which a 1 mm thick polymethylmethacrylate (PMMA) sheet is reversibly sealed over an engraving substrate to reduce the channel feature size and minimize the bulges of laser-engraved channels. PMMA, polycarbonate (PC), polystyrene (PS), perfluoroalkoxy alkane (PFA), cyclic-olefin polymers (COP) and polylactic acid (PLA) were found compatible with this sacrificial-layer-assisted laser engraving technique. A microchannel width as small as ∼40 μm was attainable with a laser beam five times larger in diameter. The bulging height was significantly reduced, to less than 5 μm for most substrates, which facilitated leak-proof device bonding without channel deformation. Microdevices with high-aspect-ratio channels were prepared to demonstrate the applicability of this microfabrication method. We believe this fast and low-cost fabrication approach for thermoplastics will be of interest to researchers who have encountered problems with polydimethylsiloxane-based microdevices in their applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Cooperative data dissemination to mission sites

    NASA Astrophysics Data System (ADS)

    Chen, Fangfei; Johnson, Matthew P.; Bar-Noy, Amotz; La Porta, Thomas F.

    2010-04-01

    Timely dissemination of information to mobile users is vital in many applications. In a critical situation, no network infrastructure may be available for dissemination beyond the on-board storage capability of the mobile users themselves. We consider the following specialized content distribution application: a group of users equipped with wireless devices builds an ad hoc network in order to cooperatively retrieve information from certain regions (the mission sites). Each user requires access to some set of information items originating from sources lying within a region, and desires low-latency access to its desired data items upon request (i.e., when pulled). To minimize the average response time, we allow users to pull data either directly from sources or, when possible, from other nearby users who have already pulled, and continue to carry, the desired data items. That is, we allow data to be pushed to one user and then pulled by one or more additional users. The total latency experienced by a user vis-à-vis a certain data item is then, in general, a combination of the push delay and the pull delay. We assume each delay is a function of the hop distance between the pair of points in question. Our goal in this paper is to assign data to mobile users so as to minimize the total cost and the average latency experienced by all the users. In a static setting, we solve this problem under two different schemes, one of which is easy to solve but wasteful, while the other relates to NP-hard problems but is less wasteful. In a dynamic setting, we adapt the algorithm for the static setting and develop a new algorithm that handles users' gradual arrival. Finally, we show that a trade-off can be made between minimizing the cost and the latency.

  18. Trading strategies for distribution company with stochastic distributed energy resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chunyu; Wang, Qi; Wang, Jianhui

    2016-09-01

    This paper proposes a methodology to address the trading strategies of a proactive distribution company (PDISCO) engaged in the transmission-level (TL) markets. A one-leader multi-follower bilevel model is presented to formulate the gaming framework between the PDISCO and markets. The lower-level (LL) problems include the TL day-ahead market and scenario-based real-time markets, respectively with the objectives of maximizing social welfare and minimizing operation cost. The upper-level (UL) problem is to maximize the PDISCO’s profit across these markets. The PDISCO’s strategic offers/bids interactively influence the outcomes of each market. Since the LL problems are linear and convex, while the UL problem is non-linear and non-convex, an equivalent primal–dual approach is used to reformulate this bilevel model to a solvable mathematical program with equilibrium constraints (MPEC). The effectiveness of the proposed model is verified by case studies.

  19. Avoiding potential problems when selling accounts receivable.

    PubMed

    Ayers, D H; Kincaid, T J

    1996-05-01

    Accounts receivable financing is a potential tool for managing a provider organization's working capital needs. But before entering into a financing agreement, organizations need to consider and take steps to avoid serious problems that can arise from participation in an accounts receivable financing program. For example, the purchaser may cease purchasing the receivables, leaving the organization without funding needed for operations. Or, the financing program may be inordinately complex and unnecessarily costly to the organization. Sometimes the organization itself may fail to comply with the terms of the agreement under which the accounts receivable were sold, thus necessitating that restitution be made to the purchaser or provoking charges of fraud. These potential problems should be addressed as early as possible--before an organization enters into an accounts receivable financing program--in order to minimize time, effort, and expense and maximize the benefits of the financing agreement.

  20. A biologically inspired network design model.

    PubMed

    Zhang, Xiaoge; Adamatzky, Andrew; Chan, Felix T S; Deng, Yong; Yang, Hai; Yang, Xin-She; Tsompanas, Michail-Antisthenis I; Sirakoulis, Georgios Ch; Mahadevan, Sankaran

    2015-06-04

    A network design problem is to select a subset of links in a transport network that satisfies passenger or cargo transportation demands while minimizing the overall cost of transportation. We propose a mathematical model of the foraging behaviour of the slime mould P. polycephalum to solve the network design problem and construct optimal transport networks. In our algorithm, the traffic flow between any two cities is estimated using a gravity model. The flow is imitated by the slime mould model. The model converges to a steady state, which represents a solution of the problem. We validate our approach on examples of major transport networks in Mexico and China. By comparing the networks developed with our approach against man-made highways, networks developed by the slime mould, and a cellular automata model inspired by slime mould, we demonstrate the flexibility and efficiency of our approach.
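    The gravity model mentioned above has a standard textbook form: flow between two cities is proportional to the product of their "masses" (e.g. populations) and decays with distance. A minimal sketch, assuming a quadratic distance decay and unit constant (the paper's calibration may differ):

    ```python
    # Simple gravity model for inter-city traffic flow:
    #   flow = k * pop_a * pop_b / distance**beta
    # k and beta are illustrative assumptions, not values from the paper.

    def gravity_flow(pop_a, pop_b, distance, k=1.0, beta=2.0):
        """Estimated flow between two cities under a basic gravity model."""
        return k * pop_a * pop_b / distance ** beta
    ```

    Such pairwise flow estimates can then be fed to the slime mould model as the demands it must route over the candidate network.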

  1. A Biologically Inspired Network Design Model

    PubMed Central

    Zhang, Xiaoge; Adamatzky, Andrew; Chan, Felix T.S.; Deng, Yong; Yang, Hai; Yang, Xin-She; Tsompanas, Michail-Antisthenis I.; Sirakoulis, Georgios Ch.; Mahadevan, Sankaran

    2015-01-01

    A network design problem is to select a subset of links in a transport network that satisfies passenger or cargo transportation demands while minimizing the overall cost of transportation. We propose a mathematical model of the foraging behaviour of the slime mould P. polycephalum to solve the network design problem and construct optimal transport networks. In our algorithm, the traffic flow between any two cities is estimated using a gravity model. The flow is imitated by the slime mould model. The model converges to a steady state, which represents a solution of the problem. We validate our approach on examples of major transport networks in Mexico and China. By comparing the networks developed with our approach against man-made highways, networks developed by the slime mould, and a cellular automata model inspired by slime mould, we demonstrate the flexibility and efficiency of our approach. PMID:26041508

  2. Mining Stable Roles in RBAC

    NASA Astrophysics Data System (ADS)

    Colantonio, Alessandro; di Pietro, Roberto; Ocello, Alberto; Verde, Nino Vincenzo

    In this paper we address the problem of generating a candidate role-set for an RBAC configuration that enjoys the following two key features: it minimizes the administration cost, and it is a stable candidate role-set. To achieve these goals, we implement a three-step methodology: first, we associate a weight with each role; second, we identify and remove the user-permission assignments that cannot belong to a role whose weight exceeds a given threshold; third, we restrict the problem of finding a candidate role-set for the given system configuration to the user-permission assignments that were not removed in the second step—that is, user-permission assignments that belong to roles with a weight exceeding the given threshold. We formally show—the proofs of our results are rooted in graph theory—that this methodology achieves the intended goals. Finally, we discuss practical applications of our approach to the role mining problem.

  3. Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.

    PubMed

    Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric

    2018-03-01

    Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method - multi-level particle swarm optimization (MLPSO) - to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that the proposed algorithm yields a more suitable modeling framework, as well as computational time savings and better optimization performance, than those reported in the literature on this subject.

  4. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-01-07

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.

  5. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-04-03

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.

  6. A Generalized Formulation of Demand Response under Market Environments

    NASA Astrophysics Data System (ADS)

    Nguyen, Minh Y.; Nguyen, Duc M.

    2015-06-01

    This paper presents a generalized formulation of Demand Response (DR) under deregulated electricity markets. The problem is to schedule and control the consumption of electrical loads according to the market price so as to minimize the energy cost over a day. By taking into account a model of customers' comfort (i.e., preference), the formulation can be applied to various types of loads, including what were traditionally classified as critical loads (e.g., air conditioning, lights). The proposed DR scheme is based on a Dynamic Programming (DP) framework and solved by a backward DP algorithm, in which stochastic optimization is used to treat any uncertainty that occurs in the problem. The proposed formulation is examined on the DR problem for different loads, including Heat Ventilation and Air Conditioning (HVAC), Electric Vehicles (EVs), and a new DR scheme for the water supply systems of commercial buildings. Simulation results show that significant savings can be achieved in comparison with the traditional (On/Off) scheme.

  7. Taking Costs and Diagnostic Test Accuracy into Account When Designing Prevalence Studies: An Application to Childhood Tuberculosis Prevalence.

    PubMed

    Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence

    2017-11-01

    When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.

  8. A Multiobjective Optimization Framework for Online Stochastic Optimal Control in Hybrid Electric Vehicles

    DOE PAGES

    Malikopoulos, Andreas

    2015-01-01

    The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption, demonstrating that the online Pareto control policy is an optimal control policy.

  9. A Decentralized Scheduling Policy for a Dynamically Reconfigurable Production System

    NASA Astrophysics Data System (ADS)

    Giordani, Stefano; Lujak, Marin; Martinelli, Francesco

    In this paper, the static layout of a traditional multi-machine factory producing a set of distinct goods is integrated with a set of mobile production units - robots. The robots dynamically change their work positions to increase the production rates of the different typologies of products in response to fluctuations of the demands and production costs over a given time horizon. Assuming that the planning time horizon is subdivided into a finite number of time periods, this particularly flexible layout requires the definition and solution of a complex scheduling problem involving, for each period of the planning time horizon, the determination of the positions of the robots, i.e., their assignment to the respective tasks, in order to minimize production costs given the product demand rates during the planning time horizon.

  10. A survey of gas-side fouling in industrial heat-transfer equipment

    NASA Astrophysics Data System (ADS)

    Marner, W. J.; Suitor, J. W.

    1983-11-01

    Gas-side fouling and corrosion problems occur in all of the energy intensive industries including the chemical, petroleum, primary metals, pulp and paper, glass, cement, foodstuffs, and textile industries. Topics of major interest include: (1) heat exchanger design procedures for gas-side fouling service; (2) gas-side fouling factors which are presently available; (3) startup and shutdown procedures used to minimize the effects of gas-side fouling; (4) gas-side fouling prevention, mitigation, and accommodation techniques; (5) economic impact of gas-side fouling on capital costs, maintenance costs, loss of production, and energy losses; and (6) miscellaneous considerations related to gas-side fouling. The present state-of-the-art for industrial gas-side fouling is summarized by a list of recommendations for further work in this area.

  11. A survey of gas-side fouling in industrial heat-transfer equipment

    NASA Technical Reports Server (NTRS)

    Marner, W. J.; Suitor, J. W.

    1983-01-01

    Gas-side fouling and corrosion problems occur in all of the energy intensive industries including the chemical, petroleum, primary metals, pulp and paper, glass, cement, foodstuffs, and textile industries. Topics of major interest include: (1) heat exchanger design procedures for gas-side fouling service; (2) gas-side fouling factors which are presently available; (3) startup and shutdown procedures used to minimize the effects of gas-side fouling; (4) gas-side fouling prevention, mitigation, and accommodation techniques; (5) economic impact of gas-side fouling on capital costs, maintenance costs, loss of production, and energy losses; and (6) miscellaneous considerations related to gas-side fouling. The present state-of-the-art for industrial gas-side fouling is summarized by a list of recommendations for further work in this area.

  12. Optimization of municipal solid waste collection and transportation routes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Swapan, E-mail: swapan2009sajal@gmail.com; Bhattacharyya, Bidyut Kr., E-mail: bidyut53@yahoo.co.in

    2015-09-15

    Highlights: • Profitable integrated solid waste management system. • Optimal municipal waste collection scheme between the sources and waste collection centres. • Optimal path calculation between waste collection centres and transfer stations. • Optimal waste routing between the transfer stations and processing plants. - Abstract: Optimization of municipal solid waste (MSW) collection and transportation through source separation has become one of the major concerns in MSW management system design, due to the fact that existing MSW management systems suffer from high collection and transportation costs. Generally, in a city, different waste sources are scattered throughout the city in a heterogeneous way, which increases the waste collection and transportation cost in the waste management system. Therefore, a shortest waste collection and transportation strategy can effectively reduce this cost. In this paper, we propose an optimal MSW collection and transportation scheme that focuses on the problem of minimizing the length of each waste collection and transportation route. We first formulate the MSW collection and transportation problem as a mixed integer program. Moreover, we propose a heuristic solution for the waste collection and transportation problem that can provide an optimal way for waste collection and transportation. Extensive simulations and real testbed results show that the proposed solution can significantly improve MSW performance. Results show that the proposed scheme is able to reduce the total waste collection path length by more than 30%.

  13. Optimal aeroassisted coplanar orbital transfer using an energy model

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Taylor, Deborah B.

    1989-01-01

    The atmospheric portion of the trajectories for the aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model, with the total vehicle energy, kinetic plus potential, as the independent variable rather than time. The order reduction is achieved analytically without approximating the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed in which a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained; these result in a 4th-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory is made in which the proportion of the heating-rate term versus the drag term is varied. Simulations of the guidance trajectories are presented.

  14. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity, and time. However, the authors were successful in developing a PSO-based technique, namely the iterative improvement binary particle swarm optimization technique, to address the very large capacitated multi-item multi-level lot sizing (CMIMLLS) problem. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time; then an iterative improvement local search mechanism is employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method thus shows excellent results.
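    The binary PSO step underlying such hybrids can be sketched as follows. This is an illustrative implementation of the standard sigmoid-transfer update, with conventional default parameters; the paper's actual operator and settings may differ.

    ```python
    import math
    import random

    # One binary-PSO update: velocities move toward personal and global
    # bests, and a sigmoid maps each velocity to the probability that the
    # corresponding bit is set. Parameter values are common defaults.

    def bpso_step(position, velocity, personal_best, global_best,
                  w=0.7, c1=1.5, c2=1.5, rng=random.random):
        new_vel, new_pos = [], []
        for x, v, p, g in zip(position, velocity, personal_best, global_best):
            v = w * v + c1 * rng() * (p - x) + c2 * rng() * (g - x)
            prob = 1.0 / (1.0 + math.exp(-v))   # sigmoid transfer function
            new_vel.append(v)
            new_pos.append(1 if rng() < prob else 0)
        return new_pos, new_vel
    ```

    In a lot sizing context each bit might encode whether an order is placed for a given item in a given period; a local search pass (the "iterative improvement" step) would then flip bits of the best particle while the objective keeps improving.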

  15. Neural networks for feedback feedforward nonlinear control systems.

    PubMed

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.

  16. Developing cross entropy genetic algorithm for solving Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP)

    NASA Astrophysics Data System (ADS)

    Paramestha, D. L.; Santosa, B.

    2018-04-01

    The Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP) is a combination of the Heterogeneous Fleet VRP and a packing problem well known as the Two-Dimensional Bin Packing Problem (BPP). The 2L-HFVRP is a Heterogeneous Fleet VRP in which customer demands are formed by sets of two-dimensional rectangular weighted items. These demands must be served from the depot by a heterogeneous fleet of vehicles with fixed and variable costs. The objective of the 2L-HFVRP is to minimize the total transportation cost. All formed routes must be consistent with the capacity and loading process of the vehicles. Sequential and unrestricted scenarios are considered in this paper. We propose a metaheuristic combining the Genetic Algorithm (GA) and the Cross Entropy (CE) method, named the Cross Entropy Genetic Algorithm (CEGA), to solve the 2L-HFVRP. The mutation concept from GA is used to speed up the CE algorithm in finding the optimal solution. The mutation mechanism is based on local improvement (2-opt, 1-1 Exchange, and 1-0 Exchange). The probability transition matrix mechanism of CE is used to avoid getting stuck in local optima. The effectiveness of CEGA was tested on benchmark instances of the 2L-HFVRP. The experimental results show competitive performance compared with other algorithms.
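    The 2-opt local improvement named above is a classical routing move: reverse the segment between two cut points whenever doing so shortens the tour. A minimal sketch, assuming a closed tour and a user-supplied distance function (names are illustrative, not from the paper):

    ```python
    # First-improvement 2-opt for a closed tour. dist(u, v) returns the
    # distance between nodes u and v; the route is treated as a cycle.

    def two_opt(route, dist):
        """Repeatedly apply improving 2-opt moves until none remains."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(route) - 1):
                for j in range(i + 1, len(route)):
                    # cost change of reversing the segment route[i:j+1]
                    a, b = route[i - 1], route[i]
                    c, d = route[j], route[(j + 1) % len(route)]
                    if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d):
                        route[i:j + 1] = reversed(route[i:j + 1])
                        improved = True
        return route
    ```

    Used as a mutation operator inside a GA/CE hybrid, such a move repairs crossing edges in offspring routes before they are evaluated.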

  17. Hybrid Exploration Agent Platform and Sensor Web System

    NASA Technical Reports Server (NTRS)

    Stoffel, A. William; VanSteenberg, Michael E.

    2004-01-01

    A sensor web to collect the scientific data needed to further exploration is a major and efficient asset to any exploration effort. This is true not only for lunar and planetary environments, but also for interplanetary and liquid environments. Such a system would also have myriad direct commercial spin-off applications. The Hybrid Exploration Agent Platform and Sensor Web (HEAP-SW), like the ANTS concept, is a sensor web concept, but conceptually and practically a very different system. HEAP-SW is applicable to any environment and a huge range of exploration tasks. It is a very robust, low-cost, high-return solution to a complex problem. All of the technology for initial development and implementation is currently available. The HEAP-SW consists of three major parts: the Hybrid Exploration Agent Platforms (HEAP), the Sensor Web (SW), and the immobile Data collection and Uplink units (DU). HEAP-SW as a whole refers to any group of mobile agents, or robots, where each robot is a mobile data collection unit that spends most of its time acting in concert with all other robots, the DUs in the web, and the HEAP-SW's overall Command and Control (CC) system. Each DU and robot is, however, capable of acting independently. The three parts of the HEAP-SW system are discussed in this paper. The goals of the HEAP-SW system are: 1) to maximize the amount of exploration-enhancing science data collected; 2) to minimize data loss due to system malfunctions; 3) to minimize or, possibly, eliminate the risk of total system failure; 4) to minimize the size, weight, and power requirements of each HEAP robot; 5) to minimize HEAP-SW system costs. The rest of this paper discusses how these goals are attained.

  18. Current Developments in Cost Accounting/Performance Measuring Systems for Implementing Advanced Manufacturing Technology

    DTIC Science & Technology

    1989-11-01

    incomplete accounting of benefits, few strategic projects will be adopted. Nanni, et al. [21], provide similar discussion regarding a benefit analysis in...management tends to ignore the fact that minimizing costs within departments does not guarantee minimization of overall costs (Nanni [21]). Sullivan, et...changes in the manufacturing environment. The author also remarks that these cost systems need to be modified or replaced by entirely new systems

  19. Optimization for Guitar Fingering on Single Notes

    NASA Astrophysics Data System (ADS)

    Itoh, Masaru; Hayashida, Takumi

    This paper presents an optimization method for guitar fingering. Fingering determines a unique combination of string, fret and finger corresponding to each note. The method aims to generate the best fingering pattern for guitar robots rather than for beginners, and it can be applied to any musical score of single notes. A fingering action can be decomposed into three motions: pressing a string, releasing a string, and moving the fretting hand. The cost of moving the hand is estimated on the basis of the Manhattan distance, which is the sum of the distances along the fret and string directions. The objective is to minimize the total fingering cost, subject to fret, string and finger constraints. As the sequence of notes on the score forms a line in time series, the optimization of guitar fingering can be resolved into a multistage decision problem. Dynamic programming is exceedingly effective for solving such a problem. A level concept is introduced into the rendering states so as to make multiple DP solutions lead to a unique one in the DP backward processes. For example, if two fingerings have the same cost at different states on a stage, the low position takes precedence over the high position, and the index finger over the middle finger.
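    The multistage decision structure can be sketched as a generic dynamic program over per-note candidate states (a simplified illustration; the states are hypothetical (string, fret) pairs and the transition cost is the Manhattan distance described above):

```python
def fingering_dp(stages, move_cost):
    """Multistage DP: stages is a list of candidate-state lists, one per
    note; move_cost(prev, cur) is the cost of moving between consecutive
    states. Returns (total_cost, path) minimizing the summed move cost."""
    # best[s] = (cheapest cost of a path ending in state s, that path)
    best = {s: (0, [s]) for s in stages[0]}
    for cands in stages[1:]:
        nxt = {}
        for cur in cands:
            cost, path = min(
                (best[prev][0] + move_cost(prev, cur), best[prev][1])
                for prev in best
            )
            nxt[cur] = (cost, path + [cur])
        best = nxt
    return min(best.values())
```

    The paper's level concept and finger constraints would enter as tie-breaking rules and as filters on the candidate states per note.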

  20. Design and development of a low-cost biphasic charge-balanced functional electric stimulator and its clinical validation.

    PubMed

    Shendkar, Chandrashekhar; Lenka, Prasanna K; Biswas, Abhishek; Kumar, Ratnesh; Mahadevappa, Manjunatha

    2015-10-01

    Functional electric stimulators that produce near-ideal, charge-balanced biphasic stimulation waveforms with interphase delay are considered safer and more efficacious than conventional stimulators. An indigenously designed, low-cost, portable FES device named InStim is developed. It features a charge-balanced biphasic single channel. The authors present the complete design, mathematical analysis of the circuit and the clinical evaluation of the device. The developed circuit was tested on stroke patients affected by foot drop problems. It was tested both under laboratory conditions and in clinical settings. The key building blocks of this circuit are low dropout regulators, a DC-DC voltage booster and a single high-power current source OP-Amp with current-limiting capabilities. This allows the device to deliver high-voltage, constant current, biphasic pulses without the use of a bulky step-up transformer. The advantages of the proposed design over the currently existing devices include improved safety features (zero DC current, current-limiting mechanism and safe pulses), waveform morphology that causes less muscle fatigue, cost-effectiveness and compact power-efficient circuit design with minimal components. The device is also capable of producing appropriate ankle dorsiflexion in patients having foot drop problems of various Medical Research Council scale grades.

  1. Active Learning Strategies for Phenotypic Profiling of High-Content Screens.

    PubMed

    Smith, Kevin; Horvath, Peter

    2014-06-01

    High-content screening is a powerful method to discover new drugs and carry out basic biological research. Increasingly, high-content screens have come to rely on supervised machine learning (SML) to perform automatic phenotypic classification as an essential step of the analysis. However, this comes at a cost, namely, the labeled examples required to train the predictive model. Classification performance increases with the number of labeled examples, and because labeling examples demands time from an expert, the training process represents a significant time investment. Active learning strategies attempt to overcome this bottleneck by presenting the most relevant examples to the annotator, thereby achieving high accuracy while minimizing the cost of obtaining labeled data. In this article, we investigate the impact of active learning on single-cell-based phenotype recognition, using data from three large-scale RNA interference high-content screens representing diverse phenotypic profiling problems. We consider several combinations of active learning strategies and popular SML methods. Our results show that active learning significantly reduces the time cost and can be used to reveal the same phenotypic targets identified using SML. We also identify combinations of active learning strategies and SML methods which perform better than others on the phenotypic profiling problems we studied. © 2014 Society for Laboratory Automation and Screening.
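    Pool-based uncertainty sampling, one of the simplest active learning strategies of the kind evaluated here, can be sketched as follows (an illustrative sketch, not the authors' implementation; `predict_proba` stands in for any probabilistic classifier such as an SML phenotype model):

```python
def uncertainty_sample(pool, predict_proba, k=1):
    """Least-confidence criterion: pick the k unlabeled examples whose
    most probable class has the lowest predicted probability."""
    return sorted(pool, key=lambda x: max(predict_proba(x)))[:k]

def active_learning_loop(pool, labels, predict_proba, fit, rounds=3):
    """Iteratively query the annotator (here simulated by a labels dict)
    for the most uncertain example and refit on the labeled set."""
    labeled = {}
    pool = list(pool)
    for _ in range(rounds):
        (query,) = uncertainty_sample(pool, predict_proba, k=1)
        labeled[query] = labels[query]   # annotator provides the label
        pool.remove(query)
        fit(labeled)                     # retrain on the enlarged set
    return labeled
```

    Variants differ mainly in the query criterion (margin, entropy, query-by-committee); the loop structure is the same.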

  2. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    NASA Astrophysics Data System (ADS)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to, the Pareto front. The ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.

  3. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method, combining simulated annealing (SA) and a genetic algorithm (GA), is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process to search for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
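    The SA main process described above can be sketched generically (a hedged skeleton, not the paper's implementation; here `cost` would be the total system cost of a candidate route subset and `neighbor` a GA-generated modification):

```python
import math
import random

def anneal(init, cost, neighbor, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Generic simulated annealing: always accept improving moves and
    accept worsening moves with probability exp(-delta/T), while the
    temperature T is cooled geometrically."""
    rng = random.Random(seed)
    cur, cur_cost = init, cost(init)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_cost = cost(cand)
        delta = cand_cost - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling
    return best, best_cost
```

    The acceptance of occasional worsening moves at high temperature is what lets SA escape local optima that a pure descent would get stuck in.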

  4. Information distribution in distributed microprocessor based flight control systems

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1977-01-01

    This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
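    The known-interval baseline in such problems, minimizing an expected quadratic cost for linear dynamics, reduces in the scalar case to the finite-horizon LQR backward Riccati recursion. A minimal sketch with hypothetical scalar values (not the F8-DFBW model or the variable-interval theory itself):

```python
def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon scalar LQR for x[k+1] = A*x[k] + B*u[k] with cost
    sum over k of Q*x[k]**2 + R*u[k]**2. Runs the Riccati recursion
    backward from the terminal stage and returns the per-stage feedback
    gains K[k] such that u[k] = -K[k]*x[k] is optimal."""
    P = Q                             # terminal cost-to-go coefficient
    gains = []
    for _ in range(horizon):
        K = B * P * A / (R + B * P * B)
        gains.append(K)
        P = Q + A * P * (A - B * K)   # Riccati update
    return gains[::-1]                # ordered first stage -> last
```

    With A = B = Q = R = 1 the recursion converges to the golden-ratio cost-to-go, giving a steady-state gain of 1/φ ≈ 0.618; the paper's contribution is to generalize this kind of recursion to stochastic update intervals.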

  5. A new multi-objective optimization model for preventive maintenance and replacement scheduling of multi-component systems

    NASA Astrophysics Data System (ADS)

    Moghaddam, Kamran S.; Usher, John S.

    2011-07-01

    In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which one of three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system that simultaneously minimizes the total cost and maximizes overall system reliability over the planning horizon. Because of the complex, combinatorial, and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The solution approach yields Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system. Such a modeling approach should be useful for maintenance planners and engineers tasked with developing recommended maintenance plans for complex systems of components.
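    Non-domination, which defines the Pareto optimal solutions produced by the two metaheuristics, can be checked with a small filter (an illustrative sketch; the objectives are assumed converted to minimization, e.g. total cost and negated reliability):

```python
def pareto_front(points):
    """Return the non-dominated points when every objective is minimized.
    p dominates q if p <= q in all objectives and p < q in at least one."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    In a GA or SA run this filter would be applied to the archive of visited schedules to extract the cost/reliability tradeoff curve presented to the planner.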

  6. Biokinetic model-based multi-objective optimization of Dunaliella tertiolecta cultivation using elitist non-dominated sorting genetic algorithm with inheritance.

    PubMed

    Sinha, Snehal K; Kumar, Mithilesh; Guria, Chandan; Kumar, Anup; Banerjee, Chiranjib

    2017-10-01

    Algal-model-based multi-objective optimization using an elitist non-dominated sorting genetic algorithm with inheritance was carried out for batch cultivation of Dunaliella tertiolecta using NPK fertilizer. Optimization problems involving two and three objective functions were solved. The objective functions are: maximization of algae biomass and lipid productivity with minimization of cultivation time and cost. Time-variant light intensity and temperature, along with NPK-fertilizer, NaCl and NaHCO3 loadings, are the important decision variables. An algal model involving Monod/Andrews adsorption kinetics and the Droop model with internal nutrient cell quota was used for the optimization studies. Sets of non-dominated (equally good) Pareto optimal solutions were obtained for the problems studied. It was observed that time-variant optimal light intensity and temperature trajectories, together with optimal NPK-fertilizer, NaCl and NaHCO3 concentrations, significantly improve biomass and lipid productivity while minimizing cultivation time and cost. The proposed optimization studies may help implement the control strategy in scale-up operation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A multi-period capacitated school location problem with modular equipment and closest assignment considerations

    NASA Astrophysics Data System (ADS)

    Delmelle, Eric M.; Thill, Jean-Claude; Peeters, Dominique; Thomas, Isabelle

    2014-07-01

    In rapidly growing urban areas, it is deemed vital to expand (or contract) an existing network of public facilities to meet anticipated changes in the level of demand. We present a multi-period capacitated median model for school network facility location planning that minimizes transportation costs, while functional costs are subject to a budget constraint. The proposed Vintage Flexible Capacitated Location Problem (ViFCLP) has the flexibility to account for a minimum school-age closing requirement, while the maximum capacity of each school can be adjusted by the addition of modular units. Non-closest assignments are controlled by the introduction of a parameter penalizing excess travel. The applicability of the ViFCLP is illustrated on a large US school system (Charlotte-Mecklenburg, North Carolina) where high school demand is expected to grow faster with distance to the city center. Higher school capacities and greater penalty on travel impedance parameter reduce the number of non-closest assignments. The proposed model is beneficial to policy makers seeking to improve the provision and efficiency of public services over a multi-period planning horizon.

  8. Superpixel Cut for Figure-Ground Image Segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Michael Ying; Rosenhahn, Bodo

    2016-06-01

    Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as a Min-Cut problem, for which we propose a novel cost function that simultaneously minimizes inter-class similarity and maximizes intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, requiring no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.

  9. Hybrid coronary revascularization in the era of drug-eluting stents.

    PubMed

    Murphy, Gavin J; Bryan, Alan J; Angelini, Gianni D

    2004-11-01

    Left internal mammary artery to left anterior descending coronary artery bypass grafting integrated with percutaneous coronary angioplasty (hybrid procedure) offers multivessel revascularization with minimal morbidity in high-risk patients. This is caused in part by the avoidance of cardiopulmonary bypass-related morbidity and manipulation of the aorta coupled with minimally invasive techniques. Hybrid revascularization is currently reserved for particularly high-risk patients or those with favorable anatomic variants however, largely because of the emergence of off-pump coronary artery bypass grafting, which permits more complete multivessel revascularization, with low morbidity in high-risk groups. The wider introduction of hybrid revascularization is limited chiefly by the high number of repeat interventions compared with off-pump coronary artery bypass grafting, which occurs because of the target vessel failure rate of percutaneous coronary intervention. Other demerits are the costs and logistic problems associated with performing two procedures with differing periprocedural management protocols. Recently, drug-eluting stents have reduced the need for repeat intervention after percutaneous coronary intervention, and this has raised the possibility that the results of hybrid revascularization may now equal or even better those of off-pump coronary artery bypass grafting. Although undoubtedly effective at reducing in-stent restenosis, drug-eluting stents will not address the issues of incomplete revascularization or the logistic problems associated with hybrid. Uncertainty regarding the long-term effectiveness of drug-eluting stents in many patients, as well as their high cost when compared with those of off-pump coronary artery bypass grafting surgery, also militates against the wider introduction of hybrid revascularization.

  10. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all experimental noise is present.
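    A gradient-type method of this general kind can be illustrated with a projected proximal-gradient (ISTA-style) iteration for min ½‖Ax − b‖² + λ‖x‖₁ subject to x ≥ 0. This sketch uses a plain ℓ₁ penalty as a stand-in for the authors' special penalty functional:

```python
def nonneg_ista(A, b, lam=0.1, step=0.05, iters=500):
    """Projected proximal-gradient iteration for
    min 0.5*||Ax - b||^2 + lam*||x||_1  subject to x >= 0.
    On the nonnegative orthant the prox step reduces to
    max(x - step*(grad + lam), 0), yielding nonnegative sparse iterates.
    The step size must satisfy step < 2 / ||A^T A||."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b and gradient g = A^T r of the data term
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(x[j] - step * (g[j] + lam), 0.0) for j in range(n)]
    return x
```

    The thresholding-and-projection step is what drives small coefficients exactly to zero, producing the nonnegative sparse structure the paper's penalty is designed to enforce.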

  11. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.

  12. Capacity planning of link restorable optical networks under dynamic change of traffic

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2005-11-01

    Future backbone networks will require full survivability and support for dynamic changes in traffic demands. Generalized Survivable Networks (GSN) were proposed to meet these challenges. GSN is fully survivable under dynamic traffic demand changes, so it offers a practical and guaranteed characterization framework for ASTN/ASON survivable network planning and bandwidth-on-demand resource allocation [4]. The basic idea of GSN is to incorporate the non-blocking network concept into survivable network models. In GSN, each network node must specify its I/O capacity bound, which serves as a constraint on any allowable traffic demand matrix. In this paper, we consider the following generic GSN network design problem: given the I/O bounds of each network node, find a routing scheme (and the corresponding rerouting scheme under failure) and the link capacity assignment (both working and spare) that minimize the cost, such that any traffic matrix consistent with the given I/O bounds can be feasibly routed and is single-fault tolerant under the link restoration scheme. We first show how the initial, infeasible formal mixed integer programming formulation can be transformed into a more tractable problem using the duality transformation of the linear program. Then we show how the problem can be simplified using the Lagrangian relaxation approach. Previous work outlined a two-phase approach for solving this problem, in which the first phase optimizes the working capacity assignment and the second phase optimizes the spare capacity assignment. In this paper, we present a jointly optimized framework for dimensioning the survivable optical network with the GSN model. Experimental results show that the jointly optimized GSN brings about an average of 3.8% cost savings compared with the separate two-phase approach. Finally, we perform a cost comparison and show that GSN can be deployed at a reasonable cost.

  13. Putting the Power of Configuration in the Hands of the Users

    NASA Technical Reports Server (NTRS)

    Al-Shihabi, Mary-Jo; Brown, Mark; Rigolini, Marianne

    2011-01-01

    The goal was to reduce the overall cost of human space flight while maintaining the most demanding standards for safety and mission success. In support of this goal, a project team was chartered to replace 18 legacy Space Shuttle nonconformance processes and systems with one fully integrated system. Problem Reporting and Corrective Action (PRACA) processes provide a closed-loop system for the identification, disposition, resolution, closure, and reporting of all Space Shuttle hardware/software problems. PRACA processes are integrated throughout the Space Shuttle organizational processes and are critical to assuring a safe and successful program. The primary project objectives were to: develop a fully integrated system that provides an automated workflow with electronic signatures; support multiple NASA programs and contracts with a single "system" architecture; and define standard processes, implement best practices, and minimize process variations.

  14. Industrial ecology: a philosophical introduction.

    PubMed Central

    Frosch, R A

    1992-01-01

    By analogy with natural ecosystems, an industrial ecology system, in addition to minimizing waste production in processes, would maximize the economical use of waste materials and of products at the ends of their lives as inputs to other processes and industries. This possibility can be made real only if a number of potential problems can be solved. These include the design of wastes along with the design of products and processes, the economics of such a system, the internalizing of the costs of waste disposal to the design and choice of processes and products, the effects of regulations intended for other purposes, and problems of responsibility and liability. The various stakeholders in making the effects of industry on the environment more benign will need to adopt some new behaviors if the possibility is to become real. PMID:11607255

  15. A dimension-wise analysis method for the structural-acoustic system with interval parameters

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong

    2017-04-01

    Interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations of these intrusive methods include overestimation or the interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is proposed to overcome these limitations. In this method, a sectional curve of the system response surface along each input dimension is first extracted, and its minimal and maximal points are identified based on its Legendre polynomial approximation. Two input vectors, i.e. the minimal and maximal input vectors, are then assembled dimension-wise from the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of the system response are computed by deterministic finite element analysis at these two input vectors. Two numerical examples are studied to demonstrate the effectiveness of the proposed method. They show that, compared to the interval and subinterval perturbation methods, the proposed method achieves better accuracy without much compromise on efficiency, especially for nonlinear problems with large interval parameters.

  16. Factors influencing analysis of complex cognitive tasks: a framework and example from industrial process control.

    PubMed

    Prietula, M J; Feltovich, P J; Marchak, F

    2000-01-01

    We propose that considering four categories of task factors can facilitate knowledge elicitation efforts in the analysis of complex cognitive tasks: materials, strategies, knowledge characteristics, and goals. A study was conducted to examine the effects of altering aspects of two of these task categories on problem-solving behavior across skill levels: materials and goals. Two versions of an applied engineering problem were presented to expert, intermediate, and novice participants. Participants were to minimize the cost of running a steam generation facility by adjusting steam generation levels and flows. One version was cast in the form of a dynamic, computer-based simulation that provided immediate feedback on flows, costs, and constraint violations, thus incorporating key variable dynamics of the problem context. The other version was cast as a static computer-based model, with no dynamic components, cost feedback, or constraint checking. Experts performed better than the other groups across material conditions, and, when required, the presentation of the goal assisted the experts more than the other groups. The static group generated richer protocols than the dynamic group, but the dynamic group solved the problem in significantly less time. Little effect of feedback was found for intermediates, and none for novices. We conclude that demonstrating differences in performance in this task requires different materials than explicating underlying knowledge that leads to performance. We also conclude that substantial knowledge is required to exploit the information yielded by the dynamic form of the task or the explicit solution goal. This simple model can help to identify the contextual factors that influence elicitation and specification of knowledge, which is essential in the engineering of joint cognitive systems.

  17. Financial impact of surgical technique in the treatment of acute appendicitis in children.

    PubMed

    Litz, Cristen; Danielson, Paul D; Gould, Jay; Chandler, Nicole M

    2013-09-01

    Appendicitis is the most common emergent problem encountered by pediatric surgeons. Driven by improved cosmetic outcomes, many surgeons are offering pediatric patients single-incision laparoscopic appendectomy. We sought to investigate the financial impact of different surgical approaches to appendectomy. A retrospective study of patients with acute appendicitis undergoing appendectomy from February 2010 to September 2011 was conducted. Based on surgeon preference, patients underwent open appendectomy (OA), laparoscopic appendectomy (LA), or single-incision laparoscopic appendectomy (SILA). Demographic information, surgical outcomes, surgical supply costs, and total direct costs were recorded. A total of 465 patients underwent appendectomy during the study. The mean age of all patients was 11.2 years (range, 1 to 18 years). There were no conversions in the LA or SILA groups. There was a significant difference among surgical technique in regard to surgical supply costs (OA $159 vs. LA $650 vs. SILA $814, P < 0.01) and total direct costs (OA $2129 vs. LA $2624 vs. SILA $2991, P < 0.01). In our institution, both multiport laparoscopic and SILA carry higher costs when compared with OA, largely as a result of the cost of disposable instrumentation. Cost efficiency should be considered by surgeons when undertaking a minimally invasive approach to appendectomy.

  18. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem is to assign crew members to a set of flights such that all flights are covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and the convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.

  19. Defect-free atomic array formation using the Hungarian matching algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Woojun; Kim, Hyosub; Ahn, Jaewook

    2017-05-01

    Deterministic loading of single atoms onto arbitrary two-dimensional lattice points has recently been demonstrated, where by dynamically controlling the optical-dipole potential, atoms from a probabilistically loaded lattice were relocated to target lattice points to form a zero-entropy atomic lattice. In this atom rearrangement, how to pair atoms with the target sites is a combinatorial optimization problem: brute-force methods search all possible combinations so the process is slow, while heuristic methods are time efficient but optimal solutions are not guaranteed. Here, we use the Hungarian matching algorithm as a fast and rigorous alternative to this problem of defect-free atomic lattice formation. Our approach utilizes an optimization cost function that restricts collision-free guiding paths so that atom loss due to collision is minimized during rearrangement. Experiments were performed with cold rubidium atoms that were trapped and guided with holographically controlled optical-dipole traps. The result of atom relocation from a partially filled 7×7 lattice to a 3×3 target lattice strongly agrees with the theoretical analysis: using the Hungarian algorithm minimizes the collisional and trespassing paths and results in improved performance, with over 50% higher success probability than the heuristic shortest-move method.
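    The pairing of loaded atoms with target sites is a linear assignment problem. The sketch below brute-forces it for clarity over a hypothetical cost matrix; the Hungarian algorithm used in the paper solves the same problem in polynomial time (available, for example, as `scipy.optimize.linear_sum_assignment`):

```python
from itertools import permutations

def min_cost_matching(cost):
    """Optimal atom-to-target assignment for a square cost matrix.
    Exhaustive search, O(n!) -- fine only for a handful of sites; the
    Hungarian algorithm solves the same problem in O(n^3)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = list(perm), c
    return best_perm, best_cost
```

    In the paper's setting, `cost[i][j]` would encode the guiding-path length from atom i to target site j, with collision-prone paths penalized so the optimal matching also minimizes atom loss.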

  20. Economics of gynecologic morcellation.

    PubMed

    Bortoletto, Pietro; Friedman, Jaclyn; Milad, Magdy P

    2018-02-01

    As the Food and Drug Administration raised concern over the power morcellator in 2014, the field has seen significant change, with patients and physicians questioning which procedure is safest and most cost-effective. The economic impact of these decisions is poorly understood. Multiple new technologies have been developed to allow surgeons to continue to afford patients the many benefits of minimally invasive surgery while minimizing the risks of power morcellation. At the same time, researchers have focused on the true benefits of the power morcellator from a safety and cost perspective, and consistently found that with careful patient selection, by preventing laparotomies, it can be a cost-effective tool. Changes since 2014 have resulted in new techniques and technologies to allow these minimally invasive procedures to continue to be offered in a safe manner. With this rapid change, physicians are altering their practice and patients are attempting to educate themselves to decide what is best for them. This evolution has allowed us to refocus on the cost implications of new developments, allowing stakeholders the opportunity to maximize patient safety and surgical outcomes while minimizing cost.

  1. Heuristic Approach for Configuration of a Grid-Tied Microgrid in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Rodriguez, Miguel A.

    The high electricity rates that the utility grid charges consumers in Puerto Rico have created an energy crisis around the island. This situation is due to the island's dependence on imported fossil fuels. To aid in the transition from fossil-fuel-based electricity to electricity from renewable and alternative sources, this research focuses on reducing the cost of electricity for Puerto Rico by finding the optimal microgrid configuration for a set number of consumers from the residential sector. The Hybrid Optimization Modeling for Energy Renewables (HOMER) software, developed by NREL, is used as an aid in determining the optimal microgrid setting. The problem is also approached via convex optimization; specifically, an objective function C(t) is formulated to be minimized. The cost function depends on the energy supplied by the grid, the energy supplied by renewable sources, the energy not supplied due to outages, and any excess energy sold to the utility, on a yearly basis. A term for the social cost of carbon is also included in the cost function. Once the microgrid settings from HOMER are obtained, they are evaluated via the optimized function C(t), which in turn assesses the true optimality of the microgrid configuration. A microgrid supplying 10 consumers is considered; each consumer can have a different microgrid configuration. The cost function C(t) is minimized, and the Net Present Value (NPV) and Cost of Electricity are computed for each configuration in order to assess its true feasibility. Results show that the greater the penetration of components into the microgrid, the greater the energy produced by the renewable sources, but also the greater the energy not supplied due to outages. The proposed method demonstrates that adding large amounts of renewable components to a microgrid does not necessarily translate into economic benefits for the consumer; in fact, there is a trade-off between cost and the addition of elements that must be considered. Any configuration with further increases in microgrid components will result in increased NPV and increased cost of electricity, rendering it infeasible.
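
A yearly cost function of the kind the abstract names can be sketched as a sum of the listed terms. All prices and energy quantities below are illustrative assumptions, not values from the study.

```python
# Hypothetical yearly cost for one consumer's microgrid, mirroring the
# terms named in the abstract: grid energy, renewable energy, energy not
# supplied during outages, excess energy sold back, and a social cost of
# carbon applied to grid energy. Prices ($/kWh) are assumptions.

def yearly_cost(e_grid, e_ren, e_not_supplied, e_sold,
                p_grid=0.22, p_ren=0.08, p_outage=1.50,
                p_sell=0.10, carbon_per_kwh=0.02):
    """Cost in $ for one year of operation (all energies in kWh)."""
    return (p_grid * e_grid              # energy purchased from the utility
            + p_ren * e_ren              # levelized cost of renewable energy
            + p_outage * e_not_supplied  # penalty for unserved energy
            - p_sell * e_sold            # revenue from excess energy sold
            + carbon_per_kwh * e_grid)   # social cost of carbon, grid energy

# A configuration with more renewables buys less grid energy but may leave
# more energy unserved during outages, the trade-off the abstract notes.
base = yearly_cost(e_grid=4000, e_ren=2000, e_not_supplied=50, e_sold=300)
```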

  2. Action-minimizing solutions of the one-dimensional N-body problem

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Zhang, Shiqing

    2018-05-01

    We supplement the following result of C. Marchal on the Newtonian N-body problem: a path minimizing the Lagrangian action functional between two given configurations is always a true (collision-free) solution when the dimension d of the physical space R^d satisfies d≥2. The focus of this paper is on the fixed-ends problem for the one-dimensional Newtonian N-body problem. We prove that a path minimizing the action functional in the set of paths joining two given configurations and keeping the same order for all time is always a true (collision-free) solution. Considering the one-dimensional N-body problem with equal masses, we prove that (i) collision instants are isolated for a path minimizing the action functional between two given configurations, (ii) if the particles at the two endpoints have the same order, then the path minimizing the action functional is always a true (collision-free) solution, and (iii) when the particles at the two endpoints have different orders, although any path must involve collisions, there are at most N! - 1 collisions for any action-minimizing path.
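
For reference, the Lagrangian action functional being minimized is the standard action of the Newtonian N-body problem, written here with gravitational constant G = 1, masses m_i, and positions x_i(t) in R^d:

```latex
\mathcal{A}(\gamma) \;=\; \int_{0}^{T} \Bigg( \sum_{i=1}^{N} \frac{m_i}{2}\,\lvert \dot{x}_i(t) \rvert^{2}
\;+\; \sum_{1 \le i < j \le N} \frac{m_i m_j}{\lvert x_i(t) - x_j(t) \rvert} \Bigg)\, dt
```

Minimizers are sought among paths γ = (x_1, …, x_N) joining the two prescribed endpoint configurations; the results above concern whether such minimizers avoid the collision set where some |x_i − x_j| = 0.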

  3. Single-shot full resolution region-of-interest (ROI) reconstruction in image plane digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Singh, Mandeep; Khare, Kedar

    2018-05-01

    We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term, which are minimized alternately in an adaptive fashion. The technique can provide full-pixel-resolution complex-valued images of the selected ROI, which is not possible with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from large field-of-view digital holographic microscopy data. The complementary phase information, in addition to the usual absorption information available from bright-field microscopy, can make the methodology attractive to the biomedical user community.
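
The cost structure described above, an L2-squared data-fit term plus a Huber penalty, can be sketched on a toy 1-D denoising problem. The operator, weights, and data below are illustrative placeholders (and a standard, not modified, Huber function is used), not the paper's image-plane holographic model.

```python
import numpy as np
from scipy.optimize import minimize

def huber(t, delta=0.1):
    """Standard Huber function: quadratic near 0, linear in the tails."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))

def cost(x, b, lam=0.5):
    data_fit = np.sum((x - b) ** 2)       # L2-norm-squared data fitting term
    penalty = np.sum(huber(np.diff(x)))   # edge-preserving roughness penalty
    return data_fit + lam * penalty

rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(20), np.ones(20)])   # a step edge
noisy = truth + 0.1 * rng.standard_normal(truth.size)

res = minimize(cost, noisy, args=(noisy,), method="L-BFGS-B")
denoised = res.x
```

The Huber penalty smooths noise while charging only linearly for the large jump at the edge, which is why it is preferred over a purely quadratic penalty when sharp features must survive.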

  4. Tolerance allocation for an electronic system using neural network/Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque

    2001-12-01

    The intense global competition to produce quality products at low cost has led many industrial nations to consider tolerance allocation as a key factor in reducing cost while remaining competitive. In practice, tolerance allocation is still mostly applied to mechanical systems. To study tolerances in the electronic domain, the Monte Carlo method is typically used, but it is very time-consuming. This paper reviews several methods (worst-case, statistical, and least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on a neural network with Monte Carlo simulation results as basis data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can be easily extended to a complex system of n components.
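
The Monte Carlo role described above, generating tolerance "basis data" for a circuit, can be sketched on the simplest possible example: yield estimation for a resistive voltage divider. Component values, tolerances, and the yield criterion are assumptions, not the paper's amplifier circuit.

```python
import numpy as np

# Monte Carlo tolerance analysis for a voltage divider: sample component
# values within their tolerance bands and estimate the fraction of
# circuits whose output stays within specification.
rng = np.random.default_rng(42)
n = 100_000
r1 = 1000 * (1 + 0.05 * rng.uniform(-1, 1, n))   # 1 kOhm +/- 5 %
r2 = 2000 * (1 + 0.05 * rng.uniform(-1, 1, n))   # 2 kOhm +/- 5 %

vout = 5.0 * r2 / (r1 + r2)        # divider output for a 5 V supply
nominal = 5.0 * 2000 / 3000        # nominal output, about 3.333 V

# Yield: fraction of sampled circuits within +/- 2 % of nominal output.
yield_frac = np.mean(np.abs(vout - nominal) / nominal < 0.02)
```

Samples of (r1, r2, vout) like these are the kind of data on which a neural network surrogate can be trained, replacing repeated slow Monte Carlo runs inside the cost-optimization loop.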

  5. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    USGS Publications Warehouse

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
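
A minimal version of this estimation can be written as a linear program: find the transport plan between location counts at two times that minimizes total squared Euclidean distance, then normalize rows into transition probabilities. The locations and counts below are hypothetical, not the marked-bird data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical coordinates and counts at three locations, two time points.
locs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
n_t0 = np.array([30, 50, 20])          # individuals per location at time 0
n_t1 = np.array([40, 20, 40])          # individuals per location at time 1

m = len(locs)
# Squared Euclidean distance between every pair of locations, flattened.
cost = ((locs[:, None, :] - locs[None, :, :]) ** 2).sum(-1).ravel()

# Equality constraints on the flattened plan: row sums equal counts at
# time 0, column sums equal counts at time 1.
A_eq = np.zeros((2 * m, m * m))
for i in range(m):
    A_eq[i, i * m:(i + 1) * m] = 1     # row-sum constraint for location i
    A_eq[m + i, i::m] = 1              # column-sum constraint for location i
b_eq = np.concatenate([n_t0, n_t1])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
plan = res.x.reshape(m, m)
# Transition probabilities: normalize each origin row of the plan.
trans_prob = plan / plan.sum(axis=1, keepdims=True)
```

The squared-distance cost is the choice the abstract reports as matching maximum-likelihood estimates well; swapping in another cost only changes the `cost` vector.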

  6. Integrated Design and Implementation of Embedded Control Systems with Scilab

    PubMed Central

    Ma, Longhua; Xia, Feng; Peng, Zhe

    2008-01-01

    Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize the cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. The drivers for interfacing Scilab with several communication protocols including serial, Ethernet, and Modbus are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost. PMID:27873827

  7. Integrated Design and Implementation of Embedded Control Systems with Scilab.

    PubMed

    Ma, Longhua; Xia, Feng; Peng, Zhe

    2008-09-05

    Embedded systems are playing an increasingly important role in control engineering. Despite their popularity, embedded systems are generally subject to resource constraints and it is therefore difficult to build complex control systems on embedded platforms. Traditionally, the design and implementation of control systems are often separated, which causes the development of embedded control systems to be highly time-consuming and costly. To address these problems, this paper presents a low-cost, reusable, reconfigurable platform that enables integrated design and implementation of embedded control systems. To minimize the cost, free and open source software packages such as Linux and Scilab are used. Scilab is ported to the embedded ARM-Linux system. The drivers for interfacing Scilab with several communication protocols including serial, Ethernet, and Modbus are developed. Experiments are conducted to test the developed embedded platform. The use of Scilab enables implementation of complex control algorithms on embedded platforms. With the developed platform, it is possible to perform all phases of the development cycle of embedded control systems in a unified environment, thus facilitating the reduction of development time and cost.

  8. Cost-effectiveness of minimally invasive sacroiliac joint fusion.

    PubMed

    Cher, Daniel J; Frasco, Melissa A; Arnold, Renée Jg; Polly, David W

    2016-01-01

    Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. To determine the cost-effectiveness of minimally invasive SIJ fusion. Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162-$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption.
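
The headline figure above is an incremental cost-effectiveness ratio (ICER). A minimal sketch of the computation follows; only the QALY gain (0.74) and the resulting ratio (about $13,313/QALY) come from the abstract, while the absolute incremental cost is back-calculated for illustration.

```python
# ICER: incremental cost divided by incremental effectiveness (QALYs).

def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost per QALY gained by the new strategy."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

delta_qaly = 0.74                    # QALYs gained by SIJ fusion (abstract)
delta_cost = 13313 * delta_qaly      # implied incremental cost (illustrative)
ratio = icer(delta_cost, 0.0, delta_qaly, 0.0)
```

A strategy is judged against a willingness-to-pay threshold (e.g. the <$26,000/QALY bound reported in the sensitivity analyses) by comparing this ratio to the threshold.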

  9. Cost-effectiveness of minimally invasive sacroiliac joint fusion

    PubMed Central

    Cher, Daniel J; Frasco, Melissa A; Arnold, Renée JG; Polly, David W

    2016-01-01

    Background Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. Objective To determine the cost-effectiveness of minimally invasive SIJ fusion. Methods Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. Results SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162–$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Conclusion Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption. PMID:26719717

  10. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  11. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    PubMed

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  12. Optimal Output Trajectory Redesign for Invertible Systems

    NASA Technical Reports Server (NTRS)

    Devasia, S.

    1996-01-01

    Given a desired output trajectory, inversion-based techniques find the input-state trajectories required to exactly track the output. These inversion-based techniques have been successfully applied to the endpoint tracking control of multijoint flexible manipulators and to aircraft control. The specified output trajectory uniquely determines the required input and state trajectories that are found through inversion. These input-state trajectories exactly track the desired output; however, they might not meet acceptable performance requirements. For example, during slewing maneuvers of flexible structures, the structural deformations, which depend on the required state trajectories, may be unacceptably large. Further, the required inputs might cause actuator saturation during an exact tracking maneuver, for example, in the flight control of conventional takeoff and landing aircraft. In such situations, a compromise is desired between the tracking requirement and other goals such as reduction of internal vibrations and prevention of actuator saturation; the desired output trajectory needs to be redesigned. Here, we pose the trajectory redesign problem as an optimization of a general quadratic cost function and solve it in the context of linear systems. The solution is obtained as an off-line prefilter of the desired output trajectory. An advantage of our technique is that the prefilter is independent of the particular trajectory; it can therefore be precomputed, which is a major advantage over other optimization approaches. Previous works have addressed the issue of preshaping inputs to minimize residual and in-maneuver vibrations for flexible structures, with the command preshaping computed off-line. Minimization of quadratic cost functions has also previously been used to preshape command inputs for disturbance rejection. All of these approaches are applicable when the inputs to the system are known a priori.
Typically, outputs (not inputs) are specified in tracking problems, and hence the input trajectories have to be computed. The inputs to the system are, however, difficult to determine for non-minimum phase systems like flexible structures. One approach to this problem is to (1) choose a tracking controller (the desired output trajectory is now an input to the closed-loop system) and (2) redesign this input to the closed-loop system, thus effectively performing output redesign. These redesigns are, however, dependent on the choice of the tracking controller, so the controller optimization and trajectory redesign problems become coupled; this coupled optimization is still an open problem. In contrast, we decouple the trajectory redesign problem from the choice of feedback-based tracking controller. It is noted that our approach remains valid when a particular tracking controller is chosen. In addition, the formulation of our problem allows not only for the minimization of residual vibration, as in available techniques, but also for the optimal reduction of vibrations during the maneuver, e.g., in the attitude control of flexible spacecraft. We begin by formulating the optimal output trajectory redesign problem and then solve it in the context of general linear systems. This theory is then applied to an example flexible structure, and simulation results are provided.
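
The quadratic-cost, trajectory-independent-prefilter idea can be sketched in a few lines: trade off closeness to the desired output against a curvature penalty (a crude stand-in for in-maneuver vibration). The weights and penalty operator below are illustrative, not the paper's system-dependent cost.

```python
import numpy as np

n = 100
t = np.linspace(0.0, 1.0, n)
y_d = (t > 0.5).astype(float)          # an aggressive step command

D2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference (curvature) operator
rho = 1e-4                             # tracking-vs-smoothness weight

def J(y):
    """Quadratic cost: tracking error plus weighted curvature energy."""
    return np.sum((y - y_d) ** 2) + rho * np.sum((D2 @ y) ** 2)

# Because J is quadratic, the minimizer is a linear map applied to y_d:
# y* = (I + rho * D2^T D2)^{-1} y_d. The matrix depends only on the
# weights, not on y_d, so it can be precomputed as an off-line prefilter.
prefilter = np.linalg.inv(np.eye(n) + rho * D2.T @ D2)
y_star = prefilter @ y_d
```

The redesigned trajectory `y_star` rounds off the step just enough to lower the combined cost, mirroring the compromise between exact tracking and vibration reduction discussed above.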

  13. Does Treatment Adherence Therapy reduce expense of healthcare use in patients with psychotic disorders? Cost-minimization analysis in a randomized controlled trial.

    PubMed

    Gilden, J; Staring, A B P; van der Gaag, M; Mulder, C L

    2011-12-01

    Adherence interventions in psychotic disorders have produced mixed results. Even when an intervention improved adherence, benefits to patients were unclear. Treatment Adherence Therapy (TAT) also improved adherence relative to Treatment As Usual (TAU), but it had no effects on symptoms or quality of life. TAT may or may not reduce healthcare costs. To determine whether TAT reduces the use of healthcare resources, and thus healthcare costs. Randomized controlled trial of TAT versus TAU with 98 patients. Interviews were conducted at baseline (T0), six months later, when TAT had been completed (T1), and at six-month follow-up (T2). We used admission data and part of the Trimbos/iMTA questionnaire for Costs associated with Psychiatric Illness (TiC-P). We compared total costs in the TAT group with those in the control group using multivariate analysis of covariance. TAT did not significantly reduce total costs. In the TAT group, the mean one-year health-treatment cost per patient (including TAT sessions) was € 23 003.64 (SD=19 317.95), whereas in the TAU group it was € 22 489.88 (SD=25 224.57) (F(1)=.652, p=.42). However, there were two significant differences at item level, both with higher costs for the TAU group: psychiatric nurse contacts and legal proceedings for court-ordered admissions. Because TAT did not reduce total healthcare costs, it did not contribute to cost-minimization. Its benefits are therefore questionable. No other adherence intervention has included analysis of cost-effectiveness or cost-minimization. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Minimization of bovine tuberculosis control costs in US dairy herds

    PubMed Central

    Smith, Rebecca L.; Tauer, Loren W.; Schukken, Ynte H.; Lu, Zhao; Grohn, Yrjo T.

    2013-01-01

    The objective of this study was to minimize the cost of controlling an isolated bovine tuberculosis (bTB) outbreak in a US dairy herd, using a stochastic simulation model of bTB with economic and biological layers. A model optimizer produced a control program that required 2-month testing intervals (TI) with 2 negative whole-herd tests to leave quarantine. This control program minimized both farm and government costs. In all cases, test-and-removal costs were lower than depopulation costs, although the variability in costs increased for farms with high holding costs or small herd sizes. Increasing herd size significantly increased costs for both the farm and the government, while increasing indemnity payments significantly decreased farm costs and increasing testing costs significantly increased government costs. Based on the results of this model, we recommend 2-month testing intervals for herds after an outbreak of bovine tuberculosis, with 2 negative whole-herd tests being sufficient to lift quarantine. A prolonged test-and-cull program may cause a state to lose its bTB-free status during the testing period. When the cost of losing the bTB-free status is greater than $1.4 million, depopulation of farms could be preferred over a test-and-cull program. PMID:23953679

  15. Reducing robotic prostatectomy costs by minimizing instrumentation.

    PubMed

    Delto, Joan C; Wayne, George; Yanes, Rafael; Nieder, Alan M; Bhandari, Akshay

    2015-05-01

    Since the introduction of robotic surgery for radical prostatectomy, the cost-benefit of this technology has been under scrutiny. While robotic surgery professes to offer multiple advantages, including reduced blood loss, reduced length of stay, and expedient recovery, the associated costs tend to be significantly higher, secondary to the fixed cost of the robot as well as the variable costs associated with instrumentation. This study provides a simple framework for the careful consideration of costs during the selection of equipment and materials. Two experienced robotic surgeons at our institution as well as several at other institutions were queried about their preferred instrument usage for robot-assisted prostatectomy. Costs of instruments and materials were obtained and clustered by type and price. A minimal set of instruments was identified and compared against alternative instrumentation. A retrospective review of 125 patients who underwent robotically assisted laparoscopic prostatectomy for prostate cancer at our institution was performed to compare estimated blood loss (EBL), operative times, and intraoperative complications for both surgeons. Our surgeons now conceptualize instrument costs as proportional changes to the cost of the baseline minimal combination. Robotic costs at our institution were reduced by eliminating an energy source like the Ligasure or vessel sealer, exploiting instrument versatility, and utilizing inexpensive tools such as Hem-o-lok clips. Such modifications reduced surgeon 1's instrumentation cost to ∼40% less than surgeon 2's and up to 32% less than that of surgeons at other institutions. Surgeon 1's combination may not be optimal for all robotic surgeons; however, it establishes a minimally viable toolbox for our institution through a rudimentary cost analysis. 
A similar analysis may aid others in better conceptualizing long-term costs not as nominal, often unwieldy prices, but as percent changes in spending. With regard to intraoperative outcomes, the use of a minimally viable toolbox did not result in increased EBL, operative time, or intraoperative complications. Simple changes to surgeon preference and creative utilization of instruments can eliminate 40% of costs incurred on robotic instruments alone. Moreover, EBL, operative times, and intraoperative complications are not compromised as a result of cost reduction. Our process of identifying such improvements is straightforward and may be replicated by other robotic surgeons. Further prospective multicenter trials should be initiated to assess other methods of cost reduction.

  16. International Space Station (ISS) Advanced Recycle Filter Tank Assembly (ARFTA)

    NASA Technical Reports Server (NTRS)

    Nasrullah, Mohammed K.

    2013-01-01

    The International Space Station (ISS) Recycle Filter Tank Assembly (RFTA) provides the following three primary functions for the Urine Processor Assembly (UPA): volume for concentrating/filtering pretreated urine, filtration of product distillate, and filtration of the Pressure Control and Pump Assembly (PCPA) effluent. The RFTAs, under nominal operations, are to be replaced every 30 days. This poses a significant logistical resupply problem, as well as costs in upmass and new tank purchases. In addition, it requires a significant amount of crew time. To address and resolve these challenges, NASA required Boeing to develop a design that eliminated the logistics and upmass issues and minimized recurring costs. Boeing developed the Advanced Recycle Filter Tank Assembly (ARFTA), which allows the tanks to be emptied on-orbit into disposable tanks, eliminating the need to bring the fully loaded tanks to Earth for refurbishment and relaunch and thereby eliminating several hundred pounds of upmass and its associated costs. The ARFTA will replace the RFTA by providing the same functionality but with reduced resupply requirements.

  17. Removal of heavy metals from emerging cellulosic low-cost adsorbents: a review

    NASA Astrophysics Data System (ADS)

    Malik, D. S.; Jain, C. K.; Yadav, Anuj K.

    2017-09-01

    Heavy metal pollution is a major problem in the environment. The impact of toxic metal ions can be minimized by different technologies, viz., chemical precipitation, membrane filtration, oxidation, reverse osmosis, flotation and adsorption. Among them, adsorption has been found to be very efficient and common, as it is effective even at low metal concentrations and is economically feasible. Cellulosic materials are low-cost, widely used, and very promising for the future. They are available in abundant quantity, are cheap and have little other economic value. Different forms of cellulosic materials are used as adsorbents, such as fibers, leaves, roots, shells, barks, husks, stems and seeds, as well as other plant parts. Natural and modified cellulosic materials are used to remove various metals from water and wastewater. In this review paper, the most common and recent materials are reviewed as cellulosic low-cost adsorbents. The elemental properties of cellulosic materials are also discussed, along with their cellulose, hemicellulose and lignin contents.

  18. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA), and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements, as well as the existing features of the original version of DeMAID, are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  19. An integrated production-inventory model for the single-vendor two-buyer problem with partial backorder, stochastic demand, and service level constraints

    NASA Astrophysics Data System (ADS)

    Arfawi Kurdhi, Nughthoh; Adi Diwiryo, Toray; Sutanto

    2016-02-01

    This paper presents an integrated single-vendor two-buyer production-inventory model with stochastic demand and service level constraints. Shortages are permitted in the model and are partially backordered, with the remainder lost. The lead time demand is assumed to follow a normal distribution, and the lead time can be reduced by adding a crashing cost. The lead time and ordering cost reductions are interdependent through a logarithmic functional relationship. A service level constraint corresponding to each buyer is included in the model in order to limit the level of inventory shortages. The purpose of this research is to minimize the joint total cost of the inventory model by finding the optimal order quantity, safety stock, lead time, and number of lots delivered in one production run. The optimal production-inventory policy obtained by the Lagrange method is shaped to account for the service level restrictions. Finally, a numerical example and effects of the key parameters are presented to illustrate the results of the proposed model.
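
As a toy illustration of the "minimize joint total cost" objective, a heavily simplified deterministic cost (EOQ-style ordering and holding terms plus a lead-time crashing term) can be searched directly. This is a sketch under assumed parameter values, not the paper's stochastic two-buyer model with backorders and service-level constraints.

```python
# Drastically simplified annual joint cost: ordering, cycle-stock holding,
# and a lead-time compression term. All parameter values are assumptions.

def joint_cost(q, lead_time, demand=1000, order_cost=50.0,
               hold_cost=2.0, crash_cost=30.0):
    """Annual cost for order quantity q at a fixed lead time (weeks)."""
    return (demand / q * order_cost      # ordering cost per year
            + q / 2 * hold_cost          # average holding cost
            + crash_cost / lead_time)    # cost of compressing the lead time

# Coarse grid search for the cost-minimizing order quantity.
best_q = min(range(50, 501, 10), key=lambda q: joint_cost(q, lead_time=4))
```

In the full model the search runs jointly over order quantity, safety stock, lead time, and lot count, with Lagrange multipliers enforcing the per-buyer service levels; the grid search here only shows the shape of the objective.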

  20. Spacelab cost reduction alternatives study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Alternative approaches to payload operations planning and control and flight crew training are defined for Spacelab payloads with the goals of lowering FY77 and FY78 costs for new starts, lowering costs to achieve Spacelab operational capability, and minimizing the cost per Spacelab flight. These alternatives attempt to minimize duplication of hardware, software, and personnel, and the investment in supporting facilities and equipment. Of particular importance is the possible reduction of equipment, software, and manpower resources such as computational systems, trainers, and simulators.
