Sample records for optimal processing costs

  1. Affordable Design: A Methodology to Implement Process-Based Manufacturing Cost into the Traditional Performance-Focused Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Bao, Han P.; Samareh, J. A.

    2000-01-01

    The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.

  2. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  3. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, which is solved using OptQuest of Oracle Crystal Ball software. The optimization results indicate that the model can be used to find cutting parameters that minimize both the production cost and the environmental impact.
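
    A bi-objective model of this kind is commonly handled by scalarization. The sketch below is a minimal weighted-sum illustration in Python; the cost and eco-indicator expressions and the roughness-type constraint are hypothetical placeholders, and only the structure (two objectives, bounds on cutting speed and feed rate, a nonlinear constraint) mirrors the abstract, not the paper's actual models.

    ```python
    # Weighted-sum scalarization of a bi-objective cutting-parameter problem.
    # All model expressions are illustrative, not the paper's fitted models.
    from scipy.optimize import minimize

    def production_cost(x):
        v, f = x                              # cutting speed (m/min), feed rate (mm/rev)
        t_machining = 1000.0 / (v * f)        # processing time falls as v*f rises
        return 0.5 * t_machining + 0.002 * v  # time cost plus hypothetical tool-wear term

    def eco_impact(x):
        v, f = x
        return 0.01 * v / f                   # hypothetical eco-indicator-99-style score

    def scalarized(x, w=0.5):                 # w trades cost against environmental impact
        return w * production_cost(x) + (1.0 - w) * eco_impact(x)

    # Roughness-type inequality constraint: feasible when the value is >= 0.
    cons = [{"type": "ineq", "fun": lambda x: 0.25 - 1.2 * x[1] ** 2}]
    bounds = [(50.0, 300.0), (0.05, 0.5)]     # v in m/min, f in mm/rev

    res = minimize(scalarized, x0=[100.0, 0.2], bounds=bounds, constraints=cons)
    print("optimal (v, f):", res.x, "scalarized objective:", res.fun)
    ```

    Sweeping the weight w from 0 to 1 traces an approximation of the Pareto front between the two objectives.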

  4. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  5. Technical and economical optimization of a full-scale poultry manure treatment process: total ammonia nitrogen balance.

    PubMed

    Alejo-Alvarez, Luz; Guzmán-Fierro, Víctor; Fernández, Katherina; Roeckel, Marlene

    2016-11-01

    A full-scale process for the treatment of 80 tons per day of poultry manure was designed and optimized. A total ammonia nitrogen (TAN) balance was performed at steady state, considering the stoichiometry and the kinetic data from anaerobic digestion (AD) and anaerobic ammonia oxidation. The equipment, reactor design, investment costs, and operational costs were considered. The volume and cost objective functions optimized the process in terms of three variables: the water recycle ratio, the protein conversion during AD, and the TAN conversion in the process. The processes were compared with and without water recycle; savings of 70% and 43% in the annual fresh water consumption and the heating costs, respectively, were achieved. The optimal process complies with the Chilean environmental legislation limit of 0.05 g total nitrogen/L.

  6. Fuzzy multi-objective optimization case study based on an anaerobic co-digestion process of food waste leachate and piggery wastewater.

    PubMed

    Choi, Angelo Earvin Sy; Park, Hung Suck

    2018-06-20

    This paper presents the development and evaluation of fuzzy multi-objective optimization for decision-making, applied to the process optimization of the anaerobic digestion (AD) process. The operating cost criterion, a fundamental research gap in previous AD analyses, was integrated into the case study in this research. The mixing ratio of food waste leachate (FWL) and piggery wastewater (PWW) and the calcium carbonate (CaCO3) and sodium chloride (NaCl) concentrations were optimized to enhance methane production while minimizing operating cost. The results indicated a maximum of 63.3% satisfaction for both methane production and operating cost under the following optimal conditions: mixing ratio (FWL:PWW) of 1.4, CaCO3 of 2970.5 mg/L, and NaCl of 2.7 g/L. In multi-objective optimization, the specific methane yield (SMY) was 239.0 mL CH4/g VS added, while 41.2% volatile solids reduction (VSR) was obtained at an operating cost of 56.9 US$/ton. By comparison, in a previous optimization study that utilized response surface methodology, the SMY, VSR, and operating cost of the AD process were 310 mL/g, 54%, and 83.2 US$/ton, respectively. The results from multi-objective fuzzy optimization demonstrate the potential of this technique for practical decision-making in AD process optimization.
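
    One common way to formalize the "maximum joint satisfaction" reported above is a max-min (Zimmermann-type) fuzzy formulation: each objective gets a membership function between its worst and best anticipated values, and the smaller of the memberships is maximized. The Python sketch below illustrates this with hypothetical response surfaces standing in for the paper's fitted models; the variable ranges and membership anchors are assumptions.

    ```python
    # Max-min fuzzy satisfaction over hypothetical AD response surfaces.
    import numpy as np
    from scipy.optimize import minimize

    def methane_yield(x):            # x = (mixing ratio, CaCO3 g/L, NaCl g/L); invented model
        r, ca, na = x
        return 300 - 20 * (r - 1.4) ** 2 - 5 * (ca - 3.0) ** 2 - 4 * (na - 2.7) ** 2

    def operating_cost(x):           # invented cost surface, US$/ton
        r, ca, na = x
        return 40 + 6 * ca + 3 * na + 2 * r

    def membership(value, worst, best):
        # Linear membership: 0 at the worst value, 1 at the best value.
        return np.clip((value - worst) / (best - worst), 0.0, 1.0)

    def neg_min_satisfaction(x):
        mu_yield = membership(methane_yield(x), worst=150, best=310)
        mu_cost = membership(operating_cost(x), worst=90, best=40)  # lower cost is better
        return -min(mu_yield, mu_cost)   # maximize the smaller satisfaction

    res = minimize(neg_min_satisfaction, x0=[1.0, 2.0, 2.0],
                   bounds=[(0.5, 3.0), (0.5, 5.0), (0.5, 5.0)])
    print("optimum:", res.x, "joint satisfaction:", -res.fun)
    ```

    The reported 63.3% satisfaction corresponds to the optimal value of this min-of-memberships objective in the paper's (unpublished here) fitted models.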

  7. Advanced in-duct sorbent injection for SO2 control. Topical report No. 2, Subtask 2.2: Design optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenhoover, W.A.; Stouffer, M.R.; Withum, J.A.

    1994-12-01

    The objective of this research project is to develop second-generation duct injection technology as a cost-effective SO2 control option for the 1990 Clean Air Act Amendments. Research is focused on the Advanced Coolside process, which has shown the potential for achieving the performance targets of 90% SO2 removal and 60% sorbent utilization. In Subtask 2.2, Design Optimization, process improvement was sought by optimizing sorbent recycle and by optimizing process equipment for reduced cost. The pilot plant recycle testing showed that 90% SO2 removal could be achieved at sorbent utilizations up to 75%. This testing also showed that the Advanced Coolside process has the potential to achieve very high removal efficiency (90 to greater than 99%). Two alternative contactor designs were developed, tested and optimized through pilot plant testing; the improved designs will reduce process costs significantly, while maintaining operability and performance essential to the process. Also, sorbent recycle handling equipment was optimized to reduce cost.

  8. An effective and optimal quality control approach for green energy manufacturing using design of experiments framework and evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Saavedra, Juan Alejandro

    Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to decision-making processes for reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability, and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation does not need to be performed every time the process yield changes. This cost estimation model is then used for the QC strategy optimization. To propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified three significant factors; it also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of several candidate solutions in order to obtain feasible optimal ones. The GA evaluates possible solutions based on cost, cycle time, reworkability, and rework benefit. Because this is a multi-objective optimization problem, it provides several possible solutions, presented as chromosomes that state the number and location of the rework stations. The user analyzes these solutions and selects one by deciding which of the four factors is most important for the product being manufactured or the company's objective. The major contribution of this study is a methodology for identifying an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time, and maximize reworkability and rework benefit.

  9. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process.

    PubMed

    Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S; Jazar, Reza N; Khayyam, Hamid

    2018-03-05

    To produce high quality and low cost carbon fiber-based composites, optimization of the carbon fiber production process and the resulting fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared, with a limited dataset obtained to predict a physical property (density) of oxidative stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption in the process. The case study can be beneficial to chemical industries involving carbon fiber manufacturing, for assessing and optimizing different stabilization process conditions at large.
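
    As a rough illustration of such an SVR-vs-ANN comparison on limited data, the Python sketch below fits both models to a small synthetic dataset standing in for the stabilization-oven measurements; the input variables, data-generating function, and hyperparameters are all assumptions, not the paper's.

    ```python
    # Compare SVR and a small neural network on synthetic stabilization data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    # Hypothetical zone conditions: temperature (C), residence time (min), tension (cN/tex)
    X = rng.uniform([220, 10, 0.5], [260, 40, 2.0], size=(120, 3))
    # Invented target: OPF density (g/cm^3) rising with temperature and time, plus noise
    y = 1.18 + 0.004 * (X[:, 0] - 220) + 0.002 * X[:, 1] + rng.normal(0, 0.005, 120)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    models = [("SVR", make_pipeline(StandardScaler(), SVR(C=10.0))),
              ("ANN", make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(16,),
                                                 max_iter=5000, random_state=0)))]
    for name, model in models:
        model.fit(X_tr, y_tr)
        print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
    ```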

  10. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process

    PubMed Central

    Golkarnarenji, Gelayol; Naebe, Minoo; Badii, Khashayar; Milani, Abbas S.; Jazar, Reza N.; Khayyam, Hamid

    2018-01-01

    To produce high quality and low cost carbon fiber-based composites, optimization of the carbon fiber production process and the resulting fiber properties is one of the main keys. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared, with a limited dataset obtained to predict a physical property (density) of oxidative stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption in the process. The case study can be beneficial to chemical industries involving carbon fiber manufacturing, for assessing and optimizing different stabilization process conditions at large. PMID:29510592

  11. Simulative design and process optimization of the two-stage stretch-blow molding process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-22

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  12. Simulative design and process optimization of the two-stage stretch-blow molding process

    NASA Astrophysics Data System (ADS)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-01

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  13. Biodiesel production process from microalgae oil by waste heat recovery and process integration.

    PubMed

    Song, Chunfeng; Chen, Guanyi; Ji, Na; Liu, Qingling; Kansha, Yasuki; Tsutsumi, Atsushi

    2015-10-01

    In this work, the optimization of a microalgae oil (MO) based biodiesel production process is carried out by waste heat recovery and process integration. The exergy analysis of each heat exchanger revealed efficient heat coupling between hot and cold streams, thus minimizing the total exergy destruction. Simulation results showed that the unit production cost of the optimized process is $0.592/L biodiesel, and approximately $0.172/L biodiesel can be saved by heat integration. Although the capital cost of the optimized biodiesel production process increased by 32.5% and 23.5% compared to the reference cases, the operational cost can be reduced by approximately 22.5% and 41.6%.

  14. Two-step optimization of pressure and recovery of reverse osmosis desalination process.

    PubMed

    Liang, Shuang; Liu, Cui; Song, Lianfa

    2009-05-01

    Driving pressure and recovery are two primary design variables of a reverse osmosis (RO) process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter that collectively reflects membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and the costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling are discussed.
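
    The paper's concise equations are not reproduced here; instead, the Python sketch below shows the shape of the underlying trade-off numerically, with an illustrative cost model: pumping energy grows with net driving pressure (NDP) and with feed volume per unit permeate, membrane area falls with flux (taken proportional to NDP), and pretreatment scales with recovery. All parameter values and functional forms are assumptions.

    ```python
    # Two-variable cost minimization over NDP and recovery; illustrative model only.
    from scipy.optimize import minimize

    ELEC = 0.08           # $/kWh, assumed electricity price
    MEMBRANE_INDEX = 2.0  # lumped membrane price index (assumed)
    PI0 = 2.5             # initial osmotic pressure, bar (brackish-water-like)
    PRETREAT = 0.10       # $/m^3 of raw feed (assumed)

    def total_cost(x):
        ndp, r = x                             # net driving pressure (bar), recovery (-)
        feed_pressure = ndp + PI0 / (1 - r)    # must overcome concentrated osmotic pressure
        energy = 0.0278 * feed_pressure / r    # kWh per m^3 permeate (1 bar*m^3 = 0.0278 kWh)
        membrane = MEMBRANE_INDEX / ndp        # flux ~ NDP, so required area ~ 1/NDP
        pretreatment = PRETREAT / r            # feed volume per m^3 of permeate is 1/r
        return ELEC * energy + membrane + pretreatment

    res = minimize(total_cost, x0=[10.0, 0.5], bounds=[(1.0, 60.0), (0.2, 0.85)])
    print("optimal NDP (bar), recovery:", res.x, "cost ($/m^3):", res.fun)
    ```

    Even in this toy model, the optimal NDP depends only on the electricity price and the membrane index, while the optimal recovery also feels the osmotic pressure and pretreatment cost, mirroring the structure the abstract describes.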

  15. Cost-effectiveness analysis of TOC removal from slaughterhouse wastewater using combined anaerobic-aerobic and UV/H2O2 processes.

    PubMed

    Bustillo-Lecompte, Ciro Fernando; Mehrvar, Mehrab; Quiñones-Bolaños, Edgar

    2014-02-15

    The objective of this study is to evaluate the operating costs of treating slaughterhouse wastewater (SWW) using combined biological and advanced oxidation processes (AOPs). This study compares the performance and the treatment capability of an anaerobic baffled reactor (ABR), an aerated completely mixed activated sludge reactor (AS), and a UV/H2O2 process, as well as their combinations, for the removal of total organic carbon (TOC). Overall efficiencies are found to be up to 75.22, 89.47, 94.53, 96.10, 96.36, and 99.98% for the UV/H2O2, ABR, AS, combined AS-ABR, combined ABR-AS, and combined ABR-AS-UV/H2O2 processes, respectively. Because of the consumption of electrical energy and reagents, operating costs are calculated at the optimal conditions of each process. A cost-effectiveness analysis (CEA) is performed at optimal conditions for the SWW treatment by optimizing the total electricity cost, H2O2 consumption, and hydraulic retention time (HRT). The combined ABR-AS-UV/H2O2 process has an optimal TOC removal of 92.46% at an HRT of 41 h, a cost of $1.25/kg of TOC removed, and $11.60/m³ of treated SWW. This process reaches a maximum TOC removal of 99% in 76.5 h with an estimated cost of $2.19/kg TOC removed and $21.65/m³ treated SWW, equivalent to $6.79/m³ per day.
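
    The two reported cost figures are linked by simple arithmetic: with $C_{\text{in}}$ and $C_{\text{out}}$ the influent and effluent TOC concentrations in kg/m³,

    $$
    \frac{\$}{\text{kg TOC removed}} \;=\; \frac{\$/\text{m}^3 \text{ treated}}{C_{\text{in}} - C_{\text{out}}},
    $$

    so the quoted $11.60/m³ at $1.25/kg implies roughly 9.3 kg of TOC removed per m³ of SWW at the optimum.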

  16. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
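
    In generic notation (not necessarily the paper's), the coupled Riccati-like recursions referred to above take the following standard form for modes $i$ with transition probabilities $p_{ij}$, dynamics $x_{k+1} = A(i)\,x_k + B(i)\,u_k$, and stage cost $x_k^\top Q(i)\,x_k + u_k^\top R(i)\,u_k$:

    $$
    \begin{aligned}
    \bar P_{k+1}(i) &= \sum_j p_{ij}\, P_{k+1}(j),\\
    L_k(i) &= \big(R(i) + B(i)^\top \bar P_{k+1}(i)\, B(i)\big)^{-1} B(i)^\top \bar P_{k+1}(i)\, A(i),\\
    P_k(i) &= Q(i) + A(i)^\top \bar P_{k+1}(i)\, \big(A(i) - B(i)\, L_k(i)\big),
    \end{aligned}
    $$

    with optimal feedback $u_k = -L_k(i)\, x_k$ when the chain is in mode $i$, and expected cost-to-go $x^\top P_k(i)\, x$ from state $x$ in mode $i$ at time $k$. The coupling across modes enters only through the averaged matrices $\bar P_{k+1}(i)$.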

  17. Optimizing conceptual aircraft designs for minimum life cycle cost

    NASA Technical Reports Server (NTRS)

    Johnson, Vicki S.

    1989-01-01

    A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long-range aircraft has demonstrated that the system works well. Results from the study show that the optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. Also, results show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost when the number of engines was considered.

  18. Cost studies for commercial fuselage crown designs

    NASA Technical Reports Server (NTRS)

    Walker, T. H.; Smith, P. J.; Truslove, G.; Willden, K. S.; Metschan, S. L.; Pfahl, C. L.

    1991-01-01

    Studies were conducted to evaluate the cost and weight potential of advanced composite design concepts in the crown region of a commercial transport. Two designs from each of three design families were developed using an integrated design-build team. A range of design concepts and manufacturing processes were included to allow isolation and comparison of cost centers. Detailed manufacturing/assembly plans were developed as the basis for cost estimates. Each of the six designs was found to have advantages over the 1995 aluminum benchmark in cost and weight trade studies. Large quadrant panels and cobonded frames were found to save significant assembly labor costs. Comparisons of high- and intermediate-performance fiber systems were made for skin and stringer applications. Advanced tow placement was found to be an efficient process for skin lay-up. Further analysis revealed attractive processes for stringers and frames. Optimized designs were informally developed for each design family, combining the most attractive concepts and processes within that family. A single optimized design was selected as the most promising, and the potential for further optimization was estimated. Technical issues and barriers were identified.

  19. Optimal synthesis and design of the number of cycles in the leaching process for surimi production.

    PubMed

    Reinheimer, M Agustina; Scenna, Nicolás J; Mussati, Sergio F

    2016-12-01

    Water consumption during the leaching stage of the surimi manufacturing process depends strongly on the design and on the number and size of stages connected in series for the soluble protein extraction target, and it is considered the main contributor to the operating costs. Therefore, the optimal synthesis and design of the leaching stage is essential to minimize the total annual cost. In this study, a mathematical optimization model for the optimal design of the leaching operation is presented. Specifically, a detailed Mixed Integer Nonlinear Programming (MINLP) model including operating and geometric constraints was developed based on our previous optimization model (an NLP model). Aspects of quality, water consumption, and the main operating parameters were considered. The minimization of total annual costs, which considers a trade-off between investment and operating costs, led to an optimal solution with fewer stages (two instead of three) and larger leaching tank volumes compared with previous results. An analysis was performed to investigate how the optimal solution is influenced by variations in the unit costs of fresh water, waste treatment, and capital investment.

  20. New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program

    NASA Technical Reports Server (NTRS)

    Strain, D.; Levy, R.

    1986-01-01

    The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.

  1. Procedure for minimizing the cost per watt of photovoltaic systems

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
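
    One compact way to state the headline result, in generic notation rather than the paper's: if the array costs $\sum_i c_i$ to produce and each fabrication step $i$ passes a fraction $(1-\delta_i)$ of the nominal power $P_0$, then

    $$
    \frac{C}{W} \;=\; \frac{\sum_i c_i}{P_0 \prod_i (1-\delta_i)},
    $$

    so a small fractional power loss $\delta_j$ at any single step multiplies the cost per watt by $1/(1-\delta_j) \approx 1+\delta_j$, i.e., produces the same fractional increase in $/W regardless of where in the chain the loss occurs.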

  2. A Technical Survey on Optimization of Processing Geo Distributed Data

    NASA Astrophysics Data System (ADS)

    Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.

    2018-04-01

    With the growth of cloud services and technology, geographically distributed data centers increasingly store large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, and similar tasks; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation, and communication, the key ones being time efficiency, cost minimization, and utility maximization. This paper describes various optimization methods, such as end-to-end multiphase and G-MR, using techniques such as Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE, and AMP (Ant Colony Optimization) to handle these issues. The various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving quality of service and reducing computation and communication costs; and SAGE achieves performance improvement in processing geo-distributed data sets.

  3. Impact of Capital and Current Costs Changes of the Incineration Process of the Medical Waste on System Management Cost

    NASA Astrophysics Data System (ADS)

    Walery, Maria Jolanta

    2017-12-01

    The article describes optimization studies aimed at analysing the impact of changes in the capital and current costs of medical waste incineration on the cost of the system management and its structure. The study was conducted on the example of the medical waste management system in the Podlaskie Province, in north-eastern Poland. The scope of the operational research carried out under the optimization study was divided into two stages of optimization calculations with assumed technical and economic parameters of the system. In the first stage, the lowest cost of functioning of the analysed system was generated, whereas in the second, the influence of the input parameter of the system, i.e. the capital and current costs of medical waste incineration, on the economic efficiency index (E) and the spatial structure of the system was determined. Optimization studies were conducted for 25%, 50%, 75%, and 100% increases in the capital and current costs of the incineration process. As a result of the calculations, the highest cost of system operation, 3143.70 PLN/t, was obtained under the assumption of a 100% increase in the capital and current costs of the incineration process, with the economic efficiency index (E) increasing by about 97% relative to run 1.

  4. Good Manufacturing Practices (GMP) manufacturing of advanced therapy medicinal products: a novel tailored model for optimizing performance and estimating costs.

    PubMed

    Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra

    2013-03-01

    Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, and then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize the performance of these processes. Manufacturing costs were itemized using an adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool for estimating accurate costs for ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions.

  5. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
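
    For intuition, the biobjective DQPG formulation can be shown on a toy instance in Python: each plan assigns the query's relations to sites, total LPC and CC are evaluated, and the non-dominated plans are extracted by brute force; a GA such as NSGA-II replaces enumeration once the plan space grows too large. The cost tables below are invented for illustration.

    ```python
    # Enumerate site assignments and extract the (LPC, CC) Pareto front.
    from itertools import product

    SITES = [0, 1, 2]
    RELATIONS = ["R1", "R2", "R3"]
    LPC = {("R1", 0): 4, ("R1", 1): 6, ("R1", 2): 5,   # local processing cost of
           ("R2", 0): 7, ("R2", 1): 3, ("R2", 2): 6,   # relation r at site s (invented)
           ("R3", 0): 5, ("R3", 1): 5, ("R3", 2): 2}
    CC = [[0, 2, 5], [2, 0, 3], [5, 3, 0]]             # pairwise link costs (invented)

    def objectives(plan):
        lpc = sum(LPC[(r, s)] for r, s in zip(RELATIONS, plan))
        cc = sum(CC[plan[i]][plan[j]]
                 for i in range(len(plan)) for j in range(i + 1, len(plan)))
        return lpc, cc

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    plans = [(p, objectives(p)) for p in product(SITES, repeat=len(RELATIONS))]
    pareto = [(p, f) for p, f in plans
              if not any(dominates(g, f) for _, g in plans)]
    print("Pareto-optimal plans (site per relation -> (LPC, CC)):")
    for p, f in pareto:
        print(p, "->", f)
    ```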

  6. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  7. Optimization of A2O BNR processes using ASM and EAWAG Bio-P models: model performance.

    PubMed

    El Shorbagy, Walid E; Radif, Nawras N; Droste, Ronald L

    2013-12-01

    This paper presents the performance of an optimization model for a biological nutrient removal (BNR) system using the anaerobic-anoxic-oxic (A2O) process. The formulated model simulates removal of organics, nitrogen, and phosphorus using a reduced International Water Association (IWA) Activated Sludge Model #3 (ASM3) and a Swiss Federal Institute for Environmental Science and Technology (EAWAG) Bio-P module. Optimal sizing is attained considering capital and operational costs. Process performance is evaluated against the effect of influent conditions, effluent limits, and selected parameters of various optimal solutions, with the following results: an increase of influent temperature from 10 °C to 25 °C decreases the annual cost by about 8.5%; an increase of influent flow from 500 to 2500 m³/h triples the annual cost; the A2O BNR system is more sensitive to variations in influent ammonia than phosphorus concentration; and the maximum growth rate of autotrophic biomass was the most sensitive kinetic parameter in the optimization model.

  8. Methodology for the optimal design of an integrated first and second generation ethanol production plant combined with power cogeneration.

    PubMed

    Bechara, Rami; Gomez, Adrien; Saint-Antonin, Valérie; Schweitzer, Jean-Marc; Maréchal, François

    2016-08-01

    The application of methodologies for the optimal design of integrated processes has seen increased interest in the literature. This article builds on previous works and applies a systematic methodology to an integrated first and second generation ethanol production plant with power cogeneration. The methodology comprises process simulation, heat integration, thermo-economic evaluation, multi-variable evolutionary optimization of exergy efficiency versus capital costs, and process selection via profitability maximization. Optimization generated Pareto solutions with exergy efficiency ranging between 39.2% and 44.4% and capital costs from $210M to $390M. The Net Present Value was positive for only two scenarios, at low-efficiency, low-hydrolysis points. The minimum cellulosic ethanol selling price was then sought to obtain a maximum NPV of zero for high-efficiency, high-hydrolysis alternatives. The resulting optimal configuration presented maximum exergy efficiency, hydrolyzed bagasse fraction, capital costs, and ethanol production rate, and minimum cooling water consumption and power production rate.

  9. Electric Propulsion System Selection Process for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul

    2008-01-01

    The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.

  10. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  11. Optimal design of the satellite constellation arrangement reconfiguration process

    NASA Astrophysics Data System (ADS)

    Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid

    2016-08-01

    In this article, a novel approach is introduced for satellite constellation reconfiguration based on Lambert's theorem. Several critical problems arise in the reconfiguration phase, such as minimizing the overall fuel cost, avoiding collisions between the satellites on the final orbital pattern, and determining the maneuvers needed to deploy the satellites to the desired positions in the target constellation. To implement the reconfiguration of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites in the constellation. Also, the dynamics of the problem are modeled in such a way that the optimal assignment of the satellites to the initial and target orbits and the optimal orbital transfer are combined in one step. Finally, we claim that the presented idea, i.e., coupled non-simultaneous flight of satellites from the initial orbital pattern, leads to minimal cost. The obtained results show that, by employing the presented method, the cost of the reconfiguration process is reduced considerably.

  12. Cooperative optimization of reconfigurable machine tool configurations and production process plan

    NASA Astrophysics Data System (ADS)

    Xie, Nan; Li, Aiping; Xue, Wei

    2012-09-01

    The design of the production process plan and the configurations of a reconfigurable machine tool (RMT) interact with each other. Reasonable process plans with suitable RMT configurations help to improve product quality and reduce production cost. Therefore, a cooperative strategy is needed to solve both issues concurrently. In this paper, a cooperative optimization model for RMT configurations and the production process plan is presented, whose objectives take into account the impacts of both process and configuration. Moreover, a novel genetic algorithm is developed to provide optimal or near-optimal solutions: first, its chromosome is redesigned to be composed of three parts: operations, process plan, and RMT configurations; second, new selection, crossover, and mutation operators are developed to deal with the process constraints from the operation processes (OP) graph, which the standard operators would otherwise violate by generating illegal solutions; eventually, the optimal RMT configurations under the optimal process plan design can be obtained. Finally, a manufacturing line case composed of three RMTs is presented. The case shows that the optimal process plan and RMT configurations are obtained concurrently, with the production cost decreasing by 6.28% and non-monetary performance increasing by 22%. The proposed method can determine both RMT configurations and the production process, improving production capacity, functionality, and equipment utilization for RMTs.

  13. Parametric Cost Analysis: A Design Function

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1989-01-01

    Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CER's), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
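
    As a small illustration of how a CER is obtained from data on analogous products, the Python sketch below fits a power-law relationship cost = a * mass^b by least squares in log space; the mass and cost numbers are invented for illustration, not drawn from any real dataset.

    ```python
    # Fit a power-law CER, cost = a * mass^b, via log-linear least squares.
    import numpy as np

    mass = np.array([120.0, 250.0, 400.0, 800.0, 1500.0])  # kg (hypothetical analogues)
    cost = np.array([2.1, 3.8, 5.5, 9.4, 15.2])            # $M (hypothetical)

    # log(cost) = b * log(mass) + log(a); polyfit returns (slope, intercept).
    b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
    a = np.exp(log_a)

    print(f"CER: cost = {a:.3f} * mass^{b:.3f}")
    print("predicted cost for a 600 kg system:", a * 600.0 ** b, "$M")
    ```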

  14. Advanced Structural Optimization Under Consideration of Cost Tracking

    NASA Astrophysics Data System (ADS)

    Zell, D.; Link, T.; Bickelmaier, S.; Albinger, J.; Weikert, S.; Cremaschi, F.; Wiegand, A.

    2014-06-01

    In order to improve the design process of launcher configurations in the early development phase, the Multidisciplinary Optimization (MDO) software was developed. The tool combines efficient software tools such as Optimal Design Investigations (ODIN) for structural optimization and the Aerospace Trajectory Optimization Software (ASTOS) for trajectory and vehicle design optimization for a defined payload and mission. The present paper focuses on the integration and validation of ODIN. ODIN enables the user to optimize typical axisymmetric structures by sizing the stiffening designs for strength and stability while minimizing the structural mass. In addition, a fully automatic finite element model (FEM) generator module creates ready-to-run FEM models of a complete stage or launcher assembly. Cost tracking, as well as future improvements concerning cost optimization, is indicated.

  15. Optimal design of upstream processes in biotransformation technologies.

    PubMed

    Dheskali, Endrit; Michailidi, Katerina; de Castro, Aline Machado; Koutinas, Apostolis A; Kookos, Ioannis K

    2017-01-01

    In this work, a mathematical programming model for the optimal design of the bioreaction section of biotechnological processes is presented. Equations for estimating the equipment cost, derived from a recent publication by the US National Renewable Energy Laboratory (NREL), are also summarized. The cost-optimal design of process units and the optimal scheduling of their operation can be obtained using the proposed formulation, which has been implemented in software available from the journal web page or the corresponding author. The proposed optimization model can be used to quantify the effects of decisions taken at lab scale on the industrial-scale process economics. It is of paramount importance to note that this can be achieved at an early stage of the development of a biotechnological project. Two case studies are presented that demonstrate the usefulness and potential of the proposed methodology.

  16. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and modelling such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure that can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation, and the simulated data are utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, so a simulator that can reduce the computational burden is needed. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (benzene, toluene, ethylbenzene, and xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III in the simulation. The ELM was selected after a comparative analysis with the Artificial Neural Network (ANN) and Support Vector Machine (SVM), which were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
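
    A minimal sketch of the PSO side of such a coupled approach, in Python: the loop below minimizes a cheap quadratic surrogate standing in for the ELM-predicted remediation cost. Swarm size, inertia and acceleration coefficients, bounds, and the surrogate itself are all assumptions for illustration.

    ```python
    # Bare-bones particle swarm optimization over a placeholder cost surrogate.
    import numpy as np

    rng = np.random.default_rng(1)

    def surrogate_cost(x):                        # stands in for the trained ELM proxy
        return np.sum((x - 0.3) ** 2, axis=-1) + 5.0

    n_particles, n_dims, iters = 30, 4, 200
    lo, hi = 0.0, 1.0                             # normalized design-variable bounds
    x = rng.uniform(lo, hi, (n_particles, n_dims))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), surrogate_cost(x)  # per-particle best positions/costs
    gbest = pbest[np.argmin(pbest_f)]             # swarm-wide best position

    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                # keep particles inside the bounds
        f = surrogate_cost(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]

    print("best design:", gbest, "surrogate cost:", pbest_f.min())
    ```

    In the paper's setting, regulatory constraints would additionally be enforced, e.g. by penalizing constraint violations inside the surrogate objective.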

  17. Integration of Mahalanobis-Taguchi system and traditional cost accounting for remanufacturing crankshaft

    NASA Astrophysics Data System (ADS)

    Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd

    2018-04-01

    Remanufacturing is a sustainability strategy that transforms end-of-life products to as-new performance, with a warranty that is the same as or better than that of the original product. To quantify the advantages of this strategy, all the processes must be optimized to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of parameters of end-of-life crankshafts based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting, taking the critical parameters into account. By implementing the optimization, the remanufacturer produces at lower cost and with less waste during production, with higher potential profit. The Mahalanobis-Taguchi System proved to be a powerful optimization method for revealing the criticality of parameters. When the method was applied to the MAN engine model, 5 out of 6 crankpins were critical and required the grinding process, while no changes were needed for the Caterpillar engine model. Accordingly, the cost per unit for the MAN engine model changed from MYR 1401.29 to MYR 1251.29, while the Caterpillar engine model saw no change because no additional parameters were found critical. Therefore, by integrating optimization and costing in the remanufacturing process, a better decision can be reached after observing the potential profit to be gained. The significance of the output is demonstrated through promoting sustainability, by reducing the re-melting of damaged parts and ensuring the consistent benefit of returned cores.

  18. How to design the cost-effectiveness appraisal process of new healthcare technologies to maximise population health: A conceptual framework.

    PubMed

    Johannesen, Kasper M; Claxton, Karl; Sculpher, Mark J; Wailoo, Allan J

    2018-02-01

    This paper presents a conceptual framework to analyse the design of the cost-effectiveness appraisal process of new healthcare technologies. The framework characterises the appraisal processes as a diagnostic test aimed at identifying cost-effective (true positive) and non-cost-effective (true negative) technologies. Using the framework, factors that influence the value of operating an appraisal process, in terms of net gain to population health, are identified. The framework is used to gain insight into current policy questions including (a) how rigorous the process should be, (b) who should have the burden of proof, and (c) how optimal design changes when allowing for appeals, price reductions, resubmissions, and re-evaluations. The paper demonstrates that there is no one optimal appraisal process and the process should be adapted over time and to the specific technology under assessment. Optimal design depends on country-specific features of (future) technologies, for example, effect, price, and size of the patient population, which might explain the difference in appraisal processes across countries. It is shown that burden of proof should be placed on the producers and that the impact of price reductions and patient access schemes on the producer's price setting should be considered when designing the appraisal process.

  19. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  20. Coordinated and uncoordinated optimization of networks

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-06-01

    In this paper, we consider spatial networks that realize a balance between an infrastructure cost (the cost of wire needed to connect the network in space) and communication efficiency, measured by average shortest path length. A global optimization procedure yields network topologies in which this balance is optimized. These are compared with network topologies generated by a competitive process in which each node strives to optimize its own cost-communication balance. Three phases are observed in globally optimal configurations for different cost-communication trade-offs: (i) regular small worlds, (ii) starlike networks, and (iii) trees with a center of interconnected hubs. In the latter regime, i.e., for very expensive wire, power laws in the link length distributions P(w) ∝ w^(-α) are found, which can be explained by a hierarchical organization of the networks. In contrast, in the local optimization process the presence of sharp transitions between different network regimes depends on the dimension of the underlying space. Whereas for d=∞ sharp transitions between fully connected networks, regular small worlds, and highly cliquish periphery-core networks are found, for d=1 sharp transitions are absent and the power law behavior in the link length distribution persists over a much wider range of link cost parameters. The measured power law exponents are in agreement with the hypothesis that the locally optimized networks consist of multiple overlapping suboptimal hierarchical trees.
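
    A minimal sketch of evaluating the cost-communication objective on a concrete spatial network, in Python with networkx; the layout, connection radius, and trade-off weight are arbitrary choices for illustration, not the paper's parameters.

    ```python
    # Blend total wire length with average shortest path length on a spatial graph.
    import networkx as nx

    G = nx.random_geometric_graph(30, radius=0.35, seed=42)
    pos = nx.get_node_attributes(G, "pos")   # node coordinates assigned by the generator

    def euclid(u, v):
        (x1, y1), (x2, y2) = pos[u], pos[v]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    lam = 0.5   # trade-off weight between infrastructure cost and communication
    wire = sum(euclid(u, v) for u, v in G.edges())
    if nx.is_connected(G):
        comm = nx.average_shortest_path_length(G)
        print(f"wire cost {wire:.2f}, avg path {comm:.2f}, "
              f"objective {lam * wire + (1 - lam) * comm:.2f}")
    else:
        print("disconnected layout: infinite average path length, infeasible")
    ```

    A global optimizer would rewire edges to minimize this objective; the local variant in the abstract instead lets each node optimize only its own share of the wire cost and path lengths.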

  1. Allogeneic cell therapy bioprocess economics and optimization: downstream processing decisions.

    PubMed

    Hassan, Sally; Simaria, Ana S; Varadaraju, Hemanthram; Gupta, Siddharth; Warren, Kim; Farid, Suzanne S

    2015-01-01

To develop a decisional tool to identify the most cost-effective process flowsheets for allogeneic cell therapies across a range of production scales, a bioprocess economics and optimization tool was built to assess competing cell expansion and downstream processing (DSP) technologies. Tangential flow filtration was generally more cost-effective for the lower cells/lot achieved in planar technologies, while fluidized bed centrifugation became the only feasible option for handling large bioreactor outputs. DSP bottlenecks were observed at large commercial lot sizes requiring multiple large bioreactors. The DSP contribution to the cost of goods/dose ranged between 20-55% for planar flowsheets and 50-80% for bioreactor flowsheets. This analysis can facilitate early decision-making during process development.

  2. Performance Management and Optimization of Semiconductor Design Projects

    NASA Astrophysics Data System (ADS)

    Hinrichs, Neele; Olbrich, Markus; Barke, Erich

    2010-06-01

The semiconductor industry is characterized by fast technological change and small time-to-market windows. Improving productivity is the key factor in standing up to competitors and thus successfully persisting in the market. In this paper, a Performance Management System for analyzing, optimizing, and evaluating chip design projects is presented. A task graph representation is used to optimize the design process with regard to time, cost, and workload of resources. Key Performance Indicators are defined in the main areas of cost, profit, resources, process, and technical output to appraise the project.

  3. Optimization and cost estimation of novel wheat biorefining for continuous production of fermentation feedstock.

    PubMed

    Arifeen, Najmul; Wang, Ruohang; Kookos, Ioannis; Webb, Colin; Koutinas, Apostolis A

    2007-01-01

A wheat-based continuous process for the production of a nutrient-complete feedstock for bioethanol production by yeast fermentation has been cost-optimized. This process could substitute for the current wheat dry milling process employed in industry for bioethanol production. Each major wheat component (bran, gluten, starch) is extracted and processed for different end-uses. The separate stages, liquefaction and saccharification, used currently in industry for starch hydrolysis have been integrated into a simplified continuous process by exploiting the complex enzymatic consortium produced by on-site fungal bioconversions. A process producing 120 m^3 h^-1 of nutrient-complete feedstock for bioethanol production, containing 250 g L^-1 glucose and 0.85 g L^-1 free amino nitrogen, would result in a production cost of $0.126/kg glucose.
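
    As a quick scale check derived purely from the figures quoted above:

    ```latex
    \[
    120\ \mathrm{m^3\,h^{-1}} \times 250\ \mathrm{g\,L^{-1}}
      = 1.2\times10^{5}\ \mathrm{L\,h^{-1}} \times 0.25\ \mathrm{kg\,L^{-1}}
      = 3\times10^{4}\ \mathrm{kg\ glucose\ h^{-1}},
    \]
    % so at \$0.126/kg the feedstock stream would represent roughly
    % \$3{,}780 of glucose production cost per operating hour.
    ```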

  4. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  5. Biodiesel production from low cost and renewable feedstock

    NASA Astrophysics Data System (ADS)

    Gude, Veera G.; Grant, Georgene E.; Patil, Prafulla D.; Deng, Shuguang

    2013-12-01

Sustainable biodiesel production should: a) utilize low-cost renewable feedstock; b) utilize energy-efficient, nonconventional heating and mixing techniques; c) increase the net energy benefit of the process; and d) utilize renewable feedstock/energy sources where possible. In this paper, we discuss the merits of biodiesel production following these criteria, supported by the experimental results obtained from the process optimization studies. Waste cooking oil, non-edible (low-cost) oils (Jatropha curcas and Camelina sativa) and algae were used as feedstock for biodiesel process optimization. A comparison between conventional and non-conventional methods such as microwaves and ultrasound is reported. Finally, net energy scenarios for different biodiesel feedstock options and algae are presented.

  6. Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization

    DTIC Science & Technology

    2016-09-01

Public Sector Research & Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization, by Jason A. Schwartz. The report describes how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost ...

  7. An experimental strategy validated to design cost-effective culture media based on response surface methodology.

    PubMed

    Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H

    2017-07-03

For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process conditions, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a registered-trademark culture medium and a base culture medium obtained from a screening analysis of different culture media used to grow the same strain according to the literature. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield and found a minimum number of culture medium ingredients that did not limit process efficiency. The resultant culture medium showed an efficiency that compares favorably with the registered-trademark medium at a 95% lower cost, and it reduced the number of ingredients in the base culture medium by 60% without limiting process efficiency. These results demonstrate that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Further study of the process variables for the optimized culture medium and scale-up of production at the optimal values are desirable.

  8. Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System

    NASA Astrophysics Data System (ADS)

    Huang, Long; Feng, Xiao; Chu, Khim H.

    2010-11-01

Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach will also lead to cost savings that accrue due to reduced freshwater usage and wastewater discharge. However, the resulting cost savings are offset to varying degrees by the costs incurred for the regeneration of wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefits for any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that will yield the minimum total cost.

  9. Analyzing the Effect of Multi-fuel and Practical Constraints on Realistic Economic Load Dispatch using Novel Two-stage PSO

    NASA Astrophysics Data System (ADS)

    Chintalapudi, V. S.; Sirigiri, Sivanagaraju

    2017-04-01

In power system restructuring, pricing electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the formulation. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering the active and reactive power costs of generators. The formulated cost function is optimized subject to equality, inequality, and practical constraints using the proposed uniformly distributed two-stage particle swarm optimization. The proposed algorithm combines uniform distribution of control variables (to start the iterative process from a good initial value) with a two-stage initialization process (to reach the best final value in fewer iterations), which enhances the convergence characteristics. Results obtained for the considered standard test functions and electrical systems indicate the effectiveness of the proposed algorithm, which obtains efficient solutions when compared to existing methods. Hence, the proposed method is promising and can easily be applied to optimize power system objectives.
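
    For orientation, a minimal standard global-best PSO is sketched below; it is not the paper's two-stage variant, and the toy quadratic fuel-cost function, bounds, and parameter values are all illustrative assumptions:

    ```python
    # Minimal global-best PSO for minimizing a generic dispatch-style cost
    # function over box constraints. Illustrative sketch, not the paper's
    # uniformly distributed two-stage algorithm.
    import numpy as np

    def pso(cost, lb, ub, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        dim = len(lb)
        # uniformly distributed initial positions over the bound constraints
        x = rng.uniform(lb, ub, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()
        for _ in range(n_iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lb, ub)  # enforce the bound (inequality) constraints
            f = np.array([cost(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()

    # Toy quadratic fuel-cost curve, a + b*P + c*P^2 summed over two generators
    best, fbest = pso(lambda P: 10 + 2.0 * P.sum() + 0.05 * (P ** 2).sum(),
                      lb=np.array([10.0, 10.0]), ub=np.array([100.0, 100.0]))
    ```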

  10. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    PubMed

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of optimization reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Milestones of mathematical model for business process management related to cost estimate documentation in petroleum industry

    NASA Astrophysics Data System (ADS)

    Khamidullin, R. I.

    2018-05-01

The paper is devoted to milestones of an optimal mathematical model for a business process related to cost estimate documentation compiled during the construction and reconstruction of oil and gas facilities. It describes the study and analysis of fundamental issues in the petroleum industry caused by economic instability and deterioration of business strategy. Business process management is presented as business process modeling aimed at improving the studied business process, namely the main criteria of optimization and recommendations for the improvement of the above-mentioned business model.

  12. Neural-network-based online HJB solution for optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems.

    PubMed

    Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong

    2014-12-01

    In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using neural-network-based online solution of Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced for helping to verify the stability, which reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.
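
    For reference, the generic infinite-horizon optimal control problem and HJB equation that this family of methods solves approximately can be stated as follows (generic notation, not the paper's exact modified cost function, which additionally absorbs the uncertainty bound into the cost rate):

    ```latex
    % Nominal dynamics \dot{x} = f(x) + g(x)u with cost rate
    % r(x,u) = Q(x) + u^{\top} R u (generic form, assumed for illustration).
    \begin{align*}
    V^{*}(x_0) &= \min_{u(\cdot)} \int_{0}^{\infty} r\bigl(x(t),u(t)\bigr)\,dt,\\
    0 &= \min_{u}\Bigl[ r(x,u) + \nabla V^{*}(x)^{\top}\bigl(f(x)+g(x)u\bigr) \Bigr],\\
    u^{*}(x) &= -\tfrac{1}{2}\,R^{-1} g(x)^{\top}\,\nabla V^{*}(x),
    \end{align*}
    % where the critic network approximates V^{*} so that the HJB residual is
    % driven toward zero during training.
    ```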

  13. Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Deyu

    2012-03-01

    In this paper a hybrid process of modeling and optimization, which integrates a support vector machine (SVM) and genetic algorithm (GA), was introduced to reduce the high time cost in structural optimization of ships. SVM, which is rooted in statistical learning theory and an approximate implementation of the method of structural risk minimization, can provide a good generalization performance in metamodeling the input-output relationship of real problems and consequently cuts down on high time cost in the analysis of real problems, such as FEM analysis. The GA, as a powerful optimization technique, possesses remarkable advantages for the problems that can hardly be optimized with common gradient-based optimization methods, which makes it suitable for optimizing models built by SVM. Based on the SVM-GA strategy, optimization of structural scantlings in the midship of a very large crude carrier (VLCC) ship was carried out according to the direct strength assessment method in common structural rules (CSR), which eventually demonstrates the high efficiency of SVM-GA in optimizing the ship structural scantlings under heavy computational complexity. The time cost of this optimization with SVM-GA has been sharply reduced, many more loops have been processed within a small amount of time and the design has been improved remarkably.
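
    A minimal sketch of the SVM-GA idea follows: a handful of expensive evaluations (standing in for FEM runs) train an SVR metamodel, and a simple GA then searches the cheap surrogate. The test function, scikit-learn usage, and all parameter values are illustrative assumptions:

    ```python
    # Surrogate-assisted GA sketch: fit an SVR metamodel on a few expensive
    # samples, then evolve candidates against the surrogate instead of the
    # costly model. Illustrative only.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    expensive = lambda x: (x[..., 0] - 2) ** 2 + 0.5 * (x[..., 1] + 1) ** 2  # stand-in for FEM

    # 1) Metamodel: train SVR on a small design-of-experiments sample
    X = rng.uniform(-5, 5, size=(60, 2))
    surrogate = SVR(kernel="rbf", C=100.0).fit(X, expensive(X))

    # 2) GA on the surrogate: tournament selection, blend crossover, mutation
    pop = rng.uniform(-5, 5, size=(40, 2))
    for _ in range(100):
        fit = surrogate.predict(pop)
        idx = np.array([min(rng.integers(0, 40, 2), key=lambda i: fit[i]) for _ in range(40)])
        parents = pop[idx]
        alpha = rng.random((40, 2))
        children = alpha * parents + (1 - alpha) * parents[::-1]   # blend crossover
        children += rng.normal(0, 0.2, children.shape)             # Gaussian mutation
        pop = np.clip(children, -5, 5)

    best = pop[np.argmin(surrogate.predict(pop))]
    print(best, expensive(best))  # check the surrogate optimum against the true model
    ```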

  14. An optimal policy for a single-vendor and a single-buyer integrated system with setup cost reduction and process-quality improvement

    NASA Astrophysics Data System (ADS)

    Shu, Hui; Zhou, Xideng

    2014-05-01

The single-vendor single-buyer integrated production inventory system has been an object of study for a long time, but little is known about the effect of investing in setup cost reduction and process-quality improvement for an integrated inventory system in which the products are sold with a free minimal-repair warranty. The purpose of this article is to minimise the integrated cost by simultaneously optimising the number of shipments, the shipment quantity, the setup cost, and the process quality. An efficient algorithm is proposed for determining the optimal decision variables. A numerical example is presented to illustrate the results of the proposed models graphically. Sensitivity analysis of the model with respect to key parameters of the system is carried out. The paper shows that the proposed integrated model can result in significant savings in the integrated cost.

  15. Spatial multiobjective optimization of agricultural conservation practices using a SWAT model and an evolutionary algorithm.

    PubMed

    Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine

    2012-12-09

    Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.,(5,12,20)) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires a development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed, simultaneously minimizing the cost of conservation practices. A recent and expanding set of research is attempting to use similar methods and integrates water quality models with broadly defined evolutionary optimization methods(3,4,9,10,13-15,17-19,22,23,25). In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model(7) with a multiobjective evolutionary algorithm SPEA2(26), and user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by the watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for a selection of watershed configurations achieving specified water quality improvement goals and a production of maps of optimized placement of conservation practices.
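
    At the core of such multiobjective searches is the nondominated filter that retains the cost/water-quality tradeoff frontier. A minimal sketch with hypothetical (cost, pollution) pairs, both minimized:

    ```python
    # Minimal nondominated filter: keep the (cost, pollution) pairs that no
    # other candidate beats on both objectives, i.e. the tradeoff frontier.
    # The candidate values below are purely illustrative.
    def nondominated(points):
        """points: list of (cost, pollution) tuples, both minimized."""
        front = []
        for p in points:
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
                front.append(p)
        return sorted(front)

    candidates = [(100, 9.0), (120, 7.5), (110, 9.5), (150, 6.0), (130, 7.5)]
    print(nondominated(candidates))  # [(100, 9.0), (120, 7.5), (150, 6.0)]
    ```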

  16. Enabling Incremental Query Re-Optimization.

    PubMed

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
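
    The flavor of cost-based plan enumeration such an optimizer must maintain can be sketched as a dynamic program over table subsets. The toy cost model below is our assumption, and the sketch recomputes naively where an incremental optimizer would reuse the memoized entries untouched by the cost change:

    ```python
    # Toy dynamic program over join orders with a memoized best-cost table.
    # When a base-table estimate changes, only subsets containing that table
    # actually need recomputation; this naive sketch recomputes everything.
    from itertools import combinations

    def plan_costs(base, join_factor=0.1):
        """base: dict table -> scan cost. Returns best cost per table subset."""
        tables = sorted(base)
        best = {frozenset([t]): base[t] for t in tables}
        for size in range(2, len(tables) + 1):
            for subset in map(frozenset, combinations(tables, size)):
                best[subset] = min(
                    best[subset - {t}] + base[t]
                    + join_factor * best[subset - {t}] * base[t]
                    for t in subset)
        return best

    base = {"R": 100, "S": 50, "T": 20}
    full = frozenset(base)
    print(plan_costs(base)[full])
    base["S"] = 500                    # new runtime cost information arrives
    print(plan_costs(base)[full])      # full recompute; an incremental
                                       # optimizer would keep subsets without S
    ```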

  18. Superstructure-based Design and Optimization of Batch Biodiesel Production Using Heterogeneous Catalysts

    NASA Astrophysics Data System (ADS)

    Nuh, M. Z.; Nasir, N. F.

    2017-08-01

Biodiesel is a fuel comprised of mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks, such as vegetable oils and animal fats. Biodiesel production is a complex process that requires systematic design and optimization; however, no case study has applied the process system engineering (PSE) element of superstructure optimization to the batch process, which involves complex problems and is formulated as a mixed-integer nonlinear program (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study aims, first, to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for the plant components in cases A, B, and C using published kinetic data; second, to determine the economic analysis for biodiesel production, focusing on heterogeneous catalysts; and finally, to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer nonlinear model is solved and the economic analysis estimated using MATLAB software. The optimization with the objective of minimizing the annual production cost of the batch process yields 23.2587 million USD for case C. Overall, applying PSE has streamlined the modelling, design, and cost estimation, and optimizing the process resolves the complex batch production and processing of biodiesel.

  19. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  20. Estimation of the laser cutting operating cost by support vector regression methodology

    NASA Astrophysics Data System (ADS)

    Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam

    2016-09-01

Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position as well as the workpiece material. In this article, the process factors investigated were: laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. The intelligent soft computing scheme support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results are then compared with artificial neural network and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through the SVR compared to other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.

  1. Study on Operation Optimization of Pumping Station's 24 Hours Operation under Influences of Tides and Peak-Valley Electricity Prices

    NASA Astrophysics Data System (ADS)

    Yi, Gong; Jilin, Cheng; Lihua, Zhang; Rentian, Zhang

    2010-06-01

According to different processes of tides and peak-valley electricity prices, this paper determines the optimal start-up time for a pumping station's 24-hour operation in the rating state and the adjusting-blade-angle state respectively, based on the optimization objective function and optimization model for a single-unit pump's 24-hour operation, taking JiangDu No. 4 Pumping Station as an example. The paper also proposes the following regularities between the optimal start-up time of the pumping station and the daily processes of tides and peak-valley electricity prices within a month: (1) In the rating and adjusting-blade-angle states, the optimal start-up time for 24-hour operation depends on the tide generation of the same day and varies with the process of tides; there are mainly two kinds of optimal start-up time, namely the time of tide generation and 12 hours after it. (2) In the rating state, the optimal start-up time on each day of a month exhibits a rule of symmetry from the 29th to the 28th of the next month in the lunar calendar. The time of tide generation usually falls in a period of peak or valley electricity price, and a higher electricity price corresponds to a higher minimum unit cost of water pumping, which means that the minimum unit cost of water pumping depends on the peak-valley electricity price at the time of tide generation on the same day. (3) In the adjusting-blade-angle state, the minimum unit cost of water pumping in 24-hour operation depends on the process of peak-valley electricity prices, and operating in this state saves 4.85%-5.37% of the minimum unit cost of water pumping compared with the rating state.

  2. Designer's unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.

    1992-01-01

A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer-aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for the development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  3. Composite fuselage crown panel manufacturing technology

    NASA Technical Reports Server (NTRS)

    Willden, Kurtis; Metschan, S.; Grant, C.; Brown, T.

    1992-01-01

Commercial fuselage structures contain significant challenges in attempting to save manufacturing costs with advanced composite technology. Assembly issues, material costs, and fabrication of elements with complex geometry are each expected to drive the cost of composite fuselage structure. Key technologies, such as large crown panel fabrication, were pursued for low cost. An intricate bond panel design and manufacturing concept were selected based on the efforts of the Design Build Team. The manufacturing processes selected for the intricate bond design include multiple large panel fabrication with the Advanced Tow Placement (ATP) process, innovative cure tooling concepts, resin transfer molding of long fuselage frames, and use of low-cost material forms. The process optimization for the final design/manufacturing configuration included factory simulations and hardware demonstrations. These efforts and other optimization tasks were instrumental in reducing costs by 18 percent and weight by 45 percent relative to an aluminum baseline. The qualitative and quantitative results of the manufacturing demonstrations were used to assess manufacturing risks and technology readiness.

  4. Composite fuselage crown panel manufacturing technology

    NASA Technical Reports Server (NTRS)

    Willden, Kurtis; Metschan, S.; Grant, C.; Brown, T.

    1992-01-01

    Commercial fuselage structures contain significant challenges in attempting to save manufacturing costs with advanced composite technology. Assembly issues, material costs, and fabrication of elements with complex geometry are each expected to drive the cost of composite fuselage structures. Boeing's efforts under the NASA ACT program have pursued key technologies for low-cost, large crown panel fabrication. An intricate bond panel design and manufacturing concepts were selected based on the efforts of the Design Build Team (DBT). The manufacturing processes selected for the intricate bond design include multiple large panel fabrication with the Advanced Tow Placement (ATP) process, innovative cure tooling concepts, resin transfer molding of long fuselage frames, and utilization of low-cost material forms. The process optimization for final design/manufacturing configuration included factory simulations and hardware demonstrations. These efforts and other optimization tasks were instrumental in reducing cost by 18 percent and weight by 45 percent relative to an aluminum baseline. The qualitative and quantitative results of the manufacturing demonstrations were used to assess manufacturing risks and technology readiness.

  5. Bioreactor performance: a more scientific approach for practice.

    PubMed

    Lübbert, A; Bay Jørgensen, S

    2001-02-13

    In practice, the performance of a biochemical conversion process, i.e. the bioreactor performance, is essentially determined by the benefit/cost ratio. The benefit is generally defined in terms of the amount of the desired product produced and its market price. Cost reduction is the major objective in biochemical engineering. There are two essential engineering approaches to minimizing the cost of creating a particular product in an existing plant. One is to find a control path or operational procedure that optimally uses the dynamics of the process and copes with the many constraints restricting production. The other is to remove or lower the constraints by constructive improvements of the equipment and/or the microorganisms. This paper focuses on the first approach, dealing with optimization of the operational procedure and the measures by which one can ensure that the process adheres to the predetermined path. In practice, feedforward control is the predominant control mode applied. However, as it is frequently inadequate for optimal performance, feedback control may also be employed. Relevant aspects of such performance optimization are discussed.

  6. Development of cost-effective surfactant flooding technology, Quarterly report, October 1995--December 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Sepehrnoori, K.

    1995-12-31

The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we have continued working on Task 2 to optimize surfactant flooding design and have added economic analysis to the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision-making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.
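
    The DCF criteria named above are standard; a minimal sketch with hypothetical cash flows (NPV by direct discounting, IRR by bisection on the NPV sign change) might look like this:

    ```python
    # NPV and IRR from first principles. Cash flows are illustrative.
    def npv(rate, cashflows):
        # cashflows[0] is the time-0 investment, negative by convention
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

    def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
        # assumes exactly one sign change of NPV on [lo, hi]
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    flows = [-1000.0, 300.0, 400.0, 400.0, 300.0]   # hypothetical project
    print(round(npv(0.10, flows), 2), round(irr(flows), 4))
    ```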

  7. A design optimization process for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Chamberlain, Robert G.; Fox, George; Duquette, William H.

    1990-01-01

    The Space Station Freedom Program is used to develop and implement a process for design optimization. Because the relative worth of arbitrary design concepts cannot be assessed directly, comparisons must be based on designs that provide the same performance from the point of view of station users; such designs can be compared in terms of life cycle cost. Since the technology required to produce a space station is widely dispersed, a decentralized optimization process is essential. A formulation of the optimization process is provided and the mathematical models designed to facilitate its implementation are described.

  8. Bioregenerative food system cost based on optimized menus for advanced life support

    NASA Technical Reports Server (NTRS)

    Waters, Geoffrey C R.; Olabi, Ammar; Hunter, Jean B.; Dixon, Mike A.; Lasseur, Christophe

    2002-01-01

Optimized menus for a bioregenerative life support system have been developed based on measures of crop productivity, food item acceptability, menu diversity, and nutritional requirements of crew. Crop-specific biomass requirements were calculated from menu recipe demands while accounting for food processing and preparation losses. Under the assumption of staggered planting, the optimized menu demanded a total crop production area of 453 m^2 for six crew. Cost of the bioregenerative food system is estimated at 439 kg per menu cycle or 7.3 kg ESM crew^-1 day^-1, including agricultural waste processing costs. On average, about 60% (263.6 kg ESM) of the food system cost is tied up in equipment, 26% (114.2 kg ESM) in labor, and 14% (61.5 kg ESM) in power and cooling. This number is high compared to the STS and ISS (nonregenerative) systems, but reductions in ESM may be achieved through intensive crop productivity improvements, reductions in equipment masses associated with crop production, and planning of production, processing, and preparation to minimize the requirement for crew labor.
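
    The stated breakdown can be cross-checked directly, and dividing the per-cycle mass by the per-crew daily figure suggests (our inference, not stated in the record) a roughly ten-day menu cycle:

    ```latex
    \[
    263.6 + 114.2 + 61.5 \approx 439.3\ \mathrm{kg\ ESM},
    \qquad
    \frac{439\ \mathrm{kg\ ESM}}{6 \times 7.3\ \mathrm{kg\ ESM\,crew^{-1}\,day^{-1}}}
      \approx 10\ \text{days}.
    \]
    ```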

  9. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
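
    One way to write down the kind of model described, with symbols that are our assumptions rather than the paper's notation (k subjects, m occasions per subject, n measurements per occasion), is:

    ```latex
    % Illustrative three-stage nested variance model and power-function cost
    % model; minimize the variance subject to the budget constraint.
    \[
    \operatorname{Var}(\bar{y})
      = \frac{\sigma^{2}_{S}}{k} + \frac{\sigma^{2}_{O}}{k\,m} + \frac{\sigma^{2}_{E}}{k\,m\,n},
    \qquad
    C = c_{1} k^{a_{1}} + c_{2} (k m)^{a_{2}} + c_{3} (k m n)^{a_{3}} \le B .
    \]
    ```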

  11. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk

    2016-08-15

In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  12. A hybrid optimization approach in non-isothermal glass molding

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz

    2016-10-01

Intensively growing demands for complex yet low-cost precision glass optics from today's photonics market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Compared with state-of-the-art replication-based methods, Non-isothermal Glass Molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precise requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the beginning, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach for Non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. A hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) is first carried out to find the optimal process parameters and ensure the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in Non-isothermal glass molding.

  13. Optimizing sterilization logistics in hospitals.

    PubMed

    van de Klundert, Joris; Muls, Philippe; Schadd, Maarten

    2008-03-01

    This paper deals with the optimization of the flow of sterile instruments in hospitals which takes place between the sterilization department and the operating theatre. This topic is especially of interest in view of the current attempts of hospitals to cut cost by outsourcing sterilization tasks. Oftentimes, outsourcing implies placing the sterilization unit at a larger distance, hence introducing a longer logistic loop, which may result in lower instrument availability, and higher cost. This paper discusses the optimization problems that have to be solved when redesigning processes so as to improve material availability and reduce cost. We consider changing the logistic management principles, use of visibility information, and optimizing the composition of the nets of sterile materials.

  14. No Cost – Low Cost Compressed Air System Optimization in Industry

    NASA Astrophysics Data System (ADS)

    Dharma, A.; Budiarsa, N.; Watiniasih, N.; Antara, N. G.

    2018-04-01

Energy conservation is a systematic, integrated effort to preserve energy sources and improve the efficiency of energy utilization, that is, to use energy efficiently without cutting the energy use that is genuinely required. Energy conservation efforts are applied at all stages of utilization, from the exploitation of energy resources to final use, by employing efficient technology and cultivating an energy-efficient lifestyle. The most common way is to promote energy efficiency in industry at the point of end use and to overcome barriers to achieving such efficiency by using system energy optimization programs. Experience shows that energy-saving efforts in a process usually focus only on replacing equipment and not on improving the system as a whole. In this research, a framework for sustained energy reduction in companies that have or have not implemented an energy management system (EnMS) is applied through a systematic technical approach to accurately evaluating a compressed-air system and its optimization potential, using observation, measurement and verification of environmental conditions and processes. Physical quantities of the system, such as air flow, pressure and electrical power, are measured at given times and analyzed using comparative methods; this system-level approach offers greater potential energy savings than a component-level approach, at no cost to low cost (no cost - low cost). The process of evaluating energy utilization and energy-saving opportunities provides recommendations for increasing efficiency in industry, reducing CO2 emissions and improving environmental quality.

  15. Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes

    PubMed Central

    Dobos, László; Király, András; Abonyi, János

    2012-01-01

    Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
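
    A hedged sketch of the framework's core loop: pick a setpoint by direct search over Monte-Carlo simulations, trading performance against the frequency of constraint violations. The toy process model and every parameter below are illustrative assumptions:

    ```python
    # Direct search over Monte-Carlo simulations: each candidate setpoint is
    # scored by simulated profit minus a penalty when the estimated
    # constraint-violation frequency exceeds the allowed risk level.
    import numpy as np

    rng = np.random.default_rng(1)

    def expected_loss(setpoint, limit=10.0, risk_level=0.05, n_mc=2000):
        # Toy process: achieved quality = setpoint + noise; profit grows with
        # the setpoint, but exceeding `limit` violates a quality constraint.
        outcome = setpoint + rng.normal(0.0, 1.0, n_mc)
        violation_freq = np.mean(outcome > limit)
        profit = setpoint                       # stand-in for economic performance
        penalty = 1e3 * max(0.0, violation_freq - risk_level)
        return -profit + penalty

    # Direct (grid) search over candidate setpoints
    candidates = np.linspace(5.0, 10.0, 101)
    best = min(candidates, key=expected_loss)
    print(round(best, 2))  # pushes toward the limit while keeping risk near 5%
    ```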

  16. Optimal design of zero-water discharge rinsing systems.

    PubMed

    Thöming, Jorg

    2002-03-01

    This paper is about zero liquid discharge in processes that use water for rinsing. Emphasis was given to those systems that contaminate process water with valuable process liquor and compounds. The approach involved the synthesis of optimal rinsing and recycling networks (RRN) that had a priori excluded water discharge. The total annualized costs of the RRN were minimized by the use of a mixed-integer nonlinear program (MINLP). This MINLP was based on a hyperstructure of the RRN and contained eight counterflow rinsing stages and three regenerator units: electrodialysis, reverse osmosis, and ion exchange columns. A "large-scale nickel plating process" case study showed that by means of zero-water discharge and optimized rinsing the total waste could be reduced by 90.4% at a revenue of $448,000/yr. Furthermore, with the optimized RRN, the rinsing performance can be improved significantly at a low-cost increase. In all the cases, the amount of valuable compounds reclaimed was above 99%.

  17. A spatially explicit whole-system model of the lignocellulosic bioethanol supply chain: an assessment of decentralised processing potential

    PubMed Central

    Dunnett, Alex J; Adjiman, Claire S; Shah, Nilay

    2008-01-01

Background: Lignocellulosic bioethanol technologies exhibit significant capacity for performance improvement across the supply chain through the development of high-yielding energy crops, integrated pretreatment, hydrolysis and fermentation technologies and the application of dedicated ethanol pipelines. The impact of such developments on cost-optimal plant location, scale and process composition within multiple plant infrastructures is poorly understood. A combined production and logistics model has been developed to investigate cost-optimal system configurations for a range of technological, system scale, biomass supply and ethanol demand distribution scenarios specific to European agricultural land and population densities. Results: Ethanol production costs for current technologies decrease significantly from $0.71 to $0.58 per litre with increasing economies of scale, up to a maximum single-plant capacity of 550 × 10^6 l year^-1. The development of high-yielding energy crops and consolidated bio-processing realises significant cost reductions, with production costs ranging from $0.33 to $0.36 per litre. Increased feedstock yields result in systems of eight fully integrated plants operating within a 500 × 500 km^2 region, each producing between 1.24 × 10^9 and 2.38 × 10^9 l year^-1 of pure ethanol. A limited potential for distributed processing and centralised purification systems is identified, requiring developments in modular, ambient pretreatment and fermentation technologies and the pipeline transport of pure ethanol. Conclusion: The conceptual and mathematical modelling framework developed provides a valuable tool for the assessment and optimisation of the lignocellulosic bioethanol supply chain. In particular, it can provide insight into the optimal configuration of multiple plant systems. This information is invaluable in ensuring (near-)cost-optimal strategic development within the sector at the regional and national scale. The framework is flexible and can thus accommodate a range of processing tasks, logistical modes, by-product markets and impacting policy constraints. Significant scope for application to real-world case studies through dynamic extensions of the formulation has been identified. PMID:18662392

  18. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

    This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis, and (3) optimization. The product and process definitions are part of input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depend largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.

  19. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. The tool was developed primarily for the optimization of a novel micro-heliostat concept developed at Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization, performance is calculated using the ray-tracing tool SolCal, and the costs of the heliostats are calculated with a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts, and for each configuration a cost-performance ratio is calculated. Based on that, the best geometry and field layout can be selected in each optimization step. In order to find the best configuration, this step is repeated until no significant improvement in the results is observed.

  20. Optimizing The DSSC Fabrication Process Using Lean Six Sigma

    NASA Astrophysics Data System (ADS)

    Fauss, Brian

    Alternative energy technologies must become more cost effective to achieve grid parity with fossil fuels. Dye-sensitized solar cells (DSSCs) are an innovative third-generation photovoltaic technology that is demonstrating tremendous potential to become a revolutionary technology, owing to recent breakthroughs in the cost of fabrication. The study here focused on quality improvement measures undertaken to improve the fabrication of DSSCs and to enhance process efficiency and effectiveness. Several quality improvement methods were implemented to optimize the seven individual steps of the DSSC fabrication process. Lean Manufacturing's 5S method successfully increased efficiency in all of the processes. Six Sigma's DMAIC methodology was used to identify and eliminate each of the root causes of defects in the critical titanium dioxide deposition process. These optimizations resulted in the following significant improvements in the production process: 1. fabrication time of the DSSCs was reduced by 54%; 2. fabrication procedures were improved to the extent that all critical defects in the process were eliminated; 3. the proportion of functioning DSSCs fabricated was increased from 17% to 90%.

  1. Design optimization of space launch vehicles using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Bayley, Douglas James

    The United States Air Force (USAF) continues to have a need for assured access to space. In addition to flexible and responsive spacelift, a reduction in the cost per launch of space launch vehicles is also desirable. For this purpose, an investigation of the design optimization of space launch vehicles has been conducted. Using a suite of custom codes, the performance aspects of an entire space launch vehicle were analyzed. A genetic algorithm (GA) was employed to optimize the design of the space launch vehicle, and a cost model was incorporated into the optimization process with the goal of minimizing the overall vehicle cost. The other goals of the design optimization included obtaining the proper altitude and velocity to achieve a low-Earth orbit. Specific mission parameters particular to USAF space endeavors were specified at the start of the design optimization process. Solid propellant motors, liquid fueled rockets, and air-launched systems in various configurations provided the propulsion systems for two-, three- and four-stage launch vehicles. Mass properties models, an aerodynamics model, and a six-degree-of-freedom (6DOF) flight dynamics simulator were all used to model the system. The results show the feasibility of this method in designing launch vehicles that meet mission requirements. Comparisons to existing real-world systems provide the validation for the physical system models. However, the ability to obtain a truly minimized cost was elusive. The cost model uses an industry-standard approach; however, validation of this portion of the model was challenging due to the proprietary nature of cost figures and due to the dependence of many existing systems on surplus hardware.

  2. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

    2017-06-06

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced-order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.
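
    The lexicographic ordering can be illustrated with a brute-force toy version: first find the maximum achievable efficiency over all sensor subsets, then pick the cheapest subset whose efficiency stays within a tolerance of that optimum. The five candidate sensors, their prices, and the efficiency model below are hypothetical placeholders for the paper's UKF-based process model and genetic-algorithm search.

        import itertools

        SENSORS = range(5)
        PRICE = [3.0, 1.0, 2.5, 1.5, 2.0]          # hypothetical sensor purchase costs
        VALUE = [0.30, 0.05, 0.25, 0.10, 0.20]     # hypothetical efficiency contributions

        def eff(S):    # stand-in for the estimator-based process-efficiency evaluation
            return 0.5 + 0.5 * sum(VALUE[i] for i in S)

        def cost(S):
            return sum(PRICE[i] for i in S)

        subsets = [S for r in range(6) for S in itertools.combinations(SENSORS, r)]
        best_eff = max(eff(S) for S in subsets)                  # stage 1: efficiency first
        feasible = [S for S in subsets if eff(S) >= 0.95 * best_eff]
        best = min(feasible, key=cost)                           # stage 2: cheapest near-optimum
        print(best, round(eff(best), 3), cost(best))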

  4. TRU Waste Management Program cost/schedule optimization analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detamore, J.A.; Raudenbush, M.H.; Wolaver, R.W.

    1985-10-01

    The cost/schedule optimization task is a necessary function to ensure that program goals and plans are optimized from a cost and schedule aspect. Results of this study will offer DOE information with which it can establish, within institutional constraints, the most efficient program for the long-term management and disposal of contact-handled transuranic waste (CH-TRU). To this end, a comprehensive review of program cost/schedule tradeoffs has been made to identify any major cost-saving opportunities that may be realized by modification of current program plans. It was decided that all promising scenarios would be explored and that institutional limitations to implementation would be described. Since a virtually limitless number of possible scenarios can be envisioned, it was necessary to distill these possibilities into a manageable number of alternatives. The resultant scenarios were described in the cost/schedule strategy and work plan document. Each scenario was compared with the base case: waste processing at the originating site; transport of CH-TRU wastes in TRUPACT; shipment of drums in 6-packs; 25-year stored waste workoff; WIPP operational 10/88, with all sites shipping to WIPP beginning 10/88; and no processing at WIPP. Major savings were identified in two alternate scenarios: centralize waste processing at INEL and eliminate rail shipment of TRUPACT. No attempt was made to calculate savings due to combinations of scenarios. 1 ref., 5 figs., 1 tab. (MHB)

  5. Microfiltration of thin stillage: Process simulation and economic analyses

    USDA-ARS?s Scientific Manuscript database

    In plant scale operations, multistage membrane systems have been adopted for cost minimization. We considered design optimization and operation of a continuous microfiltration (MF) system for the corn dry grind process. The objectives were to develop a model to simulate a multistage MF system, optim...

  6. Designers' unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.

    1992-01-01

    The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  7. 28nm node process optimization: a lithography centric view

    NASA Astrophysics Data System (ADS)

    Seltmann, Rolf

    2014-10-01

    Many experts claim that the 28nm technology node will be the most cost-effective technology node ever. This results primarily from the cost of manufacturing, since 28nm is the last true single patterning (SP) node. It is also driven by the dramatic increase in design costs and the limited shrink factor of the following nodes. Thus, it is assumed that this technology will remain alive for many years. To be cost competitive, high yields are mandatory. Meanwhile, leading-edge foundries have optimized the yield of the 28nm node to such a level that it is nearly exclusively defined by random defectivity. However, it was a long way to reach that level. In this talk I concentrate on the contribution of lithography to this yield learning curve, choosing a critical metal patterning application, and show what was needed to optimize the process window to a level beyond the usual OPC model work that was common on previous nodes. Reducing process variability, in particular focus variability, is a complementary need. It will be shown which improvements were needed in tooling, process control and design-mask-wafer interaction to remove all systematic yield detractors. Over the last couple of years, new scanner platforms were introduced that targeted both better productivity and better parametric performance. This was not a straightforward path, however: it took extra effort from the tool suppliers together with the fab to bring tool variability down to the necessary level. Another important topic in reducing variability is the interaction of wafer non-planarity and lithography optimization; accurate knowledge of within-die topography is essential for optimum patterning. By completing both the variability reduction work and the process window enhancement work, we were able to transform the original marginal process budget into a robust positive budget, thus ensuring high yield and low costs.

  8. Application of Particle Swarm Optimization in Computer Aided Setup Planning

    NASA Astrophysics Data System (ADS)

    Kafashi, Sajad; Shakeri, Mohsen; Abedini, Vahid

    2011-01-01

    Recent research seeks to integrate computer-aided design (CAD) and computer-aided manufacturing (CAM) environments. The role of process planning is to convert the design specification into manufacturing instructions. Setup planning plays a basic role in computer-aided process planning (CAPP) and significantly affects the overall cost and quality of the machined part. This research focuses on the automatic generation of setups and on finding the best setup plan under feasible conditions. To computerize the setup planning process, three major steps are performed in the proposed system: (a) extraction of the machining data of the part; (b) analysis and generation of all possible setups; and (c) optimization to reach the best setup plan based on cost functions. Considering workshop resources such as machine tools, cutters and fixtures, all feasible setups can be generated. The problem is then combined with technological constraints such as TAD (tool approach direction), tolerance relationships and feature precedence relationships to make the approach completely realistic and practical. The optimal setup plan results from applying the PSO (particle swarm optimization) algorithm to the system using cost functions. A real sample part is illustrated to demonstrate the performance and productivity of the system.

  9. An optimization modeling approach to awarding large fire support wildfire helicopter contracts from the US Forest Service

    Treesearch

    Stephanie A. Snyder; Keith D. Stockmann; Gaylord E. Morris

    2012-01-01

    The US Forest Service used contracted helicopter services as part of its wildfire suppression strategy. An optimization decision-modeling system was developed to assist in the contract selection process. Three contract award selection criteria were considered: cost per pound of delivered water, total contract cost, and quality ratings of the aircraft and vendors....

  10. GMOtrack: generator of cost-effective GMO testing strategies.

    PubMed

    Novak, Petra Krau; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana

    2009-01-01

    Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
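
    The cost-optimization idea behind a two-phase (screening-identification) strategy can be sketched as follows: enumerate candidate screening sets and estimate the expected total testing cost per sample under local prior probabilities of GMO presence. The markers, priors, and per-test costs below are invented for illustration, and Monte Carlo estimation stands in for the exact expected-cost computation that GMOtrack performs.

        import itertools, random

        random.seed(1)
        GMOS = ["A", "B", "C", "D"]
        PRIOR = {"A": 0.30, "B": 0.10, "C": 0.05, "D": 0.02}   # local prior presence probabilities
        SCREENS = {"35S": {"A", "B", "C"}, "NOS": {"B", "D"}}  # screening marker -> GMOs it detects
        C_SCREEN, C_ID = 1.0, 4.0                              # hypothetical per-test costs

        def expected_cost(screen_set, n=10000):
            total = 0.0
            for _ in range(n):
                present = {g for g in GMOS if random.random() < PRIOR[g]}
                fired = {t for t in screen_set if SCREENS[t] & present}
                cost = C_SCREEN * len(screen_set)
                for g in GMOS:
                    covering = [t for t in screen_set if g in SCREENS[t]]
                    # phase 2: identification is needed unless every covering screen was negative
                    if not covering or any(t in fired for t in covering):
                        cost += C_ID
                total += cost
            return total / n

        candidates = [S for r in range(len(SCREENS) + 1)
                      for S in itertools.combinations(SCREENS, r)]
        best = min(candidates, key=expected_cost)
        print("optimal screening phase:", best, "expected cost:", round(expected_cost(best), 2))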

  11. Game Theory and Risk-Based Levee System Design

    NASA Astrophysics Data System (ADS)

    Hui, R.; Lund, J. R.; Madani, K.

    2014-12-01

    Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, landowners on each riverbank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the landowners on each riverbank develop their design strategies using risk-based economic optimization. For each landowner, the annual expected total cost includes the expected annual damage cost and the annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
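
    A toy best-response computation shows how a non-cooperative equilibrium of two risk-optimizing landowners can differ from the planner's system optimum. The construction cost, damage value, and flood-probability model below are hypothetical; the paper's actual hydraulic and economic models are more detailed.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Hypothetical annual expected cost for landowner i with levee heights (hi, hj):
        # construction cost grows linearly; flood probability falls with own height but
        # rises as the opposite levee deflects water (risk transfer).
        C_BUILD, DAMAGE = 2.0, 100.0

        def p_flood(hi, hj):
            return 1.0 / (1.0 + np.exp(1.2 * hi - 0.6 * hj))

        def own_cost(hi, hj):
            return C_BUILD * hi + DAMAGE * p_flood(hi, hj)

        def best_response(hj):
            return minimize_scalar(lambda h: own_cost(h, hj),
                                   bounds=(0, 10), method="bounded").x

        # Iterate best responses to a (symmetric) Nash equilibrium.
        h1 = h2 = 0.0
        for _ in range(100):
            h1, h2 = best_response(h2), best_response(h1)

        # Social planner: minimize the total system cost instead.
        social = minimize_scalar(lambda h: 2 * own_cost(h, h),
                                 bounds=(0, 10), method="bounded").x
        print(f"Nash heights ~ {h1:.2f}, {h2:.2f}; planner's height ~ {social:.2f}")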

  12. An Industrial Engineering Approach to Cost Containment of Pharmacy Education.

    PubMed

    Duncan, Wendy; Bottenberg, Michelle; Chase, Marilea; Chesnut, Renae; Clarke, Cheryl; Schott, Kathryn; Torry, Ronald; Welty, Tim

    2015-11-25

    A 2-semester project explored employing teams of fourth-year industrial engineering students to optimize some of our academic management processes. Results included significant cost savings and increases in efficiency, effectiveness, and student and faculty satisfaction. While we did not adopt all of the students' recommendations, we did learn some important lessons. For example, an initial investment of time in developing a mutually clear understanding of the problems, constraints, and goals maximizes the value of industrial engineering analysis and recommendations. Overall, industrial engineering was a valuable tool for optimizing certain academic management processes.

  13. Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines

    PubMed Central

    Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing

    2014-01-01

    Scheduling problems with variable job processing times on m unrelated parallel machines are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that depends on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial-time algorithm to solve the problem. PMID:24982933
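
    The polynomial-time solvability rests on the classic observation that, with position-dependent processing times, sequencing reduces to a job-to-position assignment problem. The single-machine sketch below, with made-up base times and a power positional factor, illustrates that reduction; the paper itself handles m unrelated machines and resource allocation on top of it.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # p[j, r] = p_j * r**a (a > 0 models deterioration, a < 0 learning). A job in
        # position r is counted by itself and all n-r later jobs, so minimizing total
        # completion time is an assignment of jobs to positions.
        p = np.array([4.0, 2.0, 7.0, 3.0, 5.0])   # base processing times (hypothetical)
        a = 0.3
        n = len(p)
        r = np.arange(1, n + 1)
        C = np.outer(p, r**a) * (n - r + 1)        # C[j, r-1] = (n-r+1) * p_j * r**a
        jobs, positions = linear_sum_assignment(C)
        order = jobs[np.argsort(positions)]
        print("optimal sequence:", order, "total completion time:", C[jobs, positions].sum())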

  14. Energy-saving management modelling and optimization for lead-acid battery formation process

    NASA Astrophysics Data System (ADS)

    Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.

    2017-11-01

    In this work, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed, and an optimization model with the objective of minimizing the formation electricity cost in a single period is established. This optimization model considers several related constraints, together with two influencing factors: the conversion efficiency of the IGBT charge-and-discharge machine and the time-of-use price. An example simulation solving this mathematical model with a PSO algorithm is shown, and the proposed optimization strategy proves effective and transferable for energy saving and efficiency optimization in battery production industries.
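
    As a rough sketch of the idea, the particle swarm below schedules charging energy across time-of-use price slots so that a required formation energy is delivered at minimum electricity cost, with the charger efficiency folded in. The prices, efficiency, energy target, and penalty weight are all invented; the paper's model carries additional process constraints.

        import numpy as np

        rng = np.random.default_rng(2)
        PRICE = np.array([0.30, 0.30, 0.55, 0.90, 0.55, 0.30])  # time-of-use price per kWh (made up)
        ETA = 0.88           # assumed charger conversion efficiency
        E_REQ = 120.0        # energy (kWh) the formation batch must receive
        P_MAX = 40.0         # per-slot charging limit (kWh per slot)

        def cost(x):         # electricity cost plus penalty for missing the energy target
            x = np.clip(x, 0.0, P_MAX)
            return PRICE @ x / ETA + 10.0 * abs(x.sum() - E_REQ)

        # plain particle swarm optimization
        n, d = 30, len(PRICE)
        pos = rng.uniform(0, P_MAX, (n, d))
        vel = np.zeros((n, d))
        pbest, pval = pos.copy(), np.array([cost(x) for x in pos])
        gbest = pbest[pval.argmin()]
        for it in range(300):
            r1, r2 = rng.random((2, n, d))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, P_MAX)
            val = np.array([cost(x) for x in pos])
            improved = val < pval
            pbest[improved], pval[improved] = pos[improved], val[improved]
            gbest = pbest[pval.argmin()]
        print("schedule:", gbest.round(1), "cost:", cost(gbest).round(2))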

  15. Process and assembly plans for low cost commercial fuselage structure

    NASA Technical Reports Server (NTRS)

    Willden, Kurtis; Metschan, Stephen; Starkey, Val

    1991-01-01

    Cost and weight reduction for a composite structure is a result of selecting design concepts that can be built using efficient, low-cost manufacturing and assembly processes. Since design and manufacturing are inherently cost dependent, concurrent engineering in the form of a Design-Build Team (DBT) is essential for low-cost designs. Detailed cost analysis from DBT designs and hardware verification must be performed to identify the cost drivers and the relationships between design and manufacturing processes. Results from the global evaluation are used to quantitatively rank designs, identify cost centers for higher-ranking design concepts, define and prioritize a list of technical/economic issues and barriers, and identify parameters that control concept response. These results are then used for final design optimization.

  16. The small community solar thermal power experiment. Parabolic dish technology for industrial process heat application

    NASA Technical Reports Server (NTRS)

    Polzien, R. E.; Rodriguez, D.

    1981-01-01

    Aspects of incorporating a thermal energy transport system (ETS) into a field of parabolic dish collectors for industrial process heat (IPH) applications were investigated. Specific objectives are to: (1) verify the mathematical optimization of pipe diameters and insulation thicknesses calculated by a computer code; (2) verify the cost model for pipe network costs using conventional pipe network construction; (3) develop a design and the associated production costs for incorporating risers and downcomers on a low cost concentrator (LCC); (4) investigate the cost reduction of using unconventional pipe construction technology. The pipe network design and costs for a particular IPH application, specifically solar thermally enhanced oil recovery (STEOR) are analyzed. The application involves the hybrid operation of a solar powered steam generator in conjunction with a steam generator using fossil fuels to generate STEOR steam for wells. It is concluded that the STEOR application provides a baseline pipe network geometry used for optimization studies of pipe diameter and insulation thickness, and for development of comparative cost data, and operating parameters for the design of riser/downcomer modifications to the low cost concentrator.

  17. A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems.

    PubMed

    Li, Xuejun; Xu, Jia; Yang, Yun

    2015-01-01

    Cloud workflow systems are a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Among the features distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling is an NP-hard problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they suffer from premature convergence in the optimization process and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost; it lets the scheduling avoid premature convergence by properly balancing global and local exploration. The experimental simulation shows that the cost obtained by our scheduling is always lower than that of the other two representative counterparts.
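
    The two CPSO ingredients named in the abstract can be sketched in a few lines: a logistic-map sequence that replaces uniform random draws, and an adaptive inertia weight that shrinks for better-than-average particles. The adaptive formula below is one common form from the CPSO literature; whether it matches this paper's exact expression is an assumption.

        import numpy as np

        def logistic_sequence(n, x0=0.37, mu=4.0):
            """Chaotic logistic-map sequence on (0, 1), used in place of uniform noise."""
            xs = np.empty(n)
            for i in range(n):
                x0 = mu * x0 * (1.0 - x0)
                xs[i] = x0
            return xs

        def adaptive_inertia(cost_i, cost_min, cost_avg, w_max=0.9, w_min=0.4):
            """Smaller inertia for better-than-average particles (exploitation),
            larger for worse ones (exploration); one common adaptive form."""
            if cost_i <= cost_avg:
                return w_min + (w_max - w_min) * (cost_i - cost_min) / max(cost_avg - cost_min, 1e-12)
            return w_max

        print(logistic_sequence(5))
        print(adaptive_inertia(2.0, 1.0, 3.0))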

  18. Analysis and optimization of hybrid electric vehicle thermal management systems

    NASA Astrophysics Data System (ADS)

    Hamut, H. S.; Dincer, I.; Naterer, G. F.

    2014-02-01

    In this study, the thermal management system of a hybrid electric vehicle is optimized using single- and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined, and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are 0.29, 28 ¢/h and 77.3 mPts/h, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to the baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
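
    The LINMAP selection step itself is simple to sketch: normalize each objective over the Pareto front, form the (infeasible) ideal point, and pick the frontier point closest to it. The three candidate rows below are illustrative numbers seeded from the baseline values, not the paper's actual front.

        import numpy as np

        # Rows: candidate designs; columns: exergy efficiency (maximize), cost rate
        # and environmental impact rate (both minimize). Values are illustrative only.
        front = np.array([[0.29, 28.0, 77.3],
                          [0.33, 29.4, 88.1],
                          [0.31, 26.6, 73.4]])
        sense = np.array([+1, -1, -1])           # +1 maximize, -1 minimize
        f = front * sense                         # flip signs so larger is always better
        fn = (f - f.min(0)) / (f.max(0) - f.min(0) + 1e-12)
        ideal = fn.max(0)                         # best attainable value of each objective
        choice = np.argmin(np.linalg.norm(fn - ideal, axis=1))
        print("selected design:", front[choice])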

  20. Research on the optimization of quota design in real estate

    NASA Astrophysics Data System (ADS)

    Sun, Chunling; Ma, Susu; Zhong, Weichao

    2017-11-01

    Quota design is one of the effective methods of cost control in real estate development projects and is widely used to control engineering construction costs; however, it has many deficiencies in the design process. To address this, this paper puts forward a method for achieving investment control of real estate development projects that combines quota design with value engineering (VE) at the design stage; specifically, it optimizes the structure of quota design. First, the design limits are determined from the investment estimate; then VE is used to carry out the initial allocation of the design limits and obtain the functional target cost; finally, the whole life cycle cost (LCC) and operational problems arising in practical application are considered to complete the correction of the functional target cost. The improved process can control the project cost more effectively: it not only keeps investment within a certain range but also allows the project to realize maximum value within the investment.

  1. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article, a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finish conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between the conflicting objective functions, which helps the decision maker select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in grain size, grinding ratio, feed rate, labour cost per hour, workpiece length, wheel diameter and downfeed on the value of each objective function.
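
    The core scalarization is min_x max_i w_i |f_i(x) - z_i*|, where z* collects the individual objective minima (the ideal point). The toy two-objective problem below, solved with a local optimizer, only illustrates that scalarization; the grinding model in the paper has more objectives and machining constraints and is solved in GAMS with a polyhedral branch-and-cut algorithm.

        import numpy as np
        from scipy.optimize import minimize

        zstar = np.array([0.0, 0.0])              # ideal point of the two toy objectives
        w = np.array([0.5, 0.5])                  # Tchebycheff weights

        def objectives(x):
            f1 = (x[0] - 1) ** 2 + x[1] ** 2      # stand-in for, e.g., production time
            f2 = x[0] ** 2 + (x[1] - 2) ** 2      # stand-in for, e.g., surface roughness
            return np.array([f1, f2])

        def tchebycheff(x):
            return np.max(w * np.abs(objectives(x) - zstar))

        res = minimize(tchebycheff, x0=[0.5, 0.5], method="Nelder-Mead")
        print("compromise solution:", res.x, "objectives:", objectives(res.x))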

  2. [Cost-effectiveness of breast cancer screening policies in Mexico].

    PubMed

    Valencia-Mendoza, Atanacio; Sánchez-González, Gilberto; Bautista-Arredondo, Sergio; Torres-Mejía, Gabriela; Bertozzi, Stefano M

    2009-01-01

    The aim was to generate cost-effectiveness information to allow policy makers to optimize breast cancer (BC) policy in Mexico. We constructed a Markov model that incorporates four interrelated processes of the disease: the natural history; detection using mammography; treatment; and mortality from other, competing causes. Thirteen different strategies were modeled. The strategies (starting age, % coverage, frequency in years) = (48, 25, 2), (40, 50, 2) and (40, 50, 1) constituted the optimal path for expanding the BC program, yielding 75.3, 116.4 and 171.1 thousand pesos per life-year saved, respectively. The strategies included in the optimal expansion path produce a cost per life-year saved of less than twice the GNP per capita and hence are cost-effective according to the WHO Commission on Macroeconomics and Health criteria.
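
    A skeletal version of such a Markov cost-effectiveness comparison is sketched below: an annual-cycle cohort moves between well, undetected cancer, detected cancer, and dead states, screening coverage shifts undetected cases to detected ones, and the incremental cost per life-year saved is computed against no screening. All transition probabilities and costs are placeholders, not the calibrated Mexican model, and discounting is omitted.

        import numpy as np

        def run(p_screen, years=40):
            # annual transition matrix; p_screen moves undetected cancer to "detected"
            P = np.array([[0.990, 0.008, 0.000, 0.002],
                          [0.000, 0.850 * (1 - p_screen), 0.100 + 0.850 * p_screen, 0.050],
                          [0.000, 0.000, 0.970, 0.030],
                          [0.000, 0.000, 0.000, 1.000]])
            x = np.array([1.0, 0.0, 0.0, 0.0])     # cohort starts in "well"
            life_years, cost = 0.0, 0.0
            for _ in range(years):
                life_years += x[:3].sum()           # everyone alive this cycle
                cost += 300.0 * p_screen * x[0] + 25000.0 * x[2]  # screening + treatment
                x = x @ P
            return life_years, cost

        ly0, c0 = run(0.0)       # no screening
        ly1, c1 = run(0.5)       # 50% screening coverage
        print("cost per life-year saved:", (c1 - c0) / (ly1 - ly0))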

  3. Fairness in optimizing bus-crew scheduling process.

    PubMed

    Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei

    2017-01-01

    This work proposes a model for the crew scheduling problem for bus drivers (CSP-BD) that considers fairness, and a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions of this work are: (a) a valid approach for cases with a special cost structure and constraints considering the fairness of working time and idle time; (b) an improved algorithm incorporating a Gamma heuristic function and selection rules. The relationships among the cost components are examined with ten bus lines collected from Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfairness cost is indirectly related to the common, fixed and extra costs, and that it approaches the common and fixed costs when its coefficient is twice the common-cost coefficient. Furthermore, the longest run time for the tested bus lines, with 1108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for the CSP-BD, especially for large-scale problems.

  4. Optimal control of malaria: combining vector interventions and drug therapies.

    PubMed

    Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B

    2018-04-24

    The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.

  5. Design optimization of aircraft landing gear assembly under dynamic loading

    NASA Astrophysics Data System (ADS)

    Wong, Jonathan Y. B.

    As development cycles and prototyping iterations decrease in the aerospace industry, it is important to develop and improve practical methodologies that meet all design metrics. This research presents an efficient methodology that applies high-fidelity multidisciplinary design optimization techniques to commercial landing gear assemblies for weight reduction, cost savings, and improved structural performance under dynamic loading. Specifically, a slave link subassembly was selected as the candidate with which to explore the feasibility of this methodology. The design optimization process used in this research was divided into three main stages: setup, optimization, and redesign. The first stage involved the creation and characterization of the models used throughout this research. The slave link assembly was modelled within a simplified landing gear test, replicating the behavior of the physical system. Through extensive review of the literature and collaboration with Safran Landing Systems, the dynamic and structural behavior of the system was characterized and defined mathematically. Once defined, the characterized behaviors of the slave link assembly were used to conduct a multi-body dynamic (MBD) analysis to determine the dynamic and structural response of the system. These responses were then used in a topology optimization through the Equivalent Static Load Method (ESLM). The results of the optimization were interpreted and later used, in stage three, to generate designs improved in terms of weight, cost, and structural performance under dynamic loading. The optimized designs were then validated using the model created for the MBD analysis of the baseline design. The design generation process employed two different approaches to post-processing the topology results. The first approach implemented a close replication of the topology results, producing a design with an overall peak stress increase of 74%, weight savings of 67%, and no apparent cost savings due to the complex features present in the design. The second approach focused on realizing reciprocal benefits in cost and weight; as a result, this design achieved an overall peak stress increase of only 6%, with weight and cost savings of 36% and 60%, respectively.

  6. Distributed Wind Competitiveness Improvement Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Competitiveness Improvement Project (CIP) is a periodic solicitation through the U.S. Department of Energy and its National Renewable Energy Laboratory. Manufacturers of small and medium wind turbines are awarded cost-shared grants via a competitive process to optimize their designs, develop advanced manufacturing processes, and perform turbine testing. The goals of the CIP are to make wind energy cost competitive with other distributed generation technology and to increase the number of wind turbine designs certified to national testing standards. This fact sheet describes the CIP and the funding awarded as part of the project.

  7. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called ordinal optimization. Ordinal optimization uses goal softening and ordinal comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves optimizing the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical for optimization. The generalized ordinal optimization approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
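
    The goal-softening and ordinal-comparison steps are easy to sketch: rank a large design population with a cheap, noisy surrogate, keep a small selected set, and spend the expensive evaluation only on that set. The quadratic "true" cost and the noise level below are invented; the point is that ordinal ranking survives substantial evaluation noise.

        import numpy as np

        rng = np.random.default_rng(3)

        N = 1000
        theta = rng.uniform(0, 10, N)                         # candidate process designs
        true_cost = (theta - 6.3) ** 2                        # hypothetical ground truth (expensive model)
        cheap_estimate = true_cost + rng.normal(0, 5.0, N)    # crude, fast, noisy surrogate

        k = 20                                                 # goal softening: accept any top-k design
        selected = np.argsort(cheap_estimate)[:k]              # ordinal comparison on the cheap model
        best = selected[np.argmin(true_cost[selected])]        # expensive model only on k designs
        truly_good = set(np.argsort(true_cost)[:k])
        print("overlap with true top-20:", len(truly_good & set(selected)))
        print("chosen design:", theta[best])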

  8. SU-D-BRC-02: Application of Six Sigma Approach to Improve the Efficiency of Patient-Specific QA in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LAH, J; Shin, D; Manger, R

    Purpose: To show how the Six Sigma DMAIC (Define-Measure-Analyze-Improve-Control) cycle can be used to improve and optimize the efficiency of the patient-specific QA process by designing site-specific range tolerances. Methods: The Six Sigma tools (process flow diagram, cause and effect, capability analysis, Pareto chart, and control chart) were utilized to determine the steps that need focus for improving the patient-specific QA process. The patient-specific range QA plans were selected according to 7 treatment site groups, a total of 1437 cases. The process capability index, Cpm, was used to guide the tolerance design of the patient site-specific range. We also analyzed the financial impact of this project. Results: Our results suggested that the patient range measurements were not capable at the current tolerance level of ±1 mm in clinical proton plans. Optimized tolerances were calculated for the treatment sites. Control charts for the patient QA time were constructed to compare QA time before and after the new tolerances were implemented. Overall processing time decreased by 24.3% after establishing the new site-specific range tolerances. QA failure across the whole proton therapy process would lead to up to a 46% increase in total cost, a result that can also predict how costs are affected by changes in the tolerance design. Conclusion: We often believe that the quality and performance of proton therapy can easily be improved by merely tightening some or all of its tolerance requirements. This can become costly, however, and it is not necessarily a guarantee of better performance. Tolerance design is not a task to be undertaken without careful thought. The Six Sigma DMAIC cycle can be used to improve the QA process by setting optimized tolerances. When the tolerance design is optimized, quality is reasonably balanced with time and cost demands.
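
    The capability index driving the tolerance design is the Taguchi index Cpm = (USL - LSL) / (6 * sqrt(s^2 + (mean - T)^2)), which penalizes both spread and off-target bias. A minimal computation is sketched below; the range-error sample is illustrative, not the paper's data.

        import numpy as np

        def cpm(x, lsl, usl, target):
            """Taguchi process capability index Cpm."""
            x = np.asarray(x, dtype=float)
            tau = np.sqrt(x.var(ddof=1) + (x.mean() - target) ** 2)
            return (usl - lsl) / (6.0 * tau)

        # e.g. measured-minus-planned proton range errors (mm) for one treatment site
        errors = np.array([0.2, -0.4, 0.7, 0.1, -0.9, 0.3, 0.5, -0.2])
        print(f"Cpm at +/-1 mm tolerance: {cpm(errors, -1.0, 1.0, 0.0):.2f}")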

  9. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    The relevance of optimizing the working-process parameters of air conditioning systems is demonstrated in this work. The research is performed using the simulation modeling method. The parameter optimization criteria are considered, an analysis of the target functions is given, and the key factors of technical and economic optimization are discussed. The search for the optimal solution in the multi-objective optimization of the system is carried out by finding the minimum of a dual-target vector, formed by the Pareto method of linear and weighted compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The results show that the technical and economic parameters of air conditioning systems in areas away from the optimal solutions deviate considerably from the minimum values, and these deviations grow significantly as the technical parameters move away from the values that are optimal for both capital investment and operating costs. Producing and operating conditioners with parameters that deviate considerably from the optimal values leads to increased material and power costs. The research allows one to establish the borders of the area of optimal values for the technical and economic parameters in air conditioning system design.

  11. Cost-performance analysis of nutrient removal in a full-scale oxidation ditch process based on kinetic modeling.

    PubMed

    Li, Zheng; Qi, Rong; Wang, Bo; Zou, Zhe; Wei, Guohong; Yang, Min

    2013-01-01

    A full-scale oxidation ditch process for treating sewage was simulated with the ASM2d model and optimized for minimal cost with acceptable performance in terms of ammonium and phosphorus removal. A unified index was introduced by integrating operational costs (aeration energy and sludge production) with effluent violations for performance evaluation. Scenario analysis showed that, in comparison with the baseline (all of the 9 aerators activated), the strategy of activating 5 aerators could save aeration energy significantly with an ammonium violation below 10%. Sludge discharge scenario analysis showed that a sludge discharge flow of 250-300 m3/day (solid retention time (SRT), 13-15 days) was appropriate for the enhancement of phosphorus removal without excessive sludge production. The proposed optimal control strategy was: activating 5 rotating disks operated with a mode of "111100100" ("1" represents activation and "0" represents inactivation) for aeration and sludge discharge flow of 200 m3/day (SRT, 19 days). Compared with the baseline, this strategy could achieve ammonium violation below 10% and TP violation below 30% with substantial reduction of aeration energy cost (46%) and minimal increment of sludge production (< 2%). This study provides a useful approach for the optimization of process operation and control.

  12. [Cost management: the implementation of the activity-based costing method in sterile processing department].

    PubMed

    Jericó, Marli de Carvalho; Castilho, Valéria

    2010-09-01

    This exploratory case study was performed with the aim of implementing the activity-based costing (ABC) method in the sterile processing department (SPD) of a major teaching hospital. Data collection was performed throughout 2006. Documentary research techniques and non-participant closed observation were used. The ABC implementation allowed the activity-based cost of both the chemical and physical disinfection cycle/load to be established (US$ 9.95 and US$ 12.63, respectively), as well as the cost of sterilization by steam under pressure (autoclave) (US$ 31.37) and by low-temperature steam and gaseous formaldehyde sterilization (LTSF) (US$ 255.28). The information provided by the ABC method improved the overall understanding of the cost-driving processes and provided the foundation for assessing performance and improvement in SPD processes.

  13. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.

  14. A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Rosu, Serban

    2011-09-01

    This article presents a new approach to some fundamental techniques for solving dynamic programming problems through the use of functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.

  15. On Asymptotic Behaviour and W^{2,p} Regularity of Potentials in Optimal Transportation

    NASA Astrophysics Data System (ADS)

    Liu, Jiakun; Trudinger, Neil S.; Wang, Xu-Jia

    2015-03-01

    In this paper we study local properties of cost and potential functions in optimal transportation. We prove that in a proper normalization process, the cost function is uniformly smooth and converges locally smoothly to the quadratic cost x · y, while the potential function converges to a quadratic function. As applications we obtain the interior W^{2,p} estimates and sharp C^{1,α} estimates for the potentials, which satisfy a Monge-Ampère type equation. The W^{2,p} estimate was previously proved by Caffarelli for the quadratic transport cost and the associated standard Monge-Ampère equation.

  16. A Low Cost Structurally Optimized Design for Diverse Filter Types

    PubMed Central

    Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar

    2016-01-01

    A wide range of image processing applications deploy two-dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment; it thus calls for optimized solutions. Mostly, the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to only a specific filter type. These narrowly scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness while implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters to effectually reduce their computational cost, with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectually reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry-based subtypes and also its special asymmetric filter case. The two-fold optimized framework thus reduces the filter computational cost by up to 75% compared with the conventional approach, while its versatility attribute not only supports diverse filter types but also offers further cost reduction via resource sharing for sequential implementation of diversified image processing applications, especially in a constrained environment. PMID:27832133
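
    The multiplier saving from quadrant symmetry can be demonstrated directly: when h[i, j] = h[|i|, |j|], symmetric input samples can be summed before multiplication, so a (2k+1)x(2k+1) kernel needs only (k+1)^2 multiplies per output instead of (2k+1)^2 (9 instead of 25 for a 5x5 kernel, a 64% reduction, in line with the up-to-75% figure for the combined symmetries). The kernel and patch below are made up for the check; this sketches only the quadrant-symmetry identity, not the paper's full composite structure.

        import numpy as np

        def folded_response(patch, q):
            """Apply a quadrant-symmetric (2k+1)x(2k+1) filter to one patch using
            only the (k+1)x(k+1) unique coefficients q[i, j] = h[|i|, |j|]:
            symmetric samples are summed first, so each unique coefficient costs
            one multiplication instead of up to four."""
            k = q.shape[0] - 1
            c = k                                   # center index of the patch
            out = 0.0
            for i in range(k + 1):
                for j in range(k + 1):
                    rows = {c - i, c + i}           # symmetric partners (may coincide)
                    cols = {c - j, c + j}
                    s = sum(patch[r, cl] for r in rows for cl in cols)
                    out += q[i, j] * s              # one multiply per unique coefficient
            return out

        # 5x5 quadrant-symmetric kernel: 9 unique coefficients instead of 25
        q = np.array([[4.0, 2.0, 1.0],
                      [2.0, 1.5, 0.5],
                      [1.0, 0.5, 0.25]])
        patch = np.arange(25, dtype=float).reshape(5, 5)
        full = np.empty((5, 5))
        for i in range(5):
            for j in range(5):
                full[i, j] = q[abs(i - 2), abs(j - 2)]
        assert np.isclose(folded_response(patch, q), (full * patch).sum())
        print("unique multiplies per output: 9 (folded) vs 25 (direct)")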

  17. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.

  18. Optimizing Teleportation Cost in Distributed Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh

    2018-03-01

    The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations which do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future works in the distributed quantum circuits.
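
    The configuration search described here can be mimicked on a toy scale: for each global gate, choose the partition in which it executes, track where each qubit currently resides, and count one teleportation whenever an operand must be moved; then take the cheapest configuration by brute force. The gate list, the two-partition layout, and the simplified rule that a moved qubit simply stays put are assumptions for illustration.

        import itertools

        # Qubits 0,1 live in partition A; 2,3 in B. Each global gate acts on one
        # A-qubit and one B-qubit and may be executed in either partition.
        HOME = {0: "A", 1: "A", 2: "B", 3: "B"}
        GATES = [(0, 2), (0, 3), (1, 2), (0, 2)]      # two-qubit gates (invented)

        def teleportations(sides):
            loc = dict(HOME)                           # current location of every qubit
            n = 0
            for (qa, qb), side in zip(GATES, sides):
                for q in (qa, qb):
                    if loc[q] != side:                 # operand must be teleported over
                        loc[q] = side
                        n += 1
            return n

        best = min(itertools.product("AB", repeat=len(GATES)), key=teleportations)
        print("execution sides:", best, "teleportations:", teleportations(best))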

  19. Optimal design of reverse osmosis module networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maskan, F.; Wiley, D.E.; Johnston, L.P.M.

    2000-05-01

    The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, the capital cost of the process units, and the operating costs associated with energy consumption and maintenance. Several dual-stage reverse osmosis systems were optimized and compared. It was found that the optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.

  20. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE PAGES

    Paszyńska, A.; Paszyński, M.; Jopek, K.; ...

    2015-01-01

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  2. Sizing a rainwater harvesting cistern by minimizing costs

    NASA Astrophysics Data System (ADS)

    Pelak, Norman; Porporato, Amilcare

    2016-10-01

    Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
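
    In the same spirit, a small numerical stand-in for the trade-off (not the paper's closed-form result; all parameters are invented) can locate the cost-minimizing volume by simulating the marked Poisson rainfall directly:

        import numpy as np
        from scipy.optimize import minimize_scalar

        LAM, ALPHA   = 0.25, 10.0   # rain events/day, mean event depth (mm)
        ROOF, DEMAND = 100.0, 0.15  # roof area (m^2), nonpotable use (m^3/day)
        C_CISTERN    = 15.0         # $ per m^3 of storage per year (annualized)
        C_WATER      = 5.0          # $ per m^3 of externally supplied water

        def external_water(volume, days=365, reps=200):
            """Mean external water bought per year, by daily simulation."""
            rng = np.random.default_rng(0)   # common random numbers keep the
            total = 0.0                      # objective smooth across volumes
            for _ in range(reps):
                store, bought = 0.0, 0.0
                for _ in range(days):
                    if rng.random() < LAM:   # a Poisson rain arrival today
                        depth_m3 = rng.exponential(ALPHA) / 1000.0 * ROOF
                        store = min(volume, store + depth_m3)
                    use = min(store, DEMAND)
                    bought += DEMAND - use
                    store -= use
                total += bought
            return total / reps

        cost = lambda v: C_CISTERN * v + C_WATER * external_water(v)
        res = minimize_scalar(cost, bounds=(0.5, 30.0), method="bounded")
        print(f"optimal cistern volume: {res.x:.1f} m^3")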

  3. Continuous Fiber Ceramic Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fareed, Ali; Craig, Phillip A.

    2002-09-01

    Fiber-reinforced ceramic composites demonstrate the high-temperature stability of ceramics--with an increased fracture toughness resulting from the fiber reinforcement of the composite. The material optimization performed under the continuous fiber ceramic composites (CFCC) program included a series of systematic optimizations. The overall goals were to define the processing window, to increase the robustness of the process, to increase process yield while reducing costs, and to define the complexity of parts that could be fabricated.

  4. Integrated Arrival and Departure Schedule Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2014-01-01

    In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
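
    The core loop of such a study can be sketched in a few lines (a toy stand-in, not the authors' scheduler): each candidate schedule, reduced here to a single spacing-buffer parameter with an invented cost model, is scored by Monte Carlo on two stochastic costs, and the non-dominated set is kept.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(buffer, trials=2000):
            """Monte Carlo estimate of (delay, interventions) for one schedule."""
            err = rng.normal(0.0, 1.0, trials)           # arrival-time noise
            delay = buffer + np.maximum(0.0, err - buffer).mean()
            interventions = (err > buffer).mean() * 100  # conflicts to resolve
            return delay, interventions

        candidates = np.linspace(0.0, 3.0, 31)           # buffer sizes to try
        costs = np.array([simulate(b) for b in candidates])

        def pareto_front(points):
            """Indices of non-dominated points (both costs minimized)."""
            return [i for i, p in enumerate(points)
                    if not any((q <= p).all() and (q < p).any()
                               for j, q in enumerate(points) if j != i)]

        for i in pareto_front(costs):
            print(f"buffer={candidates[i]:.1f}  delay={costs[i, 0]:.2f}  "
                  f"interventions={costs[i, 1]:.1f}")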

  5. Optimality study of a gust alleviation system for light wing-loading STOL aircraft

    NASA Technical Reports Server (NTRS)

    Komoda, M.

    1976-01-01

    An analytical study was made of an optimal gust alleviation system that employs a vertical gust sensor mounted forward of an aircraft's center of gravity. Frequency domain optimization techniques were employed to synthesize the optimal filters that process the corrective signals to the flaps and elevator actuators. Special attention was given to evaluating the effectiveness of lead time, that is, the time by which relative wind sensor information should lead the actual encounter of the gust. The resulting filter is expressed as an implicit function of the prescribed control cost. A numerical example for a light wing loading STOL aircraft is included in which the optimal trade-off between performance and control cost is systematically studied.

  6. Cost model relationships between textile manufacturing processes and design details for transport fuselage elements

    NASA Technical Reports Server (NTRS)

    Metschan, Stephen L.; Wilden, Kurtis S.; Sharpless, Garrett C.; Andelman, Rich M.

    1993-01-01

    Textile manufacturing processes offer potential cost and weight advantages over traditional composite materials and processes for transport fuselage elements. In the current study, design cost modeling relationships between textile processes and element design details were developed. Such relationships are expected to help future aircraft designers to make timely decisions on the effect of design details and overall configurations on textile fabrication costs. The fundamental advantage of a design cost model is to ensure that the element design is cost effective for the intended process. Trade studies on the effects of processing parameters also help to optimize the manufacturing steps for a particular structural element. Two methods of analyzing design detail/process cost relationships developed for the design cost model were pursued in the current study. The first makes use of existing databases and alternative cost modeling methods (e.g. detailed estimating). The second compares design cost model predictions with data collected during the fabrication of seven foot circumferential frames for ATCAS crown test panels. The process used in this case involves 2D dry braiding and resin transfer molding of curved 'J' cross section frame members having design details characteristic of the baseline ATCAS crown design.

  7. DoD Lead System Integrator (LSI) Transformation - Creating a Model Based Acquisition Framework (MBAF)

    DTIC Science & Technology

    2014-04-30

    ... cost to acquire systems as design maturity could be verified incrementally as the system was developed vice waiting for specific large "big bang" ... [could the Model Based Acquisition] Framework (MBAF) be applied to simulate or optimize process variations on programs? ... LSI Roles and Responsibilities: A review of the roles and ... the model/process optimization process. It is the current intent that NAVAIR will use the model to run simulations on process changes in an attempt to ...

  8. Modified allocation capacitated planning model in blood supply chain management

    NASA Astrophysics Data System (ADS)

    Mansur, A.; Vanany, I.; Arvitrida, N. I.

    2018-04-01

    Blood supply chain management (BSCM) is a complex process management problem that involves many cooperating stakeholders. BSCM involves four echelon processes: blood collection or procurement, production, inventory, and distribution. This research develops an optimization model for blood distribution planning. The efficiency of decentralization and centralization policies in a blood distribution chain is compared by optimizing the amount of blood delivered from a blood center to a blood bank. The model is developed from the allocation problem of a capacitated planning model. In the first stage, the capacity and cost of transportation are considered to create an initial capacitated planning model. Then, inventory holding and shortage costs are added to the model. These additional inventory cost parameters make the model more realistic and accurate.
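
    The allocation core of such a model fits naturally into a small linear program. The sketch below uses invented numbers (and charges holding per unit shipped, a crude simplification): one blood center ships to three banks, trading transport cost against a shortage penalty.

        import numpy as np
        from scipy.optimize import linprog

        transport = np.array([2.0, 3.5, 5.0])   # $ per unit to bank j
        holding, shortage = 0.5, 20.0           # $ per unit held / unmet
        demand = np.array([80.0, 50.0, 60.0])
        supply = 150.0                          # units at the blood center

        # Variables: [x1, x2, x3, s1, s2, s3] = shipments and shortages.
        c = np.concatenate([transport + holding, np.full(3, shortage)])

        # x_j + s_j >= demand_j  ->  -x_j - s_j <= -demand_j
        A_ub = np.hstack([-np.eye(3), -np.eye(3)])
        b_ub = -demand
        # sum_j x_j <= supply
        A_ub = np.vstack([A_ub, np.concatenate([np.ones(3), np.zeros(3)])])
        b_ub = np.append(b_ub, supply)

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
        print("shipments:", res.x[:3], " shortages:", res.x[3:])

    Because the shortage penalty dominates, the solver fills demand in order of increasing transport cost and books the 40-unit deficit at the most expensive bank.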

  9. Open pit mining profit maximization considering selling stage and waste rehabilitation cost

    NASA Astrophysics Data System (ADS)

    Muttaqin, B. I. A.; Rosyidi, C. N.

    2017-11-01

    In open pit mining activities, determination of the cut-off grade becomes crucial since the cut-off grade affects how much profit the mining company will earn. In this study, we developed a cut-off grade determination model for the open pit mining industry considering the cost of mining, waste removal (rehabilitation) cost, processing cost, fixed cost, and selling stage cost. The main goal of this study is to develop a model of cut-off grade determination that maximizes total profit. Second, the study examines the sensitivity of the model to changes in the cost components. The optimization results show that the models can help mining company managers determine the optimal cut-off grade and also estimate how much profit the mining company can earn. To illustrate the application of the models, a numerical example and a set of sensitivity analyses are presented. From the results of the sensitivity analysis, we conclude that changes in the sales price greatly affect the optimal cut-off value and the total profit.
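
    A toy version of the cut-off grade trade-off makes the structure concrete. In the sketch below (all parameters invented; an exponential grade-tonnage curve is assumed), raising the cut-off sends more tonnage to waste rehabilitation but raises the average processed grade, and a simple grid search locates the profit-maximizing balance:

        import numpy as np

        PRICE, RECOVERY = 60.0, 0.90     # $ per kg metal after selling stage
        MINE_COST  = 3.0                 # $ per tonne mined (ore + waste)
        PROC_COST  = 12.0                # $ per tonne processed (ore only)
        REHAB_COST = 1.5                 # $ per tonne of waste rehabilitated
        RESERVE    = 1.0e6               # tonnes in the deposit
        MEAN_GRADE = 1.2                 # kg/t, exponential grade distribution

        def profit(cutoff):
            ore = RESERVE * np.exp(-cutoff / MEAN_GRADE)  # tonnage above cutoff
            waste = RESERVE - ore
            avg_grade = cutoff + MEAN_GRADE               # memoryless property
            revenue = PRICE * RECOVERY * avg_grade * ore
            costs = MINE_COST * RESERVE + PROC_COST * ore + REHAB_COST * waste
            return revenue - costs

        grid = np.linspace(0.0, 4.0, 401)
        best = grid[np.argmax([profit(g) for g in grid])]
        print(f"optimal cut-off: {best:.2f} kg/t, profit: {profit(best):,.0f} $")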

  10. Designing for Cost

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.; Unal, Resit

    1991-01-01

    Designing for cost is a state of mind. Of course, a lot of technical knowledge is required and the use of appropriate tools will improve the process. Unfortunately, the extensive use of weight-based cost estimating relationships has generated a perception in the aerospace community that the primary way to reduce cost is to reduce weight. Wrong! Based upon an approximation of an industry-accepted formula, the PRICE H (tm) production equation, Dean demonstrated theoretically that the optimal trajectory for cost reduction is predominantly in the direction of system complexity reduction, not system weight reduction. Thus the phrase "keep it simple" is a primary state of mind required for reducing cost throughout the design process.

  11. Process Design and Techno-economic Analysis for Materials to Treat Produced Waters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heimer, Brandon Walter; Paap, Scott M; Sasan, Koroush

    Significant quantities of water are produced during enhanced oil recovery, making these "produced water" streams attractive candidates for treatment and reuse. However, high concentrations of dissolved silica raise the propensity for fouling. In this paper, we report the design and economic analysis for a new ion exchange process using calcined hydrotalcite (HTC) to remove silica from water. This process improves upon known technologies by minimizing sludge product, reducing process fouling, and lowering energy use. Process modeling outputs included raw material requirements, energy use, and the minimum water treatment price (MWTP). Monte Carlo simulations quantified the impact of uncertainty and variability in process inputs on MWTP. These analyses showed that cost can be significantly reduced if the HTC materials are optimized. Specifically, R&D improving HTC reusability, silica binding capacity, and raw material price can reduce MWTP by 40%, 13%, and 20%, respectively. Optimizing geographic deployment further improves cost competitiveness.

  12. Optimal teaching strategy in periodic impulsive knowledge dissemination system.

    PubMed

    Liu, Dan-Qing; Wu, Zhen-Qiang; Wang, Yu-Xin; Guo, Qiang; Liu, Jian-Guo

    2017-01-01

    Accurately describing the knowledge dissemination process is significant for enhancing the performance of personalized education. In this study, considering the effect of periodic teaching activities on the learning process, we propose a periodic impulsive knowledge dissemination system to represent the knowledge dissemination process. Meanwhile, we put forward learning effectiveness, the outcome of a trade-off between the benefits and costs of knowledge dissemination, as the objective function. Further, we investigate the optimal teaching strategy that maximizes learning effectiveness, to obtain the optimal effect of knowledge dissemination under the teaching activities. We solve this dynamic optimization problem with optimal control theory and obtain the optimization system. Finally, we numerically solve this system in several practical examples to make the conclusions intuitive and specific. The optimal teaching strategy proposed in this paper can be applied widely to optimization problems in personal education and is beneficial for enhancing the effect of knowledge dissemination.

  14. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
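
    A stripped-down genetic algorithm over the two operating variables named above, filtration time and relaxation time, shows the mechanics; the cost surface here is an invented stand-in for the calibrated GPS-X/ASM1 model, not a substitute for it.

        import numpy as np

        rng = np.random.default_rng(2)

        def cost(x):
            filt, relax = x                              # minutes
            uptime = filt / (filt + relax)               # fraction filtering
            fouling = 0.02 * filt**2 / max(relax, 0.1)   # long runs foul more
            return -10.0 * uptime + fouling + 0.5 * relax

        LOW, HIGH = np.array([1.0, 0.2]), np.array([15.0, 5.0])
        pop = rng.uniform(LOW, HIGH, size=(30, 2))

        for gen in range(60):
            fitness = np.array([cost(p) for p in pop])
            elite = pop[np.argsort(fitness)[:10]]              # elitism
            parents = elite[rng.integers(0, 10, size=(20, 2))]
            children = parents.mean(axis=1)                    # blend crossover
            children += rng.normal(0.0, 0.3, children.shape)   # mutation
            pop = np.clip(np.vstack([elite, children]), LOW, HIGH)

        best = pop[np.argmin([cost(p) for p in pop])]
        print("filtration, relaxation (min):", best.round(2))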

  15. Topology optimization of reduced rare-earth permanent magnet arrays with finite coercivity

    NASA Astrophysics Data System (ADS)

    Teyber, R.; Trevizoli, P. V.; Christiaanse, T. V.; Govindappa, P.; Rowe, A.

    2018-05-01

    The supply chain risk of rare-earth permanent magnets has yielded research efforts to improve both materials and magnetic circuits. While a number of magnet optimization techniques exist, the literature has not incorporated the permanent magnet failure process stemming from finite coercivity. To address this, a mixed-integer topology optimization is formulated to maximize the flux density of a segmented Halbach cylinder while avoiding permanent demagnetization. The numerical framework is used to assess the efficacy of low-cost (rare-earth-free ferrite C9), medium-cost (rare-earth-free MnBi), and higher-cost (Dy-free NdFeB) permanent magnet materials. Novel magnet designs are generated that produce flux densities 70% greater than the segmented Halbach array, albeit with increased magnet mass. Three optimization formulations are then explored using ferrite C9, demonstrating the trade-off between manufacturability and design sophistication and generating flux densities in the range of 0.366-0.483 T.

  16. Techno-economical evaluation of protein extraction for microalgae biorefinery

    NASA Astrophysics Data System (ADS)

    Sari, Y. W.; Sanders, J. P. M.; Bruins, M. E.

    2016-01-01

    Due to the scarcity of fossil feedstocks, there is an increasing demand for biobased fuels. Microalgae are considered promising biobased feedstocks. However, microalgae-based fuels are not yet produced at large scale. Applying biorefinery, not only for oil but also for other components such as carbohydrates and protein, may lead to sustainable and economical microalgae-based fuels. This paper discusses two relatively mild conditions for microalgal protein extraction, based on alkali and enzymes. Green microalgae (Chlorella fusca) with and without prior lipid removal were used as feedstocks. Under mild conditions, more protein could be extracted using proteases, with the highest yields for microalgae meal (without lipids). The data on protein extraction yields were used to calculate the costs for producing 1 ton of microalgal protein. The processing cost for the alkaline method was €2448/ton protein. The enzymatic method performed better from an economic point of view, at €1367/ton protein in processing costs. However, this is still far from industrially feasible. For both extraction methods, biomass cost per ton of produced product was high. A higher protein extraction yield can partially solve this problem, lowering processing costs to €620 and €1180/ton protein product for alkali and enzyme, respectively. Although the alkaline method then has the lower processing cost, optimization appears more achievable using enzymes: if the enzymatic method can be optimized by lowering the amount of alkali added, the processing cost falls to €633/ton protein product. Higher revenue can be generated when the residue after protein extraction can be sold as fuel or, better, as a highly digestible feed for cattle.
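
    The cost arithmetic at the heart of such an evaluation is simple enough to check by hand. The sketch below (assumed numbers, not the paper's dataset) shows the key lever the authors exploit: cost per ton of protein product scales inversely with the extraction yield.

        # Assumed inputs, for illustration only.
        biomass_price   = 400.0   # EUR per ton of microalgae meal
        other_costs     = 250.0   # EUR per ton of biomass processed
        protein_content = 0.45    # ton protein per ton biomass

        for extraction_yield in (0.30, 0.60, 0.90):
            protein_out = protein_content * extraction_yield
            cost_per_ton = (biomass_price + other_costs) / protein_out
            print(f"yield {extraction_yield:.0%}: "
                  f"{cost_per_ton:,.0f} EUR/ton protein product")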

  17. Method and Process Development of Advanced Atmospheric Plasma Spraying for Thermal Barrier Coatings

    NASA Astrophysics Data System (ADS)

    Mihm, Sebastian; Duda, Thomas; Gruner, Heiko; Thomas, Georg; Dzur, Birger

    2012-06-01

    Over the last few years, global economic growth has triggered a dramatic increase in the demand for resources, resulting in steady rise in prices for energy and raw materials. In the gas turbine manufacturing sector, process optimizations of cost-intensive production steps involve a heightened potential of savings and form the basis for securing future competitive advantages in the market. In this context, the atmospheric plasma spraying (APS) process for thermal barrier coatings (TBC) has been optimized. A constraint for the optimization of the APS coating process is the use of the existing coating equipment. Furthermore, the current coating quality and characteristics must not change so as to avoid new qualification and testing. Using experience in APS and empirically gained data, the process optimization plan included the variation of e.g. the plasma gas composition and flow-rate, the electrical power, the arrangement and angle of the powder injectors in relation to the plasma jet, the grain size distribution of the spray powder and the plasma torch movement procedures such as spray distance, offset and iteration. In particular, plasma properties (enthalpy, velocity and temperature), powder injection conditions (injection point, injection speed, grain size and distribution) and the coating lamination (coating pattern and spraying distance) are examined. The optimized process and resulting coating were compared to the current situation using several diagnostic methods. The improved process significantly reduces costs and achieves the requirement of comparable coating quality. Furthermore, a contribution was made towards better comprehension of the APS of ceramics and the definition of a better method for future process developments.

  18. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
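
    The orthogonal-array bookkeeping is compact enough to sketch. Below, an L4(2^3) array covers three two-level control factors in four runs; replicated responses under noise are collapsed into a larger-the-better signal-to-noise ratio, and level averages point to the robust settings. The array is standard; the response values are invented.

        import numpy as np

        L4 = np.array([[0, 0, 0],    # rows: experiments
                       [0, 1, 1],    # columns: factor levels
                       [1, 0, 1],
                       [1, 1, 0]])

        # Two replicated responses per run under noise (invented data).
        y = np.array([[21.0, 20.4], [24.1, 23.3], [19.8, 18.9], [26.0, 25.2]])

        # Larger-the-better: SN = -10 log10( mean(1 / y^2) )
        sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

        for factor in range(3):
            lvl0 = sn[L4[:, factor] == 0].mean()
            lvl1 = sn[L4[:, factor] == 1].mean()
            print(f"factor {factor}: SN(level 0) = {lvl0:.2f}, "
                  f"SN(level 1) = {lvl1:.2f}")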

  19. Computer-Aided Process Model For Carbon/Phenolic Materials

    NASA Technical Reports Server (NTRS)

    Letson, Mischell A.; Bunker, Robert C.

    1996-01-01

    Computer program implements a thermochemical model of the processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving fabrication of rocket-engine-nozzle parts, it is also used to optimize fabrication of other structural components, and its material-property parameters can be changed to apply to other materials. Reduces costs by reducing the amount of laboratory trial and error needed to optimize curing processes and to predict properties of cured parts.

  20. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum-weight and minimum-cost concepts. The operational advantages of the collaborative optimization architecture are also discussed.

  1. Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc

    NASA Astrophysics Data System (ADS)

    Becker, Peter; Plesea, Lucian; Maurer, Thomas

    2016-06-01

    The volume and number of geospatial images being collected continue to increase exponentially with the ever increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and information extracted from it. Cloud-based object storage offers significantly lower-cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details on a solution utilizing a new open image format for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression scheme that can be used with MRF and provides very good lossless and controlled lossy compression.

  2. Energy Center Structure Optimization by using Smart Technologies in Process Control System

    NASA Astrophysics Data System (ADS)

    Shilkina, Svetlana V.

    2018-03-01

    The article deals with the practical application of fuzzy logic methods in process control systems. The control object considered is an agro-industrial greenhouse complex that includes its own energy center. The paper analyzes the object's power supply options, taking into account connection to external power grids and/or installation of its own power generating equipment in various layouts. The main problem of the basic greenhouse process is extremely uneven power consumption, which forces the purchase of redundant generating equipment that idles most of the time and significantly hurts project profitability. Optimizing the energy center structure rests largely on solving the design of the object's process control system. To cut the investor's costs, it was proposed to optimize power consumption by building an energy-saving production control system based on a fuzzy logic controller. The developed control algorithm ensured more even electric and thermal energy consumption and made it possible to build the energy center with fewer units owing to their more even utilization. As a result, it is shown how practical use of a fuzzy microclimate control system leads to an optimized structure for the agro-industrial complex's energy facility, contributing to a significant reduction in construction and operating costs.
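
    The flavor of such a controller can be shown in a few lines. The sketch below is a generic Sugeno-style fuzzy heating rule with assumed membership functions and rule outputs, not the system described in the article; it maps a temperature error to a smooth heater command instead of on/off power spikes.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with corners (a, b, c)."""
            return max(min((x - a) / (b - a + 1e-9),
                           (c - x) / (c - b + 1e-9)), 0.0)

        def heater_command(error):   # error = setpoint - measured, deg C
            cold = tri(error,  0.5,  3.0,  6.0)
            ok   = tri(error, -2.0,  0.0,  2.0)
            hot  = tri(error, -6.0, -3.0, -0.5)
            # Rules: cold -> 100% power, ok -> 30%, hot -> 0%; defuzzify
            # by a weighted average of the rule outputs.
            w = np.array([cold, ok, hot])
            out = np.array([100.0, 30.0, 0.0])
            return float(w @ out / (w.sum() + 1e-9))

        for e in (-4.0, -1.0, 0.0, 1.5, 4.0):
            print(f"error {e:+.1f} C -> heater {heater_command(e):5.1f} %")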

  3. Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture.

    PubMed

    Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S

    2014-10-01

    This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.

  5. REopt: A Platform for Energy System Integration and Optimization: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkins, T.; Cutler, D.; Anderson, K.

    2014-08-01

    REopt is NREL's energy planning platform offering concurrent, multi-technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals. The REopt platform provides techno-economic decision-support analysis throughout the energy planning process, from agency-level screening and macro planning to project development to energy asset operation. REopt employs an integrated approach to optimizing a site's energy costs by considering electricity and thermal consumption, resource availability, complex tariff structures including time-of-use, demand and sell-back rates, incentives, net-metering, and interconnection limits. Formulated as a mixed integer linear program, REopt recommends an optimally-sized mix of conventional and renewable energy, and energy storage technologies; estimates the net present value associated with implementing those technologies; and provides the cost-optimal dispatch strategy for operating them at maximum economic efficiency. The REopt platform can be customized to address a variety of energy optimization scenarios including policy, microgrid, and operational energy applications. This paper presents the REopt techno-economic model along with two examples of recently completed analysis projects.
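
    A miniature mixed-integer program in the same spirit (a toy with invented coefficients, not REopt's formulation) chooses a continuous PV size and an integer generator count to cover a peak load at minimum annualized cost:

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        PV_COST, GEN_COST = 100.0, 8000.0   # $/kW-yr and $/generator-yr
        PV_PEAK_CREDIT    = 0.4             # kW of peak met per kW of PV
        GEN_SIZE          = 50.0            # kW per generator
        PEAK_LOAD         = 180.0           # kW to be covered

        c = np.array([PV_COST, GEN_COST])   # minimize annualized cost
        # PV_PEAK_CREDIT * x_pv + GEN_SIZE * n_gen >= PEAK_LOAD
        cons = LinearConstraint([[PV_PEAK_CREDIT, GEN_SIZE]], lb=[PEAK_LOAD])
        res = milp(c, constraints=cons,
                   integrality=np.array([0, 1]),   # n_gen must be integer
                   bounds=Bounds([0, 0], [500, 10]))
        print("PV (kW):", res.x[0].round(1),
              " generators:", int(round(res.x[1])))

    The integrality of the generator count is what makes this a MILP rather than an LP; with these invented numbers the solver lands on three generators plus 75 kW of PV.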

  6. Guidebook for solar process-heat applications

    NASA Astrophysics Data System (ADS)

    Fazzolare, R.; Mignon, G.; Campoy, L.; Luttmann, F.

    1981-01-01

    The potential for solar process heat in Arizona and some of the general technical aspects of solar, such as insolation, siting, and process analysis, are explored. Major aspects of a solar plant design are presented. Collectors, storage, and heat exchange are discussed. Relating hardware costs to annual dollar benefits is also discussed. Rate of return, cash flow, and payback are discussed as they relate to solar systems. Design analysis procedures are presented. Design cost optimization using a yearly computer simulation of solar process operation is demonstrated.

  7. The cost and management of different types of clinical mastitis in dairy cows estimated by dynamic programming.

    PubMed

    Cha, E; Bar, D; Hertl, J A; Tauer, L W; Bennett, G; González, R N; Schukken, Y H; Welcome, F L; Gröhn, Y T

    2011-09-01

    The objective of this study was to estimate the cost of 3 different types of clinical mastitis (CM) (caused by gram-positive bacteria, gram-negative bacteria, and other organisms) at the individual cow level and thereby identify the economically optimal management decision for each type of mastitis. We made modifications to an existing dynamic optimization and simulation model, studying the effects of various factors (incidence of CM, milk loss, pregnancy rate, and treatment cost) on the cost of different types of CM. The average costs per case (US$) of gram-positive, gram-negative, and other CM were $133.73, $211.03, and $95.31, respectively. This model provided a more informed decision-making process in CM management for optimal economic profitability and determined that 93.1% of gram-positive CM cases, 93.1% of gram-negative CM cases, and 94.6% of other CM cases should be treated. The main contributor to the total cost per case was treatment cost for gram-positive CM (51.5% of the total cost per case), milk loss for gram-negative CM (72.4%), and treatment cost for other CM (49.2%). The model affords versatility as it allows for parameters such as production costs, economic values, and disease frequencies to be altered. Therefore, cost estimates are the direct outcome of the farm-specific parameters entered into the model. Thus, this model can provide farmers economically optimal guidelines specific to their individual cows suffering from different types of CM. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
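
    The treat-or-replace comparison inside such a dynamic program reduces, for a single cow and stage, to an expected-value calculation. The sketch below uses invented probabilities and dollar values (stand-ins for the paper's farm-specific parameters) and reproduces the qualitative finding that treatment usually pays:

        cases = {
            "gram-positive": dict(treat=120.0, cure=0.80, milk_loss=60.0),
            "gram-negative": dict(treat=90.0,  cure=0.65, milk_loss=150.0),
            "other":         dict(treat=70.0,  cure=0.75, milk_loss=40.0),
        }
        FUTURE_VALUE = 900.0   # expected future profit of a cured cow (assumed)
        SALVAGE      = 500.0   # value if the cow is replaced instead (assumed)

        for name, p in cases.items():
            treat_ev = (-p["treat"] - p["milk_loss"]
                        + p["cure"] * FUTURE_VALUE
                        + (1 - p["cure"]) * SALVAGE)
            action = "treat" if treat_ev > SALVAGE else "replace"
            print(f"{name:>13}: treat EV = {treat_ev:6.1f}, "
                  f"replace EV = {SALVAGE:6.1f} -> {action}")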

  8. Mathematical simulation and optimization of cutting mode in turning of workpieces made of nickel-based heat-resistant alloy

    NASA Astrophysics Data System (ADS)

    Bogoljubova, M. N.; Afonasov, A. I.; Kozlov, B. N.; Shavdurov, D. E.

    2018-05-01

    A predictive simulation technique for optimal cutting modes in the turning of workpieces made of nickel-based heat-resistant alloys, distinct from well-known approaches, is proposed. The paper analyzes the impact of various factors on the cutting process in order to determine optimal machining parameters according to chosen effectiveness criteria. A mathematical optimization model, algorithms and computer programmes, and visual graphical forms reflecting the dependence of the effectiveness criteria (productivity, net cost, and tool life) on the parameters of the technological process have been worked out. A nonlinear model for multidimensional functions, "solution of the equation with multiple unknowns", "a coordinate descent method" and heuristic algorithms are adopted to solve the problem of optimizing the cutting mode parameters. Research shows that in machining of workpieces made from the heat-resistant alloy AISI N07263, the highest possible productivity is achieved with the following parameters: cutting speed v = 22.1 m/min, feed rate s = 0.26 mm/rev, tool life T = 18 min, and net cost of 2.45 per hour.
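
    The coordinate-descent idea the authors mention is easy to exhibit on a toy machining-cost model (an extended-Taylor-style tool-life law is assumed; the coefficients are invented, not the paper's): each sweep minimizes the cost over one variable while holding the other fixed.

        import numpy as np

        def cost_per_part(v, s, length=200.0):
            """v: cutting speed (m/min), s: feed (mm/rev); toy model."""
            t_cut = length / (v * s * 1000.0 / 60.0)   # min of cutting
            tool_life = 5.0e6 / (v**3.2 * s**1.1)      # assumed Taylor law
            changes = t_cut / tool_life                # tool changes per part
            return 0.8 * t_cut + 25.0 * changes        # machine + tooling $

        bounds = {"v": (10.0, 60.0), "s": (0.05, 0.5)}
        x = {"v": 30.0, "s": 0.2}

        for sweep in range(20):                        # coordinate descent
            for key in ("v", "s"):
                grid = np.linspace(*bounds[key], 200)
                costs = [cost_per_part(**{**x, key: g}) for g in grid]
                x[key] = grid[int(np.argmin(costs))]

        print({k: round(val, 3) for k, val in x.items()},
              "cost per part:", round(cost_per_part(**x), 3))

    On this surface the feed runs to its upper bound while the speed settles at an interior value, the classic pattern when tool wear penalizes speed more strongly than feed.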

  9. Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.

    PubMed

    Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha

    2012-11-30

    Application of photosynthetic micro-organisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors like the cell density can be kept constant by turbidostatic control, light intensity and iron content, with their strong impact on productivity, can be optimized. Both are key parameters due to their strong dependence on photosynthetic activity. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and continuous measurement of O₂ evolution, stress conditions and their effect on photosynthetic productivity can be directly observed. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. When Optimal Feedback Control Is Not Enough: Feedforward Strategies Are Required for Optimal Control with Active Sensing.

    PubMed

    Yeo, Sang-Hoon; Franklin, David W; Wolpert, Daniel M

    2016-12-01

    Movement planning is thought to be primarily determined by motor costs such as inaccuracy and effort. Solving for the optimal plan that minimizes these costs typically leads to specifying a time-varying feedback controller which both generates the movement and can optimally correct for errors that arise within a movement. However, the quality of the sensory feedback during a movement can depend substantially on the generated movement. We show that by incorporating such state-dependent sensory feedback, the optimal solution incorporates active sensing and is no longer a pure feedback process but includes a significant feedforward component. To examine whether people take into account such state-dependency in sensory feedback we asked people to make movements in which we controlled the reliability of sensory feedback. We made the visibility of the hand state-dependent, such that the visibility was proportional to the component of hand velocity in a particular direction. Subjects gradually adapted to such a sensory perturbation by making curved hand movements. In particular, they appeared to control the late visibility of the movement matching predictions of the optimal controller with state-dependent sensory noise. Our results show that trajectory planning is not only sensitive to motor costs but takes sensory costs into account and argues for optimal control of movement in which feedforward commands can play a significant role.

  11. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  12. Information distribution in distributed microprocessor based flight control systems

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1977-01-01

    This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
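
    For reference, the fixed-interval special case of this machinery is the standard discrete-time LQR, whose stationary gain follows from iterating the Riccati recursion; the sketch below uses toy dynamics (invented matrices, not the F8-DFBW model) and omits the paper's variable-increment extension.

        import numpy as np

        A = np.array([[1.0, 0.1],
                      [0.0, 1.0]])     # toy discrete dynamics x' = Ax + Bu
        B = np.array([[0.0],
                      [0.1]])
        Q = np.diag([1.0, 0.1])        # state weight in the quadratic cost
        R = np.array([[0.5]])          # control weight

        P = Q.copy()
        for _ in range(500):           # backward Riccati iteration
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)

        print("steady-state feedback gain K:", K.round(3))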

  13. Assessing the economics of processing end-of-life vehicles through manual dismantling.

    PubMed

    Tian, Jin; Chen, Ming

    2016-10-01

    Most dismantling enterprises in a number of developing countries, such as China, usually adopt the "manual+mechanical" dismantling approach to process end-of-life vehicles. However, the automobile industry does not have a clear indicator to reasonably and effectively determine the manual dismantling degree for end-of-life vehicles. In this study, five different dismantling scenarios and an economic system for end-of-life vehicles were developed based on the actual situation of end-of-life vehicles. The fuzzy analytic hierarchy process was applied to set the weights of direct costs, indirect costs, and sales and to obtain an optimal manual dismantling scenario. Results showed that although the traditional method of "dismantling to the end" can guarantee the highest recycling rate, this method is not the best among all the scenarios. The profit gained in the optimal scenario is 100.6% higher than that in the traditional scenario. The optimal manual dismantling scenario showed that enterprises are required to select suitable parts to process through manual dismantling. Selecting suitable parts maximizes economic profit and improves dismantling speed. Copyright © 2016 Elsevier Ltd. All rights reserved.
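
    The weighting step can be sketched with the crisp analytic hierarchy process (the fuzzy extension used in the paper adds interval-valued judgments on top of this). The pairwise-comparison entries below are invented judgments over the three criteria named above:

        import numpy as np

        #                direct  indirect  sales
        M = np.array([[1.0,    3.0,     0.5],
                      [1/3.0,  1.0,     0.25],
                      [2.0,    4.0,     1.0]])

        vals, vecs = np.linalg.eig(M)
        k = np.argmax(np.real(vals))
        w = np.real(vecs[:, k])
        w = w / w.sum()                        # normalized criterion weights

        ci = (np.real(vals[k]) - 3) / (3 - 1)  # Saaty's consistency index
        print("weights (direct, indirect, sales):", w.round(3),
              " CI:", round(ci, 3))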

  14. Technology-design-manufacturing co-optimization for advanced mobile SoCs

    NASA Astrophysics Data System (ADS)

    Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey

    2014-03-01

    How to maintain Moore's Law scaling beyond the 193 nm immersion resolution limit is the key question the semiconductor industry needs to answer in the near future. Process complexity will undoubtedly increase for the 14 nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technology-manufacturing co-optimization flow is desired to better address the complicated issues new process changes bring. In recent years smart mobile wireless devices have been the fastest growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best user-experience value by harnessing all the technology innovations. The most critical system drivers are better system performance/power efficiency, cost effectiveness, and smaller form factors, which, in turn, drive the need for system design and solutions with More-than-Moore innovations. Mobile systems-on-chip (SoCs) have become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology and package, in the face of growing process cost/complexity and variability as well as design rule restrictions.

  15. Determination of Optimal Subsidy for Materials Saving Investment through Recycle/Recovery at Industrial Level

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2009-08-01

    This work deals with a methodological framework in the form of a simple, short algorithmic procedure (including 11 activity steps and 3 decision nodes) designed and developed for the determination of the optimal subsidy for a materials saving investment through recycle/recovery (RR) at the industrial level. Two case examples are presented, covering both cases, without and with recycling. The expected Relative Cost Decrease (RCD) because of recycling, which forms a critical index for decision making on subsidizing, is estimated. The developed procedure can be extended outside the industrial unit to include collection/transportation/processing of recyclable wasted products. Since, in such a case, transportation cost and processing cost are conflicting dependent variables (with the quantity collected/processed, Q, as the independent/explanatory variable), the determination of Qopt is examined under energy-crisis conditions, when corresponding subsidies might be granted to re-set the original equilibrium and avoid putting the recycling enterprise in jeopardy through a dangerous lowering of the first break-even point.
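
    The conflict between the two cost components is easy to picture with assumed functional forms (an illustration only, not the paper's model): unit processing cost falls with Q as fixed costs are spread, while unit transport cost rises with Q as the collection area widens, so the total has an interior minimum that an energy-price shock shifts downward.

        import numpy as np
        from scipy.optimize import minimize_scalar

        B_PROCESS = 5000.0   # fixed processing cost spread over Q units

        def unit_cost(q, a_transport=0.8):
            # transport grows like sqrt(Q) with the collection radius
            return a_transport * np.sqrt(q) + B_PROCESS / q

        res = minimize_scalar(unit_cost, bounds=(10.0, 1.0e5), method="bounded")
        print(f"Q_opt ~ {res.x:,.0f} units at {unit_cost(res.x):.2f} per unit")

        # A 50% transport-cost rise (an energy-crisis scenario) lowers Q_opt,
        # which is where a corrective subsidy could re-set the equilibrium.
        shocked = minimize_scalar(lambda q: unit_cost(q, a_transport=1.2),
                                  bounds=(10.0, 1.0e5), method="bounded")
        print(f"after the shock: Q_opt ~ {shocked.x:,.0f} units")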

  16. Time-based management of patient processes.

    PubMed

    Kujala, Jaakko; Lillrank, Paul; Kronström, Virpi; Peltokorpi, Antti

    2006-01-01

    The purpose of this paper is to present a conceptual framework that would enable the effective application of time-based competition (TBC) and work-in-process (WIP) concepts in the design and management of effective and efficient patient processes. This paper discusses the applicability of time-based competition and work-in-process concepts to the design and management of healthcare service production processes. A conceptual framework is derived from the analysis of both existing research and empirical case studies. The paper finds that a patient episode is analogous to a customer order-to-delivery chain in industry. The effective application of TBC and WIP can be achieved by focusing on the throughput time of a patient episode, by reducing the non-value-adding time components and by minimizing the time categories that are the main cost drivers for all stakeholders involved in the patient episode. The paper shows that an application of TBC in managing patient processes can be limited if there is no consensus in the medical community about the optimal care episode. In the paper it is shown that managing patient processes based on time and cost analysis enables one to allocate the optimal amount of resources, which would allow a healthcare system to minimize the total cost of specific episodes of illness. Analysing the total cost of patient episodes can provide useful information in the allocation of limited resources among multiple patient processes. This paper introduces a framework for health care managers and researchers to analyze the effect of reducing throughput time on the total cost of patient episodes.

  17. Treatment of an actual slaughterhouse wastewater by integration of biological and advanced oxidation processes: Modeling, optimization, and cost-effectiveness analysis.

    PubMed

    Bustillo-Lecompte, Ciro Fernando; Mehrvar, Mehrab

    2016-11-01

    Biological and advanced oxidation processes are combined to treat an actual slaughterhouse wastewater (SWW) by a sequence of an anaerobic baffled reactor, an aerobic activated sludge reactor, and a UV/H2O2 photoreactor with recycle in continuous mode at laboratory scale. In the first part of this study, quadratic modeling along with response surface methodology is used for the statistical analysis and optimization of the combined process. The effects of the influent total organic carbon (TOC) concentration, the flow rate, the pH, the inlet H2O2 concentration, and their interaction on the overall treatment efficiency, CH4 yield, and H2O2 residual in the effluent of the photoreactor are investigated. The models are validated at different operating conditions using experimental data. Maximum TOC and total nitrogen (TN) removals of 91.29 and 86.05%, respectively, maximum CH4 yield of 55.72%, and minimum H2O2 residual of 1.45% in the photoreactor effluent were found at optimal operating conditions. In the second part of this study, continuous distribution kinetics is applied to establish a mathematical model for the degradation of SWW as a function of time. The agreement between model predictions and experimental values indicates that the proposed model could describe the performance of the combined anaerobic-aerobic-UV/H2O2 processes for the treatment of SWW. In the final part of the study, the optimized combined anaerobic-aerobic-UV/H2O2 processes with recycle were evaluated using a cost-effectiveness analysis to minimize the retention time, the electrical energy consumption, and the overall incurred treatment costs required for the efficient treatment of slaughterhouse wastewater effluents. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Production of fatty acid butyl esters using the low cost naturally immobilized Carica papaya lipase.

    PubMed

    Su, Erzheng; Wei, Dongzhi

    2014-07-09

    In this work, the low cost naturally immobilized Carica papaya lipase (CPL) was investigated for production of fatty acid butyl esters (FABE) to fulfill the aim of reducing the lipase cost in the enzymatic butyl-biodiesel process. The CPL showed specificities to different alcohol acyl acceptors. Alcohols with more than three carbon atoms did not have negative effects on the CPL activity. The CPL catalyzed butanolysis for FABE production was systematically investigated. The reaction solvent, alcohol/oil molar ratio, enzyme amount, reaction temperature, and water activity all affected the butanolysis process. Under the optimized conditions, the highest conversion of 96% could be attained in 24 h. These optimal conditions were further applied to CPL catalyzed butanolysis of other vegetable oils. All of them showed very high conversion. The CPL packed-bed reactor was further developed, and could be operated continuously for more than 150 h. All of these results showed that the low cost Carica papaya lipase can be used as a promising lipase for biodiesel production.

  19. Instrumentation for optimizing an underground coal-gasification process

    NASA Astrophysics Data System (ADS)

    Seabaugh, W.; Zielinski, R. E.

    1982-06-01

    While the United States has a coal resource base of 6.4 trillion tons, only seven percent is presently recoverable by mining. The process of in-situ gasification can recover another twenty-eight percent of this vast resource; however, viable technology must be developed for effective in-situ recovery. The key to this technology is a system that can optimize and control the process in real time. An instrumentation system is described that optimizes the composition of the injection gas, controls the in-situ process and conditions the product gas for maximum utilization. The key elements of this system are Monsanto PRISM Systems, a real-time analytical system, and a real-time data acquisition and control system. This system provides for complete automation of the process but can easily be overridden by manual control. The use of this cost-effective system can provide process optimization and is an effective element in developing a viable in-situ technology.

  20. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    NASA Astrophysics Data System (ADS)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization method named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently implemented for solving the economic dispatch problem.
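
    As a baseline against which a swarm method such as MVMOS can be checked, the classic equal-incremental-cost ("lambda iteration") solution for quadratic costs C_i(P) = a_i + b_i P + c_i P^2 is sketched below with invented generator data:

        import numpy as np

        a = np.array([500.0, 400.0, 200.0])      # $/h fixed terms
        b = np.array([5.3, 5.5, 5.8])            # $/MWh linear terms
        c = np.array([0.004, 0.006, 0.009])      # $/MWh^2 quadratic terms
        Pmin = np.array([100.0, 80.0, 50.0])
        Pmax = np.array([450.0, 350.0, 225.0])
        DEMAND = 800.0                           # MW to be served

        def dispatch(lam):
            p = (lam - b) / (2.0 * c)            # dC/dP = b + 2cP = lambda
            return np.clip(p, Pmin, Pmax)

        lo, hi = 4.0, 20.0                       # bisect the system lambda
        for _ in range(60):
            lam = 0.5 * (lo + hi)
            if dispatch(lam).sum() > DEMAND:
                hi = lam
            else:
                lo = lam

        P = dispatch(lam)
        print("dispatch (MW):", P.round(1),
              " total cost ($/h):", round(float(np.sum(a + b*P + c*P**2)), 1))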

  1. Supply chain optimization for pediatric perioperative departments.

    PubMed

    Davis, Janice L; Doyle, Robert

    2011-09-01

    Economic challenges compel pediatric perioperative departments to reduce nonlabor supply costs while maintaining the quality of patient care. Optimization of the supply chain introduces a framework for decision making that drives fiscally responsible decisions. The cost-effective supply chain is driven by implementing a value analysis process for product selection, being mindful of product sourcing decisions to reduce supply expense, creating logistical efficiency that will eliminate redundant processes, and managing inventory to ensure product availability. The value analysis approach is an analytical methodology for product selection that involves product evaluation and recommendation based on consideration of clinical benefit, overall financial impact, and revenue implications. Copyright © 2011 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  2. Seeking the competitive advantage: it's more than cost reduction.

    PubMed

    South, S F

    1999-01-01

    Most organizations focus considerable time and energy on reducing operating costs as a way to attain marketplace advantage. This strategy was not inappropriate in the past. To be competitive in the future, however, focus must be placed on other issues, not just cost reduction. The near future will be dominated by service industries, knowledge management, and virtual partnerships, with production optimization and flexibility, innovation, and strong partnerships defining those organizations that attain competitive advantage. Competitive advantage will reside in clarifying the vision and strategic plan, reviewing and redesigning work processes to optimize resources and value-added work, and creating change-ready environments and empowered workforces.

  3. Process model and economic analysis of ethanol production from sugar beet raw juice as part of the cleaner production concept.

    PubMed

    Vučurović, Damjan G; Dodić, Siniša N; Popov, Stevan D; Dodić, Jelena M; Grahovac, Jovana A

    2012-01-01

    The batch fermentation of sugar beet processing intermediates by free yeast cells is the most widely used method in the Autonomous Province of Vojvodina for producing ethanol as fuel. In this study, a process and cost model was developed for producing ethanol from raw juice. The model can be used to calculate capital investment costs, unit production costs, and operating costs for a plant producing 44 million liters of 99.6% pure ethanol annually. In the sensitivity analysis, the influence of sugar beet and yeast prices, as well as of the fraction of recycled biomass, on process economics, ethanol production costs, and project feasibility was examined. The results of this study clearly demonstrate that raw material costs have a significant influence on the expense of producing ethanol. The optimal percentage of recycled biomass turned out to be in the range of 50% to 70%. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. An updated comprehensive techno-economic analysis of algae biodiesel.

    PubMed

    Nagarajan, Sanjay; Chou, Siaw Kiang; Cao, Shenyan; Wu, Chen; Zhou, Zhi

    2013-10-01

    Algae biodiesel is a promising but expensive alternative fuel to petro-diesel. To overcome cost barriers, detailed cost analyses are needed. A decade-old cost analysis by the U.S. National Renewable Energy Laboratory indicated that the costs of algae biodiesel were in the range of $0.53-0.85/L (2012 USD values); however, the costs of land and transesterification were only roughly estimated. In this study, an updated comprehensive techno-economic analysis was conducted with optimized processes and improved cost estimations. The latest process improvements, quotes from vendors, government databases, and other relevant data sources were used to calculate updated algal biodiesel costs; the final costs of biodiesel are in the range of $0.42-0.97/L. Additional improvements for cost-effective algae cultivation around the globe were also recommended. Overall, the calculated costs seem promising, suggesting that a single-step biodiesel production process is close to commercial reality. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Sail Plan Configuration Optimization for a Modern Clipper Ship

    NASA Astrophysics Data System (ADS)

    Gerritsen, Margot; Doyle, Tyler; Iaccarino, Gianluca; Moin, Parviz

    2002-11-01

    We investigate the use of gradient-based and evolutionary algorithms for sail shape optimization. We present preliminary results for the optimization of sheeting angles for the rig of the future three-masted clipper yacht Maltese Falcon. This yacht will be equipped with square-rigged masts made up of yards of circular-arc cross section. This design is especially attractive for megayachts because it provides a large sail area while maintaining aerodynamic and structural efficiency. The rig remains almost rigid over a large range of wind conditions, and therefore a simple geometrical model can be constructed without accounting for the true flying shape. The sheeting-angle optimization studies are performed using both gradient-based cost function minimization and evolutionary algorithms. The fluid flow is modeled by the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. Unstructured non-conforming grids are used to increase robustness and computational efficiency. The optimization process is automated by integrating the system components (geometry construction, grid generation, flow solver, force calculator, optimization). We compare the optimization results to those obtained previously in user-controlled parametric studies using simple cost functions and user intuition. We also investigate the effectiveness of various cost functions in the optimization (driving-force maximization, and maximization of the ratio of driving force to heeling force).

  6. Optimizing model: insemination, replacement, seasonal production, and cash flow.

    PubMed

    DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A

    1992-03-01

    Dynamic programming was adapted to solve the Markov decision process problem of optimal insemination and replacement decisions for large dairy herd management in the US. Expected net present values of 151,200 cow states were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). The methodology optimized decisions based on the net present value of an individual cow and all her replacements over a 20-yr decision horizon; this horizon length ensured that the optimal policies were effectively those for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine the post-optimization steady-state herd structure, milk production, replacement, feed inputs and costs, and the resulting cash flow on a calendar-month and annual basis if the optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and the discount rate. Stochastic elements included conception (and thus subsequent freshening), cow milk production level within the herd, and survival. Optimized solutions were validated by a separate simulation model, which implemented the policies on a simulated herd and also described herd dynamics during the transition to the optimized structure.
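
    The computational core of such a model is backward induction over cow states. The toy sketch below uses a handful of invented states, rewards, and transitions in place of the paper's 151,200-state specification, purely to show the shape of the recursion.

```python
import numpy as np

# Toy sketch of the backward-induction (dynamic programming) recursion
# behind optimal cow replacement. States, rewards, transitions, and the
# replacement rule are invented for illustration; the paper's model uses
# 151,200 states and a 20-yr horizon of monthly decisions.
rng = np.random.default_rng(0)
n_states, horizon = 10, 240                       # 20 yr of monthly decisions
beta = 0.99 ** (1 / 12)                           # monthly discount factor
keep_reward = rng.uniform(50.0, 150.0, n_states)  # net revenue if cow is kept
replace_cost = 80.0                               # heifer price minus cull value
P = rng.dirichlet(np.ones(n_states), n_states)    # transitions under "keep"

V = np.zeros(n_states)
for _ in range(horizon):
    keep = keep_reward + beta * (P @ V)           # expected value of keeping
    replace = -replace_cost + keep.mean()         # swap in an "average" heifer
    V = np.maximum(keep, replace)
policy_keep = keep >= replace                     # True where keeping is optimal
print(policy_keep)
```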

  7. Manipulation and handling processes off-line programming and optimization with use of K-Roset

    NASA Astrophysics Data System (ADS)

    Gołda, G.; Kampa, A.

    2017-08-01

    Contemporary trends in the development of efficient, flexible manufacturing systems require practical implementation of modern “Lean production” concepts, maximizing customer value by minimizing waste in manufacturing and logistics processes. Every FMS is built from automated and robotized production cells. Besides flexible CNC machine tools and other equipment, industrial robots are primary elements of the system. In this study, the authors look for wastes of time and cost in real robot tasks during manipulation processes. For optimizing handling and manipulation processes performed by robots, the application of modern off-line programming methods and computer simulation is the best solution and the only practical way to minimize unnecessary movements and other instructions. The modeling of a robotized production cell and the off-line programming of Kawasaki robots in AS Language are described. The simulation of the robotized workstation is realized with the virtual reality software K-Roset. The authors show the process of improving and optimizing industrial robot programs by minimizing the number of useless manipulator movements and unnecessary instructions, in order to shorten production cycle times. This also reduces the costs of handling, manipulation, and the technological process.

  8. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    NASA Astrophysics Data System (ADS)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met at the lowest possible BMP implementation cost. A genetic algorithm (GA) and NIMS were both applied and compared. NIMS took 11 iterations (about 11 min) to complete, with a resulting optimal solution costing $67.2 million, while each of the multiple GA executions took 21-38 days to reach a near-optimal solution. The best solution obtained among all the GA executions had a minimized cost of $67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  9. Optimal environmental management strategy and implementation for groundwater contamination prevention and restoration.

    PubMed

    Wang, Mingyu

    2006-04-01

    An innovative management strategy is proposed for optimized and integrated environmental management of regional or national groundwater contamination prevention and restoration, allied with consideration of sustainable development. This management strategy accounts for the availability of limited resources, human health and ecological risks from groundwater contamination, costs of groundwater protection measures, beneficial uses and values from groundwater protection, and sustainable development. Six categories of costs are identified with regard to groundwater prevention and restoration. In addition, different environmental impacts from groundwater contamination, including human health and ecological risks, are individually taken into account. System optimization principles are applied to make decisions on the optimal allocation of available resources or budgets among existing contaminated sites and projected contamination sites for maximal risk reduction. Established management constraints, such as budget limitations under the different cost categories, are satisfied at the optimal solution. A stepwise optimization process is proposed: the first step optimally selects, from all existing contaminated and projected contamination sites, a limited number of sites where remediation or prevention measures will be taken, based on the total regionally or nationally available budget over a certain time frame, such as 10 years. Subsequent optimization steps then determine year-by-year optimal distributions of the available yearly budgets among the selected sites. A hypothetical case study is presented to demonstrate a practical implementation of the management strategy. Several issues pertaining to groundwater contamination exposure and risk assessments and remediation cost evaluations are briefly discussed to aid understanding of how the management strategy would be implemented.

  10. Automated system of devising and choosing economically effective technological processes of heat treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinin, V.P.; Tkacheva, O.N.

    1986-03-01

    Heat treatment entails considerable expenditure of power and often requires expensive equipment. One of the fundamental problems arising in the elaboration of heat treatment technology is the selection of the economically optimal process, which also has to ensure the quality of finished parts required by the customer. To determine correctly the expenditures on the basic kinds of resources, it is necessary to improve the methods of calculating prime costs and to carry out such a calculation at the earliest stages of the technological preparation of production. A new method for the optimizing synthesis of the structure of technological processes of heat treatment, using the achievements of cybernetics and the possibilities of computerization, is examined in this article. The method makes it possible to analyze in detail the economy of all possible variants of a technological process when one parameter is changed, without recalculating all items of prime cost.

  11. Techno-economic analysis of the industrial production of a low-cost enzyme using E. coli: the case of recombinant β-glucosidase.

    PubMed

    Ferreira, Rafael da Gama; Azzoni, Adriano Rodrigues; Freitas, Sindelia

    2018-01-01

    The enzymatic conversion of lignocellulosic biomass into fermentable sugars is a promising approach for producing renewable fuels and chemicals. However, the cost and efficiency of the fungal enzyme cocktails normally employed in these processes remain a significant bottleneck. A potential route to increase hydrolysis yields, and thereby reduce hydrolysis costs, is to supplement the fungal enzymes with the enzymatic activities they lack, such as β-glucosidase. In this context, it is not clear from the literature whether recombinant E. coli could be a cost-effective platform for the production of some of these low-value enzymes, especially in the case of on-site production. Here, we present a conceptual design and techno-economic evaluation of the production of a low-cost industrial enzyme using recombinant E. coli. In a simulated baseline scenario for β-glucosidase demand in a hypothetical second-generation (2G) ethanol plant in Brazil, we found that the production cost (316 US$/kg) was higher than what is commonly assumed in the literature for fungal enzymes, owing especially to facility-dependent costs (45%), consumables (23%), and raw materials (25%). Sensitivity analyses of process scale, inoculation volume, and volumetric productivity indicated that optimized conditions may promote a dramatic reduction in enzyme cost, and also revealed the most relevant factors affecting production costs. Despite the considerable technical and economic uncertainties that surround 2G ethanol and the large-scale production of low-cost recombinant enzymes, this work sheds light on some relevant questions and supports future studies in this field. In particular, we conclude that process optimization, on many fronts, may strongly reduce the costs of E. coli recombinant enzymes in the context of tailor-made enzymatic cocktails for 2G ethanol production.

  12. Simulation of product distribution at PT Anugrah Citra Boga by using capacitated vehicle routing problem method

    NASA Astrophysics Data System (ADS)

    Lamdjaya, T.; Jobiliong, E.

    2017-01-01

    PT Anugrah Citra Boga is a food processing company whose main product is meatballs. The distribution of its products needs to be more efficient in order to reduce shipping costs. The purpose of this research is to optimize distribution time by simulating the distribution channels with the capacitated vehicle routing problem (CVRP) method. First, the distribution routes are observed in order to calculate average speeds, time capacities, and shipping costs. The model is then built using AIMMS software; the required inputs are customer locations, distances, and process times. Finally, the total distribution cost obtained from the simulation is compared with historical data. The results show that the company can reduce shipping costs by about 4.1%, or Rp 529,800 per month, and that vehicle utilization becomes more balanced: the utilization rate of the first vehicle falls from 104.6% to 88.6%, while that of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce optimal shipping routes under time restrictions, vehicle capacities, and fleet size constraints.
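
    A CVRP instance attaches a demand to each customer and caps every vehicle trip at a fixed capacity. The sketch below builds feasible routes with a simple nearest-neighbor heuristic; coordinates, demands, and the capacity are invented for illustration (the study itself solves its model in AIMMS).

```python
import math

# Hedged sketch: nearest-neighbor construction heuristic for the
# capacitated vehicle routing problem (CVRP). All data are illustrative.
depot = (0.0, 0.0)
customers = {1: (2, 4), 2: (5, 1), 3: (6, 6), 4: (1, 7), 5: (8, 3)}
demand = {1: 4, 2: 6, 3: 5, 4: 3, 5: 7}
capacity = 12                      # per vehicle trip

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

unserved, routes = set(customers), []
while unserved:
    load, pos, route = 0, depot, []
    while True:
        feasible = [c for c in unserved if load + demand[c] <= capacity]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: dist(pos, customers[c]))  # greedy step
        route.append(nxt)
        load += demand[nxt]
        pos = customers[nxt]
        unserved.remove(nxt)
    routes.append(route)           # one route per vehicle trip
print(routes)
```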

  13. IEEE 802.21 Assisted Seamless and Energy Efficient Handovers in Mixed Networks

    NASA Astrophysics Data System (ADS)

    Liu, Huaiyu; Maciocco, Christian; Kesavan, Vijay; Low, Andy L. Y.

    Network selection is the decision process by which a mobile terminal hands off between homogeneous or heterogeneous networks. With multiple available networks, the selection process must evaluate factors such as network services and conditions, monetary cost, system conditions, and user preferences. In this paper, we investigate network selection using a cost function and information provided by IEEE 802.21. The cost function provides flexibility to balance different factors in decision making, and our research focuses on improving both the seamlessness and the energy efficiency of handovers. Our solution is evaluated using real WiFi, WiMax, and 3G signal strength traces. The results show that appropriate networks were selected based on the selection policies and that handovers were triggered at optimal times to increase overall network connectivity compared with traditional triggering schemes, while at the same time the energy consumption of multi-radio devices, both during ongoing operations and during handovers, was optimized.
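
    Operationally, the cost-function approach scores each candidate network as a weighted sum of normalized factors and selects the minimum. The factor set, normalization, and weights below are assumptions for illustration, not the paper's calibrated policy.

```python
# Hedged sketch of a weighted handover cost function. Each factor is
# normalized to [0, 1]; a lower total cost means a better network.
def network_cost(bandwidth_mbps, monetary, battery_drain, rssi_quality,
                 w_perf=0.4, w_cost=0.2, w_energy=0.3, w_signal=0.1):
    perf = 1.0 - min(bandwidth_mbps / 100.0, 1.0)   # penalize low throughput
    return (w_perf * perf + w_cost * monetary +
            w_energy * battery_drain + w_signal * (1.0 - rssi_quality))

candidates = {
    "wifi":  network_cost(54.0, monetary=0.0, battery_drain=0.3, rssi_quality=0.8),
    "wimax": network_cost(30.0, monetary=0.5, battery_drain=0.5, rssi_quality=0.6),
    "3g":    network_cost(7.0,  monetary=0.4, battery_drain=0.4, rssi_quality=0.9),
}
best = min(candidates, key=candidates.get)          # lowest-cost network wins
print(best, candidates)
```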

  14. High-density plasma deposition manufacturing productivity improvement

    NASA Astrophysics Data System (ADS)

    Olmer, Leonard J.; Hudson, Chris P.

    1999-09-01

    High Density Plasma (HDP) deposition provides a means to deposit high-quality dielectrics meeting submicron gap-fill requirements. However, compared to traditional PECVD processing, HDP is relatively expensive due to the higher capital cost of the equipment. To keep processing costs low, it became necessary to maximize the wafer throughput of HDP processing without degrading the film properties. The approach taken was to optimize the efficiency of the post-deposition in-situ microwave clean. A regression model based on actual data indicated that the number of wafers processed before a chamber clean was the dominant factor. Furthermore, a design change in the ceramic hardware surrounding the electrostatic chuck provided thermal isolation, resulting in an enhanced clean rate of the chamber process kit. An infrared detector located in the chamber exhaust line provided a means to endpoint the clean, and in-film particle data confirmed the infrared results. The combination of increased chamber clean frequency, optimized clean time, and improved process hardware raised wafer throughput without degrading the film properties.

  15. Optimizing Value and Avoiding Problems in Building Schools.

    ERIC Educational Resources Information Center

    Brevard County School Board, Cocoa, FL.

    This report describes school design and construction delivery processes used by the School Board of Brevard County (Cocoa, Florida) that help optimize value, avoid problems, and eliminate the cost of maintaining a large facility staff. The project phases are examined from project definition through design to construction. Project delivery…

  16. GaAlAs/GaAs Solar Cell Process Study

    NASA Technical Reports Server (NTRS)

    Almgren, D. W.; Csigi, K. I.

    1980-01-01

    Available information on liquid phase epitaxy, vapor phase epitaxy (including chemical vapor deposition), and molecular beam epitaxy growth procedures that could be used to fabricate single-crystal, heteroface (AlGa)As/GaAs solar cells for space applications is summarized. A comparison of the basic cost elements of the epitaxy growth processes shows that the current infinite-melt LPE process has the lowest cost per cell for an annual production rate of 10,000 cells. The metal-organic chemical vapor deposition (MO-CVD) process has the potential for low-cost production of solar cells, but there is currently significant uncertainty in process yield, i.e., the fraction of active material in the input gas stream that ends up in the cell. Additional work is needed to optimize and document the process parameters for the MO-CVD process.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junhua Jiang; Ted Aulich

    An electrolytic renewable nitrogen fertilizer process that utilizes wind-generated electricity, N2 extracted from air, and syngas produced via the gasification of biomass to produce ammonia was developed at the University of North Dakota Energy & Environmental Research Center (EERC). This novel process provides a way to directly utilize biosyngas generated via biomass gasification in place of the high-purity hydrogen required for Haber-Bosch production of the widely used nitrogen fertilizers. Our preliminary economic projection shows that the competitiveness of the electrochemical nitrogen fertilizer process depends strongly upon the cost of hydrogen gas and the cost of electricity. It is therefore expected that the cost of nitrogen fertilizer production could be considerably decreased owing to the direct use of cost-effective 'hydrogen-equivalent' biosyngas in place of high-purity hydrogen. The technical feasibility of the electrolytic process has been proven by studying ammonia production using humidified carbon monoxide as the hydrogen equivalent versus high-purity hydrogen. Process optimization efforts have focused on the development of catalysts for ammonia formation, electrolytic membrane systems, and membrane-electrode assemblies. The status of the electrochemical ammonia process is characterized by a current efficiency of 43% using humidified carbon monoxide as the feedstock to the anode chamber, and a current efficiency of 56% using high-purity hydrogen as the anode gas feedstock. Further optimization of the electrolytic process for higher current efficiency and decreased energy consumption is ongoing at the EERC.

  18. Noun-phrase anaphors and focus: the informational load hypothesis.

    PubMed

    Almor, A

    1999-10-01

    The processing of noun-phrase (NP) anaphors in discourse is argued to reflect constraints on the activation and processing of semantic information in working memory. The proposed theory views NP anaphor processing as an optimization process based on the principle that processing cost, defined in terms of activating semantic information, should serve some discourse function: identifying the antecedent, adding new information, or both. In a series of 5 self-paced reading experiments, anaphors' functionality was manipulated by changing the discourse focus, and their cost was manipulated by changing the semantic relation between the anaphors and their antecedents. The results show that reading times of NP anaphors reflect their functional justification: anaphors were read faster when their cost had a better functional justification. These results are incompatible with any theory that treats NP anaphors as one homogeneous class regardless of discourse function and processing cost.

  19. The cost of different types of lameness in dairy cows calculated by dynamic programming.

    PubMed

    Cha, E; Hertl, J A; Bar, D; Gröhn, Y T

    2010-10-01

    Traditionally, studies that placed a monetary value on the effects of lameness have calculated costs at the herd level and have rarely been specific to different types of lameness. Such herd-level estimates are not particularly useful to farmers making economically optimal decisions that depend on individual cow characteristics. The objective of this study was to calculate the cost of different types of lameness at the individual cow level and thereby identify the optimal management decision for each of three representative lameness diagnoses, providing a more informed decision-making process for managing lameness with maximal economic profitability. We modified an existing dynamic optimization and simulation model, studying the effects of various factors (incidence of lameness, milk loss, pregnancy rate, and treatment cost) on the cost of different types of lameness. The average costs per case (US$) of sole ulcer, digital dermatitis, and foot rot were 216.07, 132.96, and 120.70, respectively. It was recommended that 97.3% of foot rot cases, 95.5% of digital dermatitis cases, and 92.3% of sole ulcer cases be treated. The main contributor to the total cost per case was milk loss for sole ulcer (38%), treatment cost for digital dermatitis (42%), and decreased fertility for foot rot (50%). The model affords versatility, as parameters such as production costs, economic values, and disease frequencies can be altered; cost estimates are then the direct outcome of the farm-specific parameters entered into the model. Thus, the model can provide farmers with economically optimal guidelines specific to their individual cows suffering from different types of lameness. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Genetic algorithm approaches for conceptual design of spacecraft systems including multi-objective optimization and design under uncertainty

    NASA Astrophysics Data System (ADS)

    Hassan, Rania A.

    In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry: old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions, rather than optimal ones, are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of system design variables that must be traded off in an optimization process prohibitive for manual techniques. Such a discrete problem is well suited for solution with a Genetic Algorithm, a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems into a top-down rather than the current bottom-up approach. To facilitate decision making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide tools, computationally inexpensive compared to the state of the practice, for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, these tools are nearly on the same order of magnitude as a standard single-objective deterministic Genetic Algorithm. The multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all design objectives under consideration simultaneously, while incorporating uncertainties avoids large safety margins and unnecessarily high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
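
    A discrete design vector of this kind (one technology option per subsystem) maps naturally onto a GA chromosome. The sketch below shows the bare mechanics on an invented cost table; the encoding, operators, and parameters are illustrative assumptions rather than the dissertation's setup.

```python
import random

# Hedged sketch of a genetic algorithm over discrete design choices
# (one technology option per subsystem). All data are illustrative.
random.seed(1)
N_SUBSYSTEMS, N_OPTIONS, POP, GENS = 8, 4, 40, 60
cost = [[random.uniform(1, 10) for _ in range(N_OPTIONS)]
        for _ in range(N_SUBSYSTEMS)]             # cost of each option

def fitness(design):                              # lower total cost is better
    return -sum(cost[i][g] for i, g in enumerate(design))

pop = [[random.randrange(N_OPTIONS) for _ in range(N_SUBSYSTEMS)]
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                      # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_SUBSYSTEMS)   # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                 # mutation
            child[random.randrange(N_SUBSYSTEMS)] = random.randrange(N_OPTIONS)
        children.append(child)
    pop = parents + children
print(max(pop, key=fitness))                      # best design found
```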

  1. Using activity-based costing to track resource use in group practices.

    PubMed

    Zeller, T L; Siegel, G; Kaciuba, G; Lau, A H

    1999-09-01

    Research shows that understanding how resources are consumed can help group practices control costs. An American Academy of Orthopaedic Surgeons study used an activity-based costing (ABC) system to measure how resources are consumed in providing medical services. Teams of accounting professors observed 18 diverse orthopedic surgery practices. The researchers identified 17 resource-consuming business processes performed by nonphysician office staff. They measured resource consumption by assigning costs to each process according to how much time is spent on related work activities. When group practices understand how their resources are being consumed, they can reduce costs and optimize revenues by making adjustments in how administrative and clinical staff work.
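
    Mechanically, this kind of activity-based costing is a proportional allocation of staff cost to processes by observed time. A toy sketch with invented process names and numbers:

```python
# Hedged sketch of time-driven activity-based costing: allocate staff cost
# to business processes in proportion to observed time. The data are
# illustrative, not from the AAOS study.
staff_cost_per_hour = 38.0
hours_by_process = {            # observed hours per week
    "scheduling": 52.0,
    "billing_and_coding": 88.0,
    "insurance_verification": 40.0,
    "medical_records": 30.0,
}
total_hours = sum(hours_by_process.values())
for process, hours in hours_by_process.items():
    cost = hours * staff_cost_per_hour
    share = hours / total_hours
    print(f"{process:24s} ${cost:8.2f}  ({share:5.1%} of staff time)")
```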

  2. Modeling and optimization by particle swarm embedded neural network for adsorption of zinc (II) by palm kernel shell based activated carbon from aqueous environment.

    PubMed

    Karri, Rama Rao; Sahu, J N

    2018-01-15

    Zn(II) is one of the common heavy metal pollutants found in industrial effluents. Its removal can be accomplished by various techniques, among which adsorption is an efficient method, although the application of adsorption is limited by the high cost of adsorbents. In this regard, a low-cost adsorbent produced from palm oil kernel shell, an agricultural waste, is examined for its efficiency in removing Zn(II) from wastewater and aqueous solution. The influence of the independent process variables, initial concentration, pH, residence time, activated carbon (AC) dosage, and process temperature, on the removal of Zn(II) by palm kernel shell based AC in a batch adsorption process is studied systematically. Based on the experimental design matrix, 50 experimental runs are performed with each process variable within its experimental range. The optimal values of the process variables for maximum removal efficiency are studied using response surface methodology (RSM) and artificial neural network (ANN) approaches. A quadratic model comprising first-order and second-order regression terms is developed using analysis of variance within the RSM central composite design (CCD) framework. Particle swarm optimization (PSO), a meta-heuristic, is embedded in the ANN architecture to optimize the network's search space. The optimized trained neural network fits the testing and validation data well, with R2 equal to 0.9106 and 0.9279, respectively. The outcomes indicate the superiority of the ANN-PSO model predictions over the quadratic model predictions provided by RSM. Copyright © 2017 Elsevier Ltd. All rights reserved.
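
    The PSO component is the standard velocity/position update steered by personal and global bests. The sketch below minimizes a stand-in objective with conventional default hyperparameters; the paper embeds the same mechanics in ANN training.

```python
import numpy as np

# Hedged sketch of particle swarm optimization minimizing a generic
# objective f (a stand-in for the ANN prediction error in the paper).
rng = np.random.default_rng(42)

def f(x):                                   # toy objective: sphere function
    return float(np.sum(x**2))

dim, n_particles, iters = 5, 30, 200
w, c1, c2 = 0.72, 1.49, 1.49                # inertia and acceleration terms
x = rng.uniform(-5, 5, (n_particles, dim))  # positions
v = np.zeros_like(x)                        # velocities
pbest = x.copy()                            # personal bests
pbest_val = np.array([f(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()    # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([f(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
print(gbest, pbest_val.min())
```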

  3. Scheduling structural health monitoring activities for optimizing life-cycle costs and reliability of wind turbines

    NASA Astrophysics Data System (ADS)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2017-04-01

    Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread growth of wind power generation across the world. Most existing studies have used structural reliability and Bayesian pre-posterior analysis for optimization. This paper proposes an extension of previous approaches: a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs that combines elements of structural reliability/risk analysis (SRA) and Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. These probabilities are used in the decision tree and are updated using Bayesian analysis. The output of the framework is the optimal structural health monitoring and maintenance schedule to be implemented during the life span of the OWT, maintaining a trade-off between life-cycle costs and the risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are demonstrated. Two case scenarios, building an initially expensive but robust structure versus a cheaper but more quickly deteriorating one, and adopting an expensive monitoring system, are presented to aid the decision-making process.

  4. Make or buy analysis model based on tolerance allocation to minimize manufacturing cost and fuzzy quality loss

    NASA Astrophysics Data System (ADS)

    Rosyidi, C. N.; Puspitoingrum, W.; Jauhari, W. A.; Suhardi, B.; Hamada, K.

    2016-02-01

    The specification of tolerances has a significant impact on product quality and final production cost, so a company should pay careful attention to component and product tolerances in order to produce a good-quality product at the lowest cost. Tolerance allocation has been widely used to solve the problem of selecting a particular process or supplier. Before getting into the selection process, however, the company must first analyse whether a component should be made in house (make), purchased from a supplier (buy), or sourced through a combination of both. This paper discusses an optimization model for process and supplier selection that minimizes manufacturing costs and fuzzy quality loss; the model can also be used to determine the allocation of components to the selected processes or suppliers. Tolerance, process capability, and production capacity are three important constraints that affect the decision. A fuzzy quality loss function is used to capture the semantics of quality, with the product quality level divided into several grades. The implementation of the proposed model is demonstrated by solving a numerical example involving a simple assembly product consisting of three components. A metaheuristic approach, implemented in the OptQuest software from Oracle Crystal Ball, is used to obtain the optimal solution of the numerical example.

  5. Approach to Low-Cost High-Efficiency OLED Lighting. Building Technologies Solid State Lighting (SSL) Program Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Qibing

    2017-10-06

    This project developed an integrated substrate that organic light emitting diode (OLED) panel developers could employ to fabricate OLED devices with performance and projected cost meeting the MYPP targets of the Solid State Lighting Program of the Department of Energy. The project optimized the composition and processing conditions of the integrated substrate for OLED light extraction efficiency and overall performance. The process was further developed for scale-up to a low-cost process and for fabrication of prototype samples. The encapsulation of flexible OLEDs based on this integrated substrate was also investigated, using commercial flexible barrier films.

  6. HYNOL PROCESS EVALUATION

    EPA Science Inventory

    The report examines process alternatives for the optimal use of natural gas and biomass for production of fuel-cell vehicle fuel, emphasizing maximum displacement of petroleum and maximum reduction of overall fuel-cycle carbon dioxide (CO2) emissions at least cost. Three routes a...

  7. Feasibility study of RFID technology for construction load tracking.

    DOT National Transportation Integrated Search

    2010-12-01

    ADOT&PF is seeking more efficient business practices and processes to increase its speed in delivering supplies to work sites, optimize the workforce, and minimize costs. The current tracking process uses a computer-generated ticket carried by the ...

  8. Capacity planning for batch and perfusion bioprocesses across multiple biopharmaceutical facilities.

    PubMed

    Siganporia, Cyrus C; Ghosh, Soumitra; Daszkowski, Thomas; Papageorgiou, Lazaros G; Farid, Suzanne S

    2014-01-01

    Production planning for biopharmaceutical portfolios becomes more complex when products switch between fed-batch and continuous perfusion culture processes. This article describes the development of a discrete-time mixed integer linear programming (MILP) model to optimize capacity plans for multiple biopharmaceutical products, with either batch or perfusion bioprocesses, across multiple facilities to meet quarterly demands. The model comprises specific features to account for products with fed-batch or perfusion culture processes, such as sequence-dependent changeover times, continuous culture constraints, and decoupled upstream and downstream operations that permit independent scheduling of each. Strategic inventory levels are accounted for by applying cost penalties when they are not met. A rolling time horizon methodology was utilized in conjunction with the MILP model and was shown to obtain solutions with greater optimality in less computational time than the full-scale model. The model was applied to an industrial case study to illustrate how the framework aids decisions regarding outsourcing capacity to third-party manufacturers or building new facilities. The impact of variations in key parameters, such as demand or titres, on the optimal production plans and costs was captured. The analysis identified the critical ratio of in-house to contract manufacturing organization (CMO) manufacturing costs at which the optimization results favor building a future facility over using a CMO. The tool predicted that if titres were higher than expected, the optimal solution would allocate more production to in-house facilities, where manufacturing costs are lower. Utilization graphs indicated when capacity expansion should be considered. © 2014 The Authors. Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers.
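
    The core allocation decision reduces to an MILP of the following shape. This miniature (invented data; assumes the PuLP package with its bundled CBC solver) keeps only demand and capacity constraints, whereas the paper's model adds changeover, perfusion, and inventory features.

```python
import pulp

# Hedged miniature of the capacity-planning MILP: two products, two
# facilities, four quarters. All data are illustrative.
products, facilities, quarters = ["A", "B"], ["F1", "CMO"], range(4)
demand = {("A", q): 6 for q in quarters} | {("B", q): 4 for q in quarters}
unit_cost = {"F1": 1.0, "CMO": 1.8}          # in-house vs contract cost
cap = {"F1": 8, "CMO": 10}                   # batches per quarter

m = pulp.LpProblem("capacity_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("batches", (products, facilities, quarters),
                          lowBound=0, cat="Integer")
m += pulp.lpSum(unit_cost[f] * x[p][f][q]
                for p in products for f in facilities for q in quarters)
for p in products:
    for q in quarters:                       # meet quarterly demand
        m += pulp.lpSum(x[p][f][q] for f in facilities) >= demand[(p, q)]
for f in facilities:
    for q in quarters:                       # respect facility capacity
        m += pulp.lpSum(x[p][f][q] for p in products) <= cap[f]
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```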

  9. Developing a conservation strategy to maximize persistence of an endangered freshwater mussel species while considering management effectiveness and cost

    USGS Publications Warehouse

    Smith, David R.; McRae, Sarah E.; Augspurger, Tom; Ratcliffe, Judith A.; Nichols, Robert B.; Eads, Chris B.; Savidge, Tim; Bogan, Arthur E.

    2015-01-01

    We used a structured decision-making process to develop conservation strategies to increase persistence of Dwarf Wedgemussel (Alasmidonta heterodon) in North Carolina, USA, while accounting for uncertainty in management effectiveness and considering costs. Alternative conservation strategies were portfolios of management actions that differed by location of management actions on the landscape. Objectives of the conservation strategy were to maximize species persistence, maintain genetic diversity, maximize public support, and minimize management costs. We compared 4 conservation strategies: 1) the ‘status quo’ strategy represented current management, 2) the ‘protect the best’ strategy focused on protecting the best populations in the Tar River basin, 3) the ‘expand the distribution’ strategy focused on management of extant populations and establishment of new populations in the Neuse River basin, and 4) the ‘hybrid’ strategy combined elements of each strategy to balance conservation in the Tar and Neuse River basins. A population model informed requirements for population management, and experts projected performance of alternative strategies over a 20-y period. The optimal strategy depended on the relative value placed on competing objectives, which can vary among stakeholders. The protect the best and hybrid strategies were optimal across a wide range of relative values with 2 exceptions: 1) if minimizing management cost was of overriding concern, then status quo was optimal, or 2) if maximizing population persistence in the Neuse River basin was emphasized, then expand the distribution strategy was optimal. The optimal strategy was robust to uncertainty in management effectiveness. Overall, the structured decision process can help identify the most promising strategies for endangered species conservation that maximize conservation benefit given the constraint of limited funding.

  10. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image data processing and color picture generation application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGAs) and Digital Signal Processors (DSPs). It has been shown that this approach can provide superior, inexpensive performance for a chosen application on the ground station or on board a spacecraft.

  11. EPQ model with learning consideration, imperfect production and partial backlogging in fuzzy random environment

    NASA Astrophysics Data System (ADS)

    Shankar Kumar, Ravi; Goswami, A.

    2015-06-01

    The article scrutinises the learning effect of unit production time on the optimal lot size for an uncertain and imprecise imperfect production process in which shortages are permissible and partially backlogged. We consider the fuzzy chance of the production process shifting from an 'in-control' state to an 'out-of-control' state, with a rework facility for produced items of imperfect quality. The elapsed time until the process shifts is treated as a fuzzy random variable, and consequently a fuzzy random total cost per unit time is derived. Fuzzy expectation and the signed distance method are used to transform the fuzzy random cost function into an equivalent crisp function. The results are illustrated with a numerical example. Finally, a sensitivity analysis of the optimal solution with respect to the major parameters is carried out.
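
    For orientation, the crisp economic production quantity that such fuzzy models generalize, in standard notation assumed here (demand rate D, production rate P > D, setup cost K, unit holding cost h), is

```latex
Q^{*} \;=\; \sqrt{\frac{2KD}{h\left(1 - D/P\right)}}
```

    The paper's model replaces the crisp cost components with fuzzy random quantities and defuzzifies the total cost per unit time (via fuzzy expectation and the signed distance) before minimizing over the lot size.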

  12. Process simulation and cost analysis for removing inorganics from wood chips using combined mechanical and chemical preprocessing

    DOE PAGES

    Hu, Hongqiang; Westover, Tyler L.; Cherry, Robert; ...

    2016-10-03

    Inorganic species (ash) in biomass feedstocks negatively impact thermochemical and biochemical energy conversion processes. In this work, a process simulation model is developed to model the reduction in ash content of loblolly pine logging residues using a combination of air classification and dilute-acid leaching. Various scenarios are considered, and it is found that the costs associated with discarding high-ash material from air classification are substantial. These material-loss costs can be reduced by chemically leaching the high-ash fraction obtained from air classification. The optimal leaching condition is found to be approximately 0.1 wt% sulfuric acid at 24°C. In example scenarios, total process costs in the range of $10-12/dry tonne of product are projected, resulting in removal of 11, 66, 53, and 86% of organics, total ash (inorganics), alkali and alkaline earth metals plus phosphorus (AAEM+P), and silicon, respectively. Sensitivity analyses indicate that costs associated with the loss of organic material during processing (yield losses), brine disposal, and labor have the greatest potential to impact the total processing cost.

  13. Theory and applications for optimization of every part of a photovoltaic system

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1978-01-01

    A general method is presented for quantitatively optimizing the design of every part and fabrication step of an entire photovoltaic system, based on the criterion of minimum cost/Watt for the system output power. It is shown that no element or process step can be optimized properly by considering only its own cost and performance. Moreover, a fractional performance loss at any fabrication step within the cell or array produces the same fractional increase in the cost/Watt of the entire array, but not of the full system. One general equation is found to be capable of optimizing all parts of a system, although the cell and array steps are basically different from the power-handling elements. Applications of this analysis are given to show (1) when Si wafers should be cut to increase their packing fraction; and (2) what the optimum dimensions for solar cell metallizations are. The optimum shadow fraction of the fine grid is shown to be independent of metal cost and resistivity as well as cell size. The optimum thicknesses of both the fine grid and the bus bar are substantially greater than the values in general use, and the total array cost has a major effect on these values. By analogy, this analysis is adaptable to other solar energy systems.
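
    The scaling claim can be restated compactly (the notation is assumed for illustration, not the paper's): if the finished array costs C_A and fabrication step i transmits a fraction (1 - l_i) of the potential output power P_0, then

```latex
\left.\frac{\text{cost}}{\text{Watt}}\right|_{\text{array}} = \frac{C_A}{P_0 \prod_i (1 - l_i)},
\qquad
\frac{\Delta(\text{cost/Watt})}{\text{cost/Watt}} = \frac{\Delta l_i}{1 - l_i} \approx \Delta l_i \quad \text{for small } l_i
```

    so a small fractional power loss at any single step raises the whole array's cost/Watt by the same fraction, which is why no step can be optimized in isolation.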

  14. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A cost-effective process sequence and machinery for the production of flat-plate photovoltaic modules are described. Cells were fabricated using the optimized process sequence, and a lamination procedure was likewise optimized. Insulator tapes and edge-seal materials were identified and tested, and encapsulation materials were evaluated.

  15. Optimization and planning of operating theatre activities: an original definition of pathways and process modeling.

    PubMed

    Barbagallo, Simone; Corradi, Luca; de Ville de Goyet, Jean; Iannucci, Marina; Porro, Ivan; Rosso, Nicola; Tanfani, Elena; Testi, Angela

    2015-05-17

    The Operating Room (OR) is a key resource of all major hospitals, but it also accounts for up to 40% of resource costs. Improving cost effectiveness while maintaining quality of care is a universal objective, and implies optimization of the planning and scheduling of the activities involved. This is highly challenging due to the inherently variable and unpredictable nature of surgery. Business Process Modeling Notation (BPMN 2.0) was used to represent the "OR process" (defined as the sequence of all elementary steps from "patient ready for surgery" to "patient operated upon") as a general pathway ("path"). The path was standardized as much as possible while keeping the key elements that allow the other planning steps, and the wide inherent patient-specific variability, to be addressed. The path was used to schedule OR activity, room by room and day by day, feeding the process from a waiting-list database and using a mathematical optimization model with the objective of producing an optimized plan. The OR process was defined with special attention to flows, timing, and resource involvement; standardization also defined an expected operating time for each operation. The optimization model was implemented and tested on real clinical data. Comparison of the results with the real data shows that the optimization model allows about 30% more patients to be scheduled than in actual practice and better exploits OR efficiency, increasing the average operating room utilization rate by up to 20%. Optimization of OR activity planning is essential for managing a hospital's waiting list. Optimal planning is facilitated by defining the operation as a standard pathway in which all variables are taken into account. By allowing precise scheduling, it feeds the planning process and, further upstream, the management of the waiting list in an interactive, bi-directional, dynamic process.

  16. Efficient experimental design for uncertainty reduction in gene regulatory networks.

    PubMed

    Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R

    2015-01-01

    An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design; under such a strategy, the experiments with the highest priority are suggested to be conducted first. The authors have previously proposed an optimal experimental design method based upon the objective for which the gene regulatory network is modeled, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU), which quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one that leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has performance close to that of the optimal method but at lower computational cost. The proposed approximate method also significantly outperforms the random selection policy. A MATLAB software implementation of the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.
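
    In the authors' MOCU framework (notation paraphrased here, not copied from the paper), with uncertainty class Θ carrying prior π, intervention class Ψ, and cost C_θ(ψ), the mean objective cost of uncertainty is the expected excess cost of acting robustly instead of with full knowledge:

```latex
\psi^{*}_{\theta} = \arg\min_{\psi \in \Psi} C_{\theta}(\psi), \qquad
\psi^{*}_{\Theta} = \arg\min_{\psi \in \Psi} \mathbb{E}_{\theta \sim \pi}\left[ C_{\theta}(\psi) \right], \qquad
M(\Theta) = \mathbb{E}_{\theta \sim \pi}\left[ C_{\theta}(\psi^{*}_{\Theta}) - C_{\theta}(\psi^{*}_{\theta}) \right]
```

    The optimal experiment is then the one whose outcome minimizes the expected remaining MOCU; the approximation proposed here evaluates that quantity on reduced networks.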

  17. Design the Cost Approach in Trade-Offs for Structural Components, Illustrated on the Baseline Selection of the Engine Thrust Frame of Ariane 5 ESC-B

    NASA Astrophysics Data System (ADS)

    Appolloni, L.; Juhls, A.; Rieck, U.

    2002-01-01

    Designing for value is one of the current emerging methods for design optimization, which entered the domain of aerospace engineering in the late 90's. Within designing for value, two main design philosophies exist: Design For Cost and Design To Cost. Design To Cost is the iterative redesign of a project until its content meets a given budget. Design For Cost is the conscious use of engineering process technology to reduce life-cycle cost while satisfying, and hopefully exceeding, customer demands. The key to understanding cost, and hence to reducing cost, is the ability to measure cost accurately and to allocate it appropriately to products; only then can intelligent decisions be made. Hence the need for new methods such as "Design For Value" or "Design For Competitiveness", set up with a generally multidisciplinary approach to find an optimized technical solution driven by a number of parameters that depend on the mission scenario and the customer/market needs. Very often three, and not more than five, parametric drivers are sufficient: the more variables exist, the higher the risk of finding only a sub-optimized local optimum instead of the global one, and the less robust the solution is against changes of the input parameters. Once the main optimization parameters have been identified, the system engineer has to communicate them to all design engineers, who must track these assessment variables during the entire design and decision process. The design process that led to the definition of the feasible structural concepts for the Engine Thrust Frame of the Ariane 5 Upper Cryogenic Stage ESC-B follows these design philosophies, combining a design-for-cost approach with a design-to-cost optimization loop. Ariane 5 is the first member of a family of heavy-lift launchers and aims to evolve into a family that responds to the space transportation challenges of the 21st century. New upper stages, along with modifications to the main cryogenic stage and solid boosters, will increase performance and meet the demands of a changing market. A two-step approach was decided for future developments of the launcher upper stage in order to increase the payload lift capability of Ariane 5. The first step, ESC-A, is scheduled for first launch in 2002. The later step, ESC-B, shall raise capability to 12 tons to GTO, with multiple restart capability, i.e., a re-ignitable engine. The Ariane 5 ESC-B first flight is targeted for 2006. It will be loaded with 28 metric tons of liquid oxygen and liquid hydrogen and powered by a new expander-cycle engine, Vinci. The Vinci engine will be connected to the tanks of the ESC-B stage via the structure named by its designers the ETF, or Engine Thrust Frame. In order to develop a design concept for the ETF, a trade-off was performed based on modern system engineering methodologies. This paper describes the basis of the system engineering approach in the design-to-cost process and illustrates this approach as it was applied during the trade-off for the baseline selection of the Engine Thrust Frame of Ariane 5 ESC-B.

  20. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviations during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile, and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques over an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates its solution to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
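
    As a sketch of the dual-decomposition idea, assuming quadratic time costs and a single minimum-separation constraint between consecutive arrivals (all values hypothetical): each aircraft's sub-problem then has a closed form, and the coupling prices are updated by a projected subgradient step, as each aircraft would do locally after exchanging times over ADS-B.

```python
import numpy as np

# Illustrative setup: each aircraft has a preferred arrival time and a
# quadratic time cost; consecutive arrivals must be separated by SEP.
pref = np.array([0.0, 1.0, 1.5, 4.0])    # preferred arrival times (minutes)
SEP = 2.0
lam = np.zeros(len(pref) - 1)            # prices on separation constraints
step = 0.2

for _ in range(300):
    # Local sub-problems: with cost (t - pref)^2, the Lagrangian-optimal
    # time for each aircraft has a closed form. In a distributed scheme
    # each aircraft would solve this on board and broadcast its t_i.
    g = np.zeros(len(pref))
    g[:-1] += lam                        # t_i enters constraint i with +lam_i
    g[1:] -= lam                         # and constraint i-1 with -lam_{i-1}
    t = pref - g / 2.0                   # argmin of (t - pref)^2 + g * t
    # Subgradient ascent on the dual: raise the price of violated gaps.
    slack = t[1:] - t[:-1] - SEP
    lam = np.maximum(0.0, lam - step * slack)

print("arrival times:", np.round(t, 3))
print("separations  :", np.round(t[1:] - t[:-1], 3))
```

    With a constant step size the prices converge only to a neighborhood of the optimum, which is why iterative exchanges continue until the separations settle near the required minimum.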

  1. Optimized and Automated design of Plasma Diagnostics for Additive Manufacture

    NASA Astrophysics Data System (ADS)

    Stuber, James; Quinley, Morgan; Melnik, Paul; Sieck, Paul; Smith, Trevor; Chun, Katherine; Woodruff, Simon

    2016-10-01

    Despite having mature designs, diagnostics are usually custom designed for each experiment. Most of the design can now be automated to reduce costs (engineering labor and capital cost). We present results from scripted physics modeling and parametric engineering design for common optical and mechanical components found in many plasma diagnostics, and outline the process for automated design optimization that employs scripts to communicate data from online forms through proprietary and open-source CAD and FE codes to provide a design that can be sent directly to a printer. As a demonstration of design automation, an optical beam dump, a baffle, and optical components are designed via an automated process and printed. Supported by DOE SBIR Grant DE-SC0011858.

  2. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network

    PubMed Central

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission of neural networks. To understand this issue more deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost, and the total amount of information transmission. We observed in such a network that there exists an optimal E/I synaptic current ratio at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieves its maximum at this balanced synaptic current. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain amount of energy cost associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results reveal that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. Summary: We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding. PMID:29773979

  3. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network.

    PubMed

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission of neural networks. To understand this issue more deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost, and the total amount of information transmission. We observed in such a network that there exists an optimal E/I synaptic current ratio at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieves its maximum at this balanced synaptic current. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain amount of energy cost associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results reveal that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggests that the existence of an optimal E/I ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding.

  4. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective over a given life cycle time. Given the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm can solve the HSP problem more effectively than the standard PSO algorithm. Moreover, the results also present the potential to provide useful information for decisions made in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
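
    A minimal sketch of the basic PSO update (not the paper's improved variant), minimizing a stand-in life-cycle-cost surface; the swarm size, inertia, and acceleration weights below are typical textbook values, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def lcc(x):
    """Stand-in life-cycle-cost surface (the real HSP objective would mix
    capital and operating costs); its minimum sits at (3, -2)."""
    return (x[..., 0] - 3.0) ** 2 + (x[..., 1] + 2.0) ** 2 + 5.0

N, D, ITERS = 30, 2, 200
W, C1, C2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights

x = rng.uniform(-10, 10, (N, D))          # particle positions
v = np.zeros((N, D))                      # particle velocities
pbest, pcost = x.copy(), lcc(x)           # personal bests
gbest = pbest[pcost.argmin()].copy()      # global best

for _ in range(ITERS):
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
    x = x + v
    c = lcc(x)
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    gbest = pbest[pcost.argmin()].copy()

print("minimum LCC ~", round(float(pcost.min()), 4), "at", np.round(gbest, 3))
```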

  5. On the optimal use of a slow server in two-stage queueing systems

    NASA Astrophysics Data System (ADS)

    Papachristos, Ioannis; Pandelis, Dimitrios G.

    2017-07-01

    We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs, assuming that two servers cannot collaborate on the same job and preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the downstream dedicated server should not idle, and the same is true for the upstream server when holding costs are larger upstream. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.

  6. Optimization of 18F-syntheses using 19F-reagents at tracer-level concentrations and liquid chromatography/tandem mass spectrometry analysis: Improved synthesis of [18F]MDL100907.

    PubMed

    Zhang, Xiang; Dunlow, Ryan; Blackman, Burchelle N; Swenson, Rolf E

    2018-05-15

    Traditional radiosynthetic optimization faces the challenges of high radiation exposure, cost, and inability to perform serial reactions due to tracer decay. To accelerate tracer development, we have developed a strategy to simulate radioactive 18F-syntheses by using tracer-level (nanomolar) non-radioactive 19F-reagents and LC-MS/MS analysis. The methodology was validated with fallypride synthesis under tracer-level 19F-conditions, which showed reproducible and comparable results with radiosynthesis, and proved the feasibility of this process. Using this approach, the synthesis of [18F]MDL100907 was optimized under 19F-conditions with greatly improved yield. The best conditions were successfully transferred to radiosynthesis. A radiochemical yield of 19% to 22% was achieved with radiochemical purity >99% and molar activity of 38.8 to 53.6 GBq/μmol (n = 3). The tracer-level 19F-approach provides a high-throughput and cost-effective process to optimize radiosynthesis with reduced radiation exposure. This new method allows medicinal and synthetic chemists to optimize radiolabeling conditions without the need to use radioactivity. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Maintenance service contract model for heavy equipment in mining industry using principal agent theory

    NASA Astrophysics Data System (ADS)

    Pakpahan, Eka K. A.; Iskandar, Bermawi P.

    2015-12-01

    The mining industry is characterized by high operational revenue, and hence high availability of the heavy equipment used in mining is a critical factor in meeting the revenue target. To maintain high availability of the heavy equipment, the equipment's owner hires an agent to perform maintenance actions. A contract is then used to control the relationship between the two parties involved. Traditional contracts such as fixed-price, cost-plus, or penalty-based contracts are unable to push the agent's performance beyond the target, and this in turn leads to a sub-optimal result (revenue). This research deals with designing maintenance contract compensation schemes. The scheme should induce the agent to select the highest possible maintenance effort level, thereby pushing the agent's performance and achieving maximum utility for both parties involved. Principal-agent theory is used as the modeling approach due to its ability to simultaneously model the owner's and the agent's decision-making processes. The compensation schemes considered in this research include fixed price, cost sharing, and revenue sharing. The optimal decision is obtained using a numerical method. The results show that if both parties are risk neutral, then there are infinitely many combinations of fixed price, cost sharing, and revenue sharing that produce the same optimal solution. A combination of fixed-price and cost-sharing contracts results in the optimal solution when the agent is risk averse, while the optimal combination of fixed price and revenue sharing is obtained when the owner is risk averse. When both parties are risk averse, the optimal compensation scheme is a combination of fixed price, cost sharing, and revenue sharing.
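
    A stylized sketch of the numerical approach, far simpler than the paper's model and assuming risk-neutral parties: the owner grid-searches over (fixed fee, cost share, revenue share), anticipating the agent's best-response effort and checking the participation constraint. The revenue curve, effort cost, and grids are all invented for illustration.

```python
import numpy as np

# Candidate maintenance effort levels the agent can choose from.
efforts = np.linspace(0.1, 1.0, 10)

def agent_utility(F, alpha, beta, e):
    revenue = 100.0 * (1 - np.exp(-2 * e))   # availability-driven revenue
    cost = 20.0 * e ** 2                     # agent's maintenance cost
    # Agent keeps the fixed fee F plus a revenue share beta, and is
    # reimbursed a fraction alpha of its cost by the owner.
    return F + beta * revenue - (1 - alpha) * cost

def owner_profit(F, alpha, beta):
    # Incentive compatibility: agent picks its best-response effort.
    u = [agent_utility(F, alpha, beta, e) for e in efforts]
    if max(u) < 0:                           # participation constraint
        return -np.inf
    e = efforts[int(np.argmax(u))]
    revenue = 100.0 * (1 - np.exp(-2 * e))
    return (1 - beta) * revenue - F - alpha * 20.0 * e ** 2

best = max(((F, a, b) for F in np.linspace(0, 10, 11)
            for a in np.linspace(0, 1, 11)
            for b in np.linspace(0, 1, 11)),
           key=lambda s: owner_profit(*s))
print("best scheme (F, cost share, revenue share):", best)
print("owner profit:", round(owner_profit(*best), 2))
```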

  8. The Study on the Optimization of Container Multimodal Transport Business Process in Shandong

    NASA Astrophysics Data System (ADS)

    Wang, Fengmei; Gong, Xiaoyi; Ni, Yingying; Zhan, Jun; Che, Huiping

    2018-06-01

    Shandong is a coastal province with good location advantages. As a hub and transhipment port for international trade goods, Shandong's demand for multimodal transport is especially urgent. By selecting suitable inland (dry) ports and multimodal transport carriers, the efficiency of multimodal transport can be improved and logistics time saved, thus reducing logistics cost. Goods routed through Shandong can reach the central region of China as well as the remote western areas. This paper puts forward an optimization scheme for the business process of container multimodal transport. The optimization of the freight forwarding business process is analyzed, a multimodal transport model for Shandong is designed, and finally an optimal approach to multimodal transport in Shandong is put forward.

  9. Optimization of Insertion Cost for Transfer Trajectories to Libration Point Orbits

    NASA Technical Reports Server (NTRS)

    Howell, K. C.; Wilson, R. S.; Lo, M. W.

    1999-01-01

    The objective of this work is the development of efficient techniques to optimize the cost associated with transfer trajectories to libration point orbits in the Sun-Earth-Moon four-body problem, which may include lunar gravity assists. Initially, dynamical systems theory is used to determine invariant manifolds associated with the desired libration point orbit. These manifolds are employed to produce an initial approximation to the transfer trajectory. Specific trajectory requirements, such as transfer injection constraints, inclusion of phasing loops, and targeting of a specified state on the manifold, are then incorporated into the design of the transfer trajectory. A two-level differential corrections process is used to produce a fully continuous trajectory that satisfies the design constraints and includes appropriate lunar and solar gravitational models. Based on this methodology, and using the manifold structure from dynamical systems theory, a technique is presented to optimize the cost associated with insertion onto a specified libration point orbit.

  10. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is currently still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and aims to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
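
    The core Monte Carlo step is short: sample load and resistance from their assumed distributions and count the fraction of failures. The distributions and parameters below are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1_000_000

# Illustrative load-resistance model: stress S and strength R are uncertain;
# a mechanical failure occurs whenever the load exceeds the resistance.
S = rng.lognormal(mean=np.log(300.0), sigma=0.15, size=N)   # stress, MPa
R = rng.normal(loc=450.0, scale=40.0, size=N)               # strength, MPa

pf = np.mean(S > R)                      # estimated failure probability
se = np.sqrt(pf * (1 - pf) / N)          # Monte Carlo standard error
print(f"failure probability: {pf:.2e} +/- {se:.1e}")

# With a unit failure cost C_f, the risk contribution to life cycle cost is
# pf * C_f, which can then feed a reliability-versus-cost trade-off.
print("expected failure cost at C_f = 1e6:", pf * 1e6)
```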

  11. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563

  12. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.
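
    The partial least squares step described above can be sketched in a few lines. This is a toy surrogate, not the authors' model: the five process inputs, the linear ground truth, and the noise level are invented, with scikit-learn's PLSRegression standing in for the predictive-model fit on simulator-generated data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

# Hypothetical CCPP data: process inputs (oven temperature, line speed,
# coating viscosity, ...) mapped to film thickness. A linear ground truth
# plus noise stands in for data generated by the mechanistic model.
X = rng.normal(size=(200, 5))
true_w = np.array([0.8, -0.4, 1.2, 0.0, 0.3])
thickness = X @ true_w + 0.1 * rng.normal(size=200)

pls = PLSRegression(n_components=3)      # fewer latent components than inputs
pls.fit(X, thickness)

pred = pls.predict(X[:5]).ravel()
print("predicted film thickness:", np.round(pred, 3))
print("R^2 on training data    :", round(pls.score(X, thickness), 4))
```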

  13. Operation costs and pollutant emissions reduction by definition of new collection scheduling and optimization of MSW collection routes using GIS. The case study of Barreiro, Portugal.

    PubMed

    Zsigraiova, Zdena; Semiao, Viriato; Beijoco, Filipa

    2013-04-01

    This work proposes an innovative methodology for the reduction of the operating costs and pollutant emissions involved in waste collection and transportation. Its innovative feature lies in combining vehicle route optimization with that of waste collection scheduling. The latter uses historical data on the filling rate of each container individually to establish the daily circuits of collection points to be visited, which is more realistic than the usual assumption of a single average fill-up rate common to all the system containers. Moreover, this allows for planning the collection schedule ahead of time, which permits better system management. The optimization process for the routes to be travelled makes recourse to Geographical Information Systems (GISs) and uses two optimization criteria interchangeably: total time spent and distance travelled. Furthermore, rather than using average values, the relevant parameters influencing fuel consumption and pollutant emissions, such as vehicle speed on different roads and loading weight, are taken into consideration. The established methodology is applied to the glass-waste collection and transportation system of Amarsul S.A., in Barreiro. Moreover, to isolate the influence of the dynamic load on fuel consumption and pollutant emissions, a sensitivity analysis of the vehicle loading process is performed. For that, two hypothetical scenarios are tested: one with the collected volume increasing exponentially along the collection path, the other assuming that the collected volume decreases exponentially along the same path. The results evidence unquestionably beneficial impacts of the optimization on both the operating costs (labor, vehicle maintenance, and fuel consumption) and pollutant emissions, regardless of the optimization criterion used. Nonetheless, the impact is particularly relevant when optimizing for time, yielding substantial improvements to the existing system: potential reductions of 62% in total time spent, 43% in fuel consumption, and 40% in emitted pollutants. This results in total cost savings of 57%, labor being the greatest contributor, representing over €11,000 per year for the two vehicles collecting glass-waste. Moreover, it is shown herein that the dynamic loading process of the collection vehicle impacts both fuel consumption and pollutant emissions. Copyright © 2012 Elsevier Ltd. All rights reserved.
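
    Stripped of the GIS layer, the core of such route optimization is a weighted shortest-path computation under each criterion. A minimal sketch, assuming a toy road graph with hypothetical per-edge travel times and distances (networkx stands in for the GIS engine, and the visiting order of collection points is fixed here for simplicity):

```python
import networkx as nx

# Toy road network: directed edges carry travel time (min) and distance (km).
G = nx.DiGraph()
edges = [("depot", "A", 5, 2.0), ("A", "B", 4, 1.5), ("depot", "B", 12, 3.0),
         ("B", "C", 6, 2.5), ("A", "C", 11, 4.0), ("C", "depot", 7, 3.0)]
for u, v, minutes, km in edges:
    G.add_edge(u, v, time=minutes, dist=km)

stops = ["depot", "A", "B", "C", "depot"]    # collection circuit, fixed order
for criterion in ("time", "dist"):           # the two optimization criteria
    total = sum(nx.shortest_path_length(G, u, v, weight=criterion)
                for u, v in zip(stops, stops[1:]))
    print(f"{criterion}-optimal route cost: {total}")
```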

  14. ON CONTINUOUS-REVIEW (S-1,S) INVENTORY POLICIES WITH STATE-DEPENDENT LEADTIMES,

    DTIC Science & Technology

    (*INVENTORY CONTROL, *REPLACEMENT THEORY), MATHEMATICAL MODELS, LEAD TIME, MANAGEMENT ENGINEERING, DISTRIBUTION FUNCTIONS, PROBABILITY, QUEUEING THEORY, COSTS, OPTIMIZATION, STATISTICAL PROCESSES, DIFFERENCE EQUATIONS

  15. Optimization of solar cell contacts by system cost-per-watt minimization

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.
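
    The flavor of this optimization can be sketched with a toy cost-per-watt model: shading loss grows with the shadow fraction, series-resistance loss falls with shadow fraction and metal thickness, and metal cost grows with deposited volume. All coefficients below are placeholders, not the paper's values:

```python
import numpy as np
from scipy.optimize import minimize

METAL_COST = 40.0   # $/cm^3 of grid metal, per unit cell area (illustrative)
BASE_COST = 1.0     # $/W-equivalent of the rest of the system (illustrative)

def cost_per_watt(x):
    f, t = x                         # shadow fraction, metal thickness (cm)
    if f <= 0 or t <= 0:
        return 1e9                   # keep the search in the physical region
    power = 1.0 - f - 2e-6 / (f * t) # shading + series-resistance losses
    if power <= 0:
        return 1e9
    return (BASE_COST + METAL_COST * f * t) / power

res = minimize(cost_per_watt, x0=[0.05, 0.002], method="Nelder-Mead")
f_opt, t_opt = res.x
print(f"optimal shadow fraction: {f_opt:.3f}, thickness: {t_opt * 1e4:.1f} um")
print(f"relative cost per watt : {res.fun:.4f}")
```

    The system-level point of the paper survives even in this toy: because the denominator is delivered power, the optimum trades metal cost against resistive and shading losses rather than maximizing cell efficiency alone.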

  16. Pavement maintenance optimization model using Markov Decision Processes

    NASA Astrophysics Data System (ADS)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model, and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
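
    The dual linear programming route mentioned above can be illustrated on a toy three-state pavement model: the LP runs over stationary state-action frequencies, whose flow-balance and normalization constraints form the standard average-cost MDP dual. The transition matrices and costs below are invented for illustration, not the Victorian data:

```python
import numpy as np
from scipy.optimize import linprog

# Toy pavement model: 3 condition states (good, fair, poor), 2 actions
# (0 = do nothing, 1 = maintain). P[a, s, s'] and c[a, s] are illustrative.
P = np.array([[[0.7, 0.3, 0.0],      # do nothing: pavement deteriorates
               [0.0, 0.6, 0.4],
               [0.0, 0.0, 1.0]],
              [[1.0, 0.0, 0.0],      # maintain: condition restored or held
               [0.8, 0.2, 0.0],
               [0.1, 0.6, 0.3]]])
c = np.array([[0.0, 2.0, 10.0],      # user/agency cost of doing nothing
              [1.0, 4.0,  9.0]])     # maintenance cost by state

nS, nA = 3, 2
# Variables: stationary state-action frequencies x[s, a], flattened.
cost = np.array([c[a, s] for s in range(nS) for a in range(nA)])
A_eq, b_eq = [], []
for s2 in range(nS):                 # flow balance for each state
    row = np.zeros(nS * nA)
    for s in range(nS):
        for a in range(nA):
            row[s * nA + a] = (s == s2) - P[a, s, s2]
    A_eq.append(row)
    b_eq.append(0.0)
A_eq.append(np.ones(nS * nA))        # frequencies sum to 1
b_eq.append(1.0)

res = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(nS, nA)
print("average cost per period :", round(res.fun, 4))
print("policy (action per state):", x.argmax(axis=1))
```

    Budget or condition constraints enter this formulation directly as extra linear inequalities on the frequencies, which is the attraction of the LP form for constrained pavement policy.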

  17. Development of cost-effective surfactant flooding technology. Quarterly report, January 1, 1994--March 31, 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Sepehrnoori, K.

    1994-09-01

    The objective of this research is to develop cost-effective surfactant flooding technology by using surfactant simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. The goal of Task 2 is to understand and generalize the impact of both process and reservoir characteristics on the optimal design of surfactant flooding. We have studied the effect of process parameters such as salinity gradient, surfactant adsorption, surfactant concentration, surfactant slug size, pH, polymer concentration, and well constraints on surfactant floods. In this report, we show three-dimensional field-scale simulation results to illustrate the impact of one important design parameter, the salinity gradient. Although the use of a salinity gradient to improve the efficiency and robustness of surfactant flooding has been studied and applied for many years, this is the first time that we have evaluated it using stochastic simulations rather than simulations using the traditional layered reservoir description. The surfactant flooding simulations were performed using The University of Texas chemical flooding simulator called UTCHEM.

  18. Array automated assembly task, phase 2. Low cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Rhee, S. S.; Jones, G. T.; Allison, K. T.

    1978-01-01

    Several modifications instituted in the wafer surface preparation process served to significantly reduce the process cost to 1.55 cents per peak watt in 1975 cents. Performance verification tests of a laser scanning system showed a limited capability to detect hidden cracks or defects, but with potential equipment modifications this cost-effective system could be rendered suitable for such applications. Installation of an electroless nickel plating system was completed, along with optimization of the wafer plating process. The solder coating and flux removal process verification test was completed. An optimum temperature range of 500-550 C was found to produce uniform solder coating, with the restriction that a modified dipping procedure be utilized. Finally, the construction of the spray-on dopant equipment was completed.

  19. An optimization method for condition based maintenance of aircraft fleet considering prognostics uncertainty.

    PubMed

    Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie

    2014-01-01

    An optimization method for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial optimization problem of single-aircraft strategies. The remaining useful life (RUL) distribution of the key line-replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method for the costs and risks of a mission, based on the health status matrix and maintenance matrix, is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that it can realize optimization and control of an aircraft fleet oriented to mission success.

  20. An Optimization Method for Condition Based Maintenance of Aircraft Fleet Considering Prognostics Uncertainty

    PubMed Central

    Chen, Yiran; Sun, Bo; Li, Songjie

    2014-01-01

    An optimization method for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial optimization problem of single-aircraft strategies. The remaining useful life (RUL) distribution of the key line-replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method for the costs and risks of a mission, based on the health status matrix and maintenance matrix, is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that it can realize optimization and control of an aircraft fleet oriented to mission success. PMID:24892046
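
    A minimal sketch of the genetic-algorithm layer (plain GA, not the paper's improved variant), assuming a toy fleet where each gene decides whether an aircraft is grounded for maintenance or dispatched; the failure probabilities stand in for RUL-derived values, and the cost weights and dispatch requirement are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy fleet of 10 aircraft: gene = 1 grounds the aircraft for maintenance,
# gene = 0 dispatches it. Failure probabilities mimic RUL-derived values.
p_fail = rng.uniform(0.01, 0.30, size=10)
MAINT_COST, RISK_WEIGHT, MIN_DISPATCH = 1.0, 50.0, 6

def fitness(pop):
    """Lower is better: maintenance cost plus weighted mission risk,
    with a penalty when fewer than MIN_DISPATCH aircraft fly."""
    maint = pop.sum(axis=1) * MAINT_COST
    risk = ((1 - pop) * p_fail).sum(axis=1) * RISK_WEIGHT
    short = np.maximum(0, MIN_DISPATCH - (1 - pop).sum(axis=1))
    return maint + risk + 1e3 * short

pop = rng.integers(0, 2, size=(40, 10))
for _ in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:20]]                 # truncation selection
    cut = rng.integers(1, 9, size=20)                 # one-point crossover
    kids = np.array([np.concatenate([parents[i, :cut[i]],
                                     parents[(i + 1) % 20, cut[i]:]])
                     for i in range(20)])
    flip = rng.random(kids.shape) < 0.05              # bit-flip mutation
    kids = np.where(flip, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = pop[np.argmin(fitness(pop))]
print("ground-for-maintenance mask:", best)
print("cost:", round(float(fitness(best[None])[0]), 3))
```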

  1. Noise tolerant illumination optimization applied to display devices

    NASA Astrophysics Data System (ADS)

    Cassarly, William J.; Irving, Bruce

    2005-02-01

    Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance but those predictions have statistical uncertainty, the use of noise tolerant optimization algorithms is important. The use of noise tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines a mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.

  2. Rocket Design for the Future

    NASA Technical Reports Server (NTRS)

    Follett, William W.; Rajagopal, Raj

    2001-01-01

    The focus of the AA MDO team is to reduce product development cost through the capture and automation of best design and analysis practices and through increasing the availability of low-cost, high-fidelity analysis. Implementation of robust designs reduces costs associated with the Test-Fail-Fix cycle. RD is currently focusing on several technologies to improve the design process, including optimization and robust design, expert and rule-based systems, and collaborative technologies.

  3. A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network

    NASA Astrophysics Data System (ADS)

    Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.

    The location of a CDC within a supply chain network is of high concern these days. Existing approaches to CDC location have mainly been based on manual spreadsheet calculations aimed at achieving the minimum logistics cost. This study focuses on the development of a new processing algorithm to overcome the limits of present methods, and on examining the validity of this algorithm through a case study. The algorithm suggested by this study is based on the principle of optimization on the directed GRAPH of the SCM model, and utilizes the traditionally introduced MST and shortest-path-finding methods, among others. The results of this study help to assess the suitability of the present SCM network and can serve as criteria in the decision-making process for building an optimal SCM network for future demand.

  4. Process change evaluation framework for allogeneic cell therapies: impact on drug development and commercialization.

    PubMed

    Hassan, Sally; Huang, Hsini; Warren, Kim; Mahdavi, Behzad; Smith, David; Jong, Simcha; Farid, Suzanne S

    2016-04-01

    Some allogeneic cell therapies requiring a high dose of cells for large indication groups demand a change in cell expansion technology, from planar units to microcarriers in single-use bioreactors for the market phase. The aim was to model the optimal timing for making this change. A development lifecycle cash flow framework was created to examine the implications of process changes to microcarrier cultures at different stages of a cell therapy's lifecycle. The analysis performed under assumptions used in the framework predicted that making this switch earlier in development is optimal from a total expected out-of-pocket cost perspective. From a risk-adjusted net present value view, switching at Phase I is economically competitive but a post-approval switch can offer the highest risk-adjusted net present value as the cost of switching is offset by initial market penetration with planar technologies. The framework can facilitate early decision-making during process development.

  5. [Process-oriented cost calculation in interventional radiology. A case study].

    PubMed

    Mahnken, A H; Bruners, P; Günther, R W; Rasche, C

    2012-01-01

    Currently used costing methods such as cost centre accounting do not sufficiently reflect the process-based resource utilization in medicine. The goal of this study was to establish a process-oriented cost assessment of percutaneous radiofrequency (RF) ablation of liver and lung metastases. In each of 15 patients a detailed task analysis of the primary process of hepatic and pulmonary RF ablation was performed. Based on these data a dedicated cost calculation model was developed for each primary process. The costs of each process were computed and compared with the revenue for in-patients according to the German diagnosis-related groups (DRG) system 2010. The RF ablation of liver metastases in patients without relevant comorbidities and a low patient complexity level results in a loss of EUR 588.44, whereas the treatment of patients with a higher complexity level yields an acceptable profit. The treatment of pulmonary metastases is profitable even in cases of additional expenses due to complications. Process-oriented costing provides relevant information that is needed for understanding the economic impact of treatment decisions. It is well suited as a starting point for economically driven process optimization and reengineering. Under the terms of the German DRG 2010 system percutaneous RF ablation of lung metastases is economically reasonable, while RF ablation of liver metastases in cases of low patient complexity levels does not cover the costs.

  6. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and, more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable, since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  7. Costing improvement of remanufacturing crankshaft by integrating Mahalanobis-Taguchi System and Activity based Costing

    NASA Astrophysics Data System (ADS)

    Abu, M. Y.; Nor, E. E. Mohd; Rahman, M. S. Abd

    2018-04-01

    Integration between the quality and costing systems is crucial in order to achieve accurate product costs and profits. In current practice, most remanufacturers still lack optimization during the remanufacturing process, which leads to the wrong variables being fed into the costing system. Meanwhile, the traditional cost accounting being practiced distorts unit costs, which leads to inaccurate product costs. The aim of this work is to identify the critical and non-critical variables of the remanufacturing process using the Mahalanobis-Taguchi System and simultaneously estimate the cost using the Activity Based Costing method. An orthogonal array was applied to indicate the contribution of variables in the factorial effect graph, and the critical variables were considered together with the overhead costs of the activities that actually demand them. This work improves the quality inspection together with the costing system to produce accurate profitability information. As a result, the cost per unit of a remanufactured crankshaft for the MAN engine model with 5 critical crankpins is MYR609.50, while that for the Detroit engine model with 4 critical crankpins is MYR1254.80. The significance of the output is demonstrated through its green contribution: reducing the re-melting of damaged parts ensures a consistent benefit from returned cores.
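
    The Mahalanobis-Taguchi step above reduces to measuring each unit's scaled Mahalanobis distance from a reference "normal" group. A minimal sketch with invented crankpin measurements; the variables, reference statistics, and the decision threshold are all placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical crankpin measurements (diameter, ovality, hardness) for a
# healthy reference group; the MTS "normal space" is built from these.
normal = rng.normal(loc=[50.0, 0.02, 60.0], scale=[0.05, 0.005, 1.0],
                    size=(100, 3))
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis_sq(x):
    """Squared Mahalanobis distance from the normal space, divided by the
    number of variables, as is conventional in MTS."""
    d = x - mu
    return float(d @ cov_inv @ d) / normal.shape[1]

incoming = np.array([50.12, 0.05, 57.0])   # a suspect crankpin
md = mahalanobis_sq(incoming)
# The threshold of 4 is an arbitrary illustrative cut-off.
verdict = "critical (repair cost applies)" if md > 4 else "non-critical"
print("MD =", round(md, 2), "->", verdict)
```

    In the integrated scheme, units flagged critical are the ones whose activity-based repair costs (re-grinding, re-melting) are charged, which is how the quality screen feeds the costing system.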

  8. Piezoresistive Cantilever Performance—Part II: Optimization

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Rastegar, Ali J.; Pruitt, Beth L.

    2010-01-01

    Piezoresistive silicon cantilevers fabricated by ion implantation are frequently used for force, displacement, and chemical sensors due to their low cost and electronic readout. However, the design of piezoresistive cantilevers is not a straightforward problem due to coupling between the design parameters, constraints, process conditions, and performance. We systematically analyzed the effect of design and process parameters on force resolution and then developed an optimization approach to improve force resolution while satisfying various design constraints using simulation results. The combined simulation and optimization approach is extensible to other doping methods beyond ion implantation in principle. The optimization results were validated by fabricating cantilevers with the optimized conditions and characterizing their performance. The measurement results demonstrate that the analytical model accurately predicts force and displacement resolution, and sensitivity and noise tradeoff in optimal cantilever performance. We also performed a comparison between our optimization technique and existing models and demonstrated eight times improvement in force resolution over simplified models. PMID:20333323

  9. Deterministic Design Optimization of Structures in OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open-source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were compared and are reported here.

  10. Optimal routing for efficient municipal solid waste transportation by using ArcGIS application in Chennai, India.

    PubMed

    Sanjeevi, V; Shahabudeen, P

    2016-01-01

    Worldwide, about US$410 billion is spent every year to manage four billion tonnes of municipal solid wastes (MSW). Transport cost alone constitutes more than 50% of the total expenditure on solid waste management (SWM) in major cities of the developed world and the collection and transport cost is about 85% in the developing world. There is a need to improve the ability of the city administrators to manage the municipal solid wastes with least cost. Since 2000, new technologies such as geographical information system (GIS) and related optimization software have been used to optimize the haul route distances. The city limits of Chennai were extended from 175 to 426 km(2) in 2011, leading to sub-optimum levels in solid waste transportation of 4840 tonnes per day. After developing a spatial database for the whole of Chennai with 200 wards, the route optimization procedures have been run for the transport of solid wastes from 13 wards (generating nodes) to one transfer station (intermediary before landfill), using ArcGIS. The optimization process reduced the distances travelled by 9.93%. The annual total cost incurred for this segment alone is Indian Rupees (INR) 226.1 million. Savings in terms of time taken for both the current and shortest paths have also been computed, considering traffic conditions. The overall savings are thus very meaningful and call for optimization of the haul routes for the entire Chennai. © The Author(s) 2015.

  11. Inclusion of tank configurations as a variable in the cost optimization of branched piped-water networks

    NASA Astrophysics Data System (ADS)

    Hooda, Nikhil; Damani, Om

    2017-06-01

    The classic problem of the capital cost optimization of branched piped networks consists of choosing pipe diameters for each pipe in the network from a discrete set of commercially available pipe diameters. Each pipe in the network can consist of multiple segments of differing diameters. Water networks also contain intermediate tanks that act as buffers between the incoming flow from the primary source and the outgoing flow to the demand nodes. The network from the primary source to the tanks is called the primary network, and the network from the tanks to the demand nodes is called the secondary network. During the design stage, the primary and secondary networks are optimized separately, with the tanks acting as demand nodes for the primary network. Typically, the choice of tank locations, their elevations, and the set of demand nodes to be served by different tanks is made manually, in an ad hoc fashion, before any optimization is done. It is therefore desirable to include this tank configuration choice in the cost optimization process itself. In this work, we explain why the choice of tank configuration is important to the design of a network and describe an integer linear program model that integrates the tank configuration into the standard pipe diameter selection problem. In order to aid the designers of piped-water networks, the improved cost optimization formulation is incorporated into our existing network design system called JalTantra.
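
    For a single link, the underlying diameter-selection problem is a small linear program: choose how many metres of each commercial diameter to use so that the total length is met and the head loss stays within budget. The sketch below uses illustrative costs and head-loss gradients; the full formulation in the paper additionally carries integer tank-configuration variables on top of this:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance for one 1000 m branch that must deliver its design flow
# with at most 12 m of head loss. Each commercial diameter has a cost per
# metre and a precomputed head-loss gradient (all values illustrative).
cost_per_m = np.array([60.0, 85.0, 120.0])       # three candidate diameters
headloss_per_m = np.array([0.020, 0.009, 0.004]) # m of head lost per metre
LENGTH, MAX_HEADLOSS = 1000.0, 12.0

# Variables: metres of the link built at each candidate diameter.
res = linprog(c=cost_per_m,
              A_ub=[headloss_per_m], b_ub=[MAX_HEADLOSS],
              A_eq=[np.ones(3)], b_eq=[LENGTH],
              bounds=(0, None))
print("segment lengths (m):", np.round(res.x, 1))
print("capital cost       :", round(res.fun, 0))
```

    The LP structure explains why at most two adjacent diameters appear in an optimal link: the solution sits on a face of the feasible polytope defined by the length and head-loss constraints.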

  12. Simulation and optimization of pressure swing adsorption systems using reduced-order modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, A.; Biegler, L.; Zitney, S.

    2009-01-01

    Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high-purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each of them. A significant reduction in the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times, and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes.
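
    The POD step itself is compact: collect state snapshots over a cycle, take an SVD, and keep the leading modes. The sketch below uses a synthetic two-front snapshot matrix in place of the PSA simulator output, and the 99.99% energy criterion is an illustrative truncation rule, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Snapshot matrix: columns are spatial profiles of (say) bed concentration
# saved at successive times during one cycle. Synthetic data here: two
# travelling fronts plus noise, so the state is nearly low-rank.
z = np.linspace(0.0, 1.0, 400)                  # axial coordinate
t = np.linspace(0.0, 1.0, 80)                   # sample times
snaps = np.array([np.tanh(20 * (z - 0.3 * ti)) +
                  0.5 * np.tanh(15 * (z - 0.6 * ti)) for ti in t])
X = snaps.T + 0.01 * rng.normal(size=(400, 80)) # (space, time) snapshots

U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1    # modes for 99.99% energy
print("POD modes retained:", r, "of", len(s))

Phi = U[:, :r]                 # POD basis (the ROM evolves r coefficients)
a = Phi.T @ X                  # reduced coordinates, shape (r, n_snapshots)
err = np.linalg.norm(X - Phi @ a) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.2e}")
```

    In the full method, the governing PDEs are then Galerkin-projected onto Phi, which is what collapses the spatially discretized DAE system to a handful of modal coefficients per operating step.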

  13. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework.

    PubMed

    Yang, Guoxiang; Best, Elly P H

    2015-09-15

    Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
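
    The multi-objective core of such a framework is the non-dominated (Pareto) filter over candidate BMP plans. A minimal sketch with invented cost and remaining-TN-load values for 500 hypothetical plans (the real framework would generate these from the siting and loading models):

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical candidate BMP plans: each has a total cost and a remaining
# TN load after implementation; both objectives are minimized.
cost = rng.uniform(1e5, 1e6, 500)
tn_load = 1000.0 - 600.0 * (cost / 1e6) ** 0.5 + rng.normal(0, 30, 500)

def pareto_mask(objs):
    """True for non-dominated points when every objective is minimized."""
    mask = np.ones(objs.shape[0], bool)
    for i in range(objs.shape[0]):
        if mask[i]:
            dominated = (np.all(objs <= objs[i], axis=1) &
                         np.any(objs < objs[i], axis=1))
            if dominated.any():
                mask[i] = False
    return mask

objs = np.column_stack([cost, tn_load])
front = objs[pareto_mask(objs)]
front = front[front[:, 0].argsort()]     # trade-off curve, cheapest first
print(f"{len(front)} non-dominated plans; cheapest on the front:")
print(f"  cost = {front[0, 0]:,.0f}, TN load = {front[0, 1]:.1f} kg/yr")
```

    A load-reduction target such as "20% below baseline" then becomes a vertical cut through this trade-off curve: the cheapest plan to its left is the cost-effective solution.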

  14. Ship Maintenance Processes with Collaborative Product Lifecycle Management and 3D Terrestrial Laser Scanning Tools: Reducing Costs and Increasing Productivity

    DTIC Science & Technology

    2011-09-20

    optimal portfolio point on the efficient frontier, for example, Portfolio B on the chart in Figure A1. Then, by subsequently changing some of the ... optimized portfolio controlling for risk using the IRM methodology and tool suite. Results indicate that both rapid and incremental implementation...Results of the KVA and SD scenario analysis provided the financial information required to forecast an optimized

  15. Stirling heat pump external heat systems - An appliance perspective

    NASA Astrophysics Data System (ADS)

    Vasilakis, Andrew D.; Thomas, John F.

    A major issue facing the Stirling Engine Heat Pump is system cost, and, in particular, the cost of the External Heat System (EHS). The need for high temperature at the heater head (600 C to 700 C) results in low combustion system efficiencies unless efficient heat recovery is employed. The balance between energy efficiency and use of costly high temperature materials is critical to design and cost optimization. Blower power consumption and NO(x) emissions are also important. A new approach to the design and cost optimization of the EHS was taken by viewing the system from a natural gas-fired appliance perspective. To develop a design acceptable to gas industry requirements, American National Standards Institute (ANSI) code considerations were incorporated into the design process and material selections. A parametric engineering design and cost model was developed to perform the analysis, including the impact of design on NO(x) emissions. Analysis results and recommended EHS design and material choices are given.

  16. Stirling heat pump external heat systems: An appliance perspective

    NASA Astrophysics Data System (ADS)

    Vasilakis, A. D.; Thomas, J. F.

    1992-08-01

    A major issue facing the Stirling Engine Heat Pump is system cost, and, in particular, the cost of the External Heat System (EHS). The need for high temperature at the heater head (600 C to 700 C) results in low combustion system efficiencies unless efficient heat recovery is employed. The balance between energy efficiency and use of costly high temperature materials is critical to design and cost optimization. Blower power consumption and NO(x) emissions are also important. A new approach to the design and cost optimization of the EHS system was taken by viewing the system from a natural gas-fired appliance perspective. To develop a design acceptable to gas industry requirements, American National Standards Institute (ANSI) code considerations were incorporated into the design process and material selections. A parametric engineering design and cost model was developed to perform the analysis, including the impact of design on NO(x) emissions. Analysis results and recommended EHS design and material choices are given.

  17. AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT

    NASA Astrophysics Data System (ADS)

    Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi

    In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life cycle cost risk, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology to revise these plans based upon monitoring data using Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.

  18. The option value of delay in health technology assessment.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2008-01-01

    Processes of health technology assessment (HTA) inform decisions under uncertainty about whether to invest in new technologies based on evidence of incremental effects, incremental cost, and incremental net monetary benefit (INMB). An option value to delaying such decisions to wait for further evidence is suggested in the usual case of interest, in which the prior distribution of INMB is positive but uncertain. Methods for estimating the option value of delaying decisions to invest have previously been developed for investments that are irreversible with an uncertain payoff over time, where information is assumed fixed. However, in HTA, decision uncertainty relates to information (evidence) on the distribution of INMB. This article demonstrates that the option value of delaying decisions to allow collection of further evidence can be estimated as the expected value of sample information (EVSI). For irreversible decisions, delay and trial (DT) is demonstrated to be preferred to adopt and no trial (AN) when the EVSI exceeds the expected costs of information, including the expected opportunity costs of not treating patients with the new therapy. For reversible decisions, adopt and trial (AT) becomes a potentially optimal strategy, but costs of reversal are shown to reduce the EVSI of this strategy due to both a lower probability of reversal being optimal and lower payoffs when reversal is optimal. Hence, decision makers generally face joint research and reimbursement decisions (AN, DT, and AT), with the optimal choice dependent on the costs of reversal as well as the opportunity costs of delay and the distribution of prior INMB.
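
    Under a normal prior on INMB and a normally distributed trial mean, the EVSI in the argument above can be estimated by simple preposterior Monte Carlo. The sketch below is illustrative, not the authors' model; the prior, trial size, and population figures are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative priors: per-patient INMB ~ Normal(mu0, sd0^2); a trial of
# n patients reports a sample mean with sampling sd sigma / sqrt(n).
mu0, sd0 = 500.0, 800.0        # positive but uncertain prior mean INMB
sigma, n = 4000.0, 200         # per-patient sd and proposed trial size
POP = 10_000                   # patients affected by the decision

# Value of deciding on current information: adopt iff prior mean > 0.
v_now = POP * max(mu0, 0.0)

# Preposterior analysis: simulate trial outcomes, update, decide again.
M = 200_000
theta = rng.normal(mu0, sd0, M)                        # true INMB draws
xbar = theta + rng.normal(0, sigma / np.sqrt(n), M)    # simulated results
w = sd0 ** 2 / (sd0 ** 2 + sigma ** 2 / n)             # shrinkage weight
post_mean = mu0 + w * (xbar - mu0)                     # normal-normal update
v_trial = POP * np.maximum(post_mean, 0.0).mean()

evsi = v_trial - v_now
print(f"EVSI of the {n}-patient trial: {evsi:,.0f}")
print("prefer delay-and-trial iff EVSI exceeds the expected costs of",
      "information, including the opportunity cost of delayed adoption")
```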

  19. Cost minimizing of cutting process for CNC thermal and water-jet machines

    NASA Astrophysics Data System (ADS)

    Tavaeva, Anastasia; Kurennov, Dmitry

    2015-11-01

    This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy with which the objective-function parameters of the optimization problem are calculated is investigated. The paper shows that the working tool path speed is not constant; it depends on several parameters that are described in the paper. Relations are presented for the working tool path speed as a function of the number of NC program frames, the length of straight cuts, and the part configuration. Based on these results, correction coefficients for the working tool speed are defined. Additionally, the optimization problem may be solved using a mathematical model that takes into account the restrictions specific to thermal cutting (choice of piercing and exit tool points, precedence conditions, thermal deformations). The second part of the paper considers non-standard cutting techniques, which can reduce cutting cost and time compared with standard cutting techniques, and examines the effectiveness of their application. Future research directions are indicated at the end of the paper.

  20. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1979-01-01

    The paper describes the computational techniques employed to determine optimal propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. The computer programs used to perform calculations for all the factors that enter into the selection of optimum combinations of airplanes and engines are examined. Attention is given to the description of the computer codes, including NNEP, WATE, LIFCYC, INSTAL, and POD DRG. A process is illustrated by which turbine engines can be evaluated as to fuel consumption, engine weight, cost, and installation effects. Examples are shown of the benefits of variable geometry and of the tradeoff between fuel burned and engine weight. Future plans for further improvements in the analytical modeling of engine systems are also described.

  1. A methodology to guide the selection of composite materials in a wind turbine rotor blade design process

    NASA Astrophysics Data System (ADS)

    Bortolotti, P.; Adolphs, G.; Bottasso, C. L.

    2016-09-01

    This work is concerned with the development of an optimization methodology for the composite materials used in wind turbine blades. The goal of the approach is to guide designers in the selection of the different materials of the blade, while providing indications to composite manufacturers on optimal trade-offs between mechanical properties and material costs. The method works by using a parametric material model and including its free parameters amongst the design variables of a multi-disciplinary wind turbine optimization procedure. The proposed method is tested on the structural redesign of a conceptual 10 MW wind turbine blade, with its spar caps and shell skin laminates subjected to optimization. The procedure identifies a blade optimum with a new spar cap laminate characterized by a higher longitudinal Young's modulus and a higher cost than the initial one, which nevertheless induces both cost and mass savings at the blade level. In terms of the shell skin, the adoption of a laminate with properties intermediate between a bi-axial and a tri-axial one also leads to slight structural improvements.

  2. A Computational approach in optimizing process parameters of GTAW for SA 106 Grade B steel pipes using Response surface methodology

    NASA Astrophysics Data System (ADS)

    Sumesh, A.; Sai Ramnadh, L. V.; Manish, P.; Harnath, V.; Lakshman, V.

    2016-09-01

    Welding is one of the most common metal joining techniques used in industry. In the global manufacturing scenario, products must be cost effective, so selecting the right process with optimal parameters helps industry minimize its cost of production. SA 106 Grade B steel has wide application in automobile chassis structures, boiler tubes, and pressure vessels. The process parameters for Gas Tungsten Arc Welding were optimized employing a central composite design. The input parameters chosen were weld current, peak current, and frequency; joint tensile strength was the response considered in this study. Analysis of variance was performed to determine the statistical significance of the parameters, and regression analysis was performed to determine the effect of the input parameters on the response. The maximum tensile strength obtained in the experiments was 95 kN, reported for a weld current of 95 Amp, a frequency of 50 Hz, and a peak current of 100 Amp. With the aim of maximizing joint strength, a target value of 100 kN was set in the response optimizer and the regression models were optimized; the output results are achievable with a weld current of 62.6148 Amp, a frequency of 23.1821 Hz, and a peak current of 65.9104 Amp. Using a dye penetrant test, the weld joints were also classified into two categories, good welds and welds with defects, which will also help in obtaining a defect-free joint when welding is performed using the GTAW process.

  3. Synthetic spider silk sustainability verification by techno-economic and life cycle analysis

    NASA Astrophysics Data System (ADS)

    Edlund, Alan

    Major ampullate spider silk represents a promising biomaterial with diverse commercial potential, ranging from textiles to medical devices, due to the excellent physical and thermal properties arising from the protein structure. Recent advancements in synthetic biology have facilitated the development of recombinant spider silk proteins from Escherichia coli (E. coli), alfalfa, and goats. This study specifically investigates the economic feasibility and environmental impact of synthetic spider silk manufacturing. Pilot scale data were used to validate an engineering process model that includes all of the required sub-processing steps for synthetic fiber manufacture: production, harvesting, purification, drying, and spinning. Modeling was constructed modularly to support assessment of alternative protein production methods (alfalfa and goats) as well as alternative down-stream processing technologies. The techno-economic analysis indicates minimum sale prices from pioneer and optimized E. coli plants of $761 kg-1 and $23 kg-1, with greenhouse gas emissions of 572 kg CO2-eq. kg-1 and 55 kg CO2-eq. kg-1, respectively. Spider silk sale price estimates from pioneer and optimized goat plants are $730 kg-1 and $54 kg-1, respectively, while those from pioneer and optimized alfalfa plants are $207 kg-1 and $9.22 kg-1, respectively. Elevated costs and emissions from the pioneer plant can be directly tied to the high material consumption and low protein yield. Decreased production costs associated with the optimized plants stem from improved protein yield, process optimization, and an Nth plant assumption. Discussion focuses on the commercial potential of spider silk, the production performance requirements for commercialization, and the impact of alternative technologies on the sustainability of the system.

  4. Management of unmanned moving sensors through human decision layers: a bi-level optimization process with calls to costly sub-processes

    NASA Astrophysics Data System (ADS)

    Dambreville, Frédéric

    2013-10-01

    While there is a variety of approaches and algorithms for optimizing the mission of an unmanned moving sensor, there are far fewer works dealing with the implementation of several sensors within a human organization. In this case, management of the sensors is done through at least one human decision layer, and sensor management as a whole arises as a bi-level optimization process. In this work, the following hypotheses are considered realistic: sensor handlers at the first level plan their sensors by means of elaborate algorithmic tools based on accurate modelling of the environment; the higher level plans the handled sensors according to a global observation mission, on the basis of an approximated model of the environment and of the first-level sub-processes. This problem is formalized very generally as the maximization of an unknown function, defined a priori by sampling a known random function (a law of model error). In such a case, each actual evaluation of the function increases the knowledge about the function, and subsequently the efficiency of the maximization. The issue is to optimize the sequence of values to be evaluated with regard to the evaluation costs. There is a fundamental link here with the domain of experiment design. Jones, Schonlau and Welch proposed a general method, Efficient Global Optimization (EGO), for solving this problem in the case of an additive functional Gaussian law. In our work, a generalization of EGO is proposed, based on a rare event simulation approach, and is applied to the aforementioned bi-level sensor planning problem.
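
    The expected-improvement criterion at the heart of EGO (Jones, Schonlau and Welch) is easy to state in code. The sketch below shows only the acquisition rule for a maximization problem; the posterior mean and standard deviation would come from a surrogate model fitted to the costly evaluations, and the numbers here are placeholders.

```python
# Minimal sketch of the expected-improvement (EI) rule used by EGO.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, f_best):
    """EI for maximization, given surrogate mean mu and sd per candidate."""
    sd = np.maximum(sd, 1e-12)            # guard against zero uncertainty
    z = (mu - f_best) / sd
    return (mu - f_best) * norm.cdf(z) + sd * norm.pdf(z)

mu = np.array([0.8, 1.1, 0.9])     # surrogate predictions (illustrative)
sd = np.array([0.05, 0.30, 0.60])  # surrogate uncertainties
print("evaluate candidate:", np.argmax(expected_improvement(mu, sd, 1.0)))
```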

  5. FT-NIR: A Tool for Process Monitoring and More.

    PubMed

    Martoccia, Domenico; Lutz, Holger; Cohen, Yvan; Jerphagnon, Thomas; Jenelten, Urban

    2018-03-30

    With ever-increasing pressure to optimize product quality, to reduce cost and to safely increase production output from existing assets, all combined with regular changes in terms of feedstock and operational targets, process monitoring with traditional instruments reaches its limits. One promising answer to these challenges is in-line, real time process analysis with spectroscopic instruments, and above all Fourier-Transform Near Infrared spectroscopy (FT-NIR). Its potential to afford decreased batch cycle times, higher yields, reduced rework and minimized batch variance is presented and application examples in the field of fine chemicals are given. We demonstrate that FT-NIR can be an efficient tool for improved process monitoring and optimization, effective process design and advanced process control.

  6. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when signal waveforms are known. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained based on symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is employed for the optimal symbol search. Simulations are carried out to show the higher positioning accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm takes full advantage of the signal features to improve positioning accuracy; on the other hand, the improved PSO algorithm improves the efficiency of the symbol search by nearly one hundred times while achieving a globally optimal solution.
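
    For reference, a standard global-best PSO loop is sketched below on a continuous toy cost surface; the paper's improved variant and its integer symbol search are not reproduced here, and all coefficients are conventional illustrative choices.

```python
# Minimal global-best PSO sketch: minimize a toy cost function.
import numpy as np

rng = np.random.default_rng(2)

def cost(x):                      # placeholder for the DPD cost surface
    return np.sum((x - 3.0) ** 2, axis=-1)

n, dim, iters = 30, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5         # inertia and acceleration coefficients
x = rng.uniform(-10, 10, (n, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), cost(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = cost(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print(gbest, cost(gbest))         # should approach (3, 3) with cost ~ 0
```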

  7. Planar junctionless phototransistor: A potential high-performance and low-cost device for optical-communications

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new junctionless optical controlled field effect transistor (JL-OCFET) and its comprehensive theoretical model are proposed to achieve high optical performance and a low cost fabrication process. An exhaustive study of the device characteristics and a comparison between the proposed junctionless design and the conventional inversion mode structure (IM-OCFET) for similar dimensions are performed. Our investigation reveals that the proposed design exhibits an outstanding capability to serve as an alternative to the IM-OCFET due to its high performance and the weak signal detection benefit it offers. Moreover, the developed analytical expressions are exploited to formulate the objective functions for optimizing the device performance using a Genetic Algorithms (GAs) approach. The optimized JL-OCFET not only demonstrates good performance in terms of derived drain current and responsivity, but also exhibits superior signal to noise ratio, low power consumption, high sensitivity, high ION/IOFF ratio, and high detectivity compared to the conventional IM-OCFET counterpart. These characteristics make the optimized JL-OCFET potentially suitable for developing low cost and ultrasensitive photodetectors for high-performance, low cost inter-chip data communication applications.

  8. Modeling the Downstream Processing of Monoclonal Antibodies Reveals Cost Advantages for Continuous Methods for a Broad Range of Manufacturing Scales.

    PubMed

    Hummel, Jonathan; Pagkaliwangan, Mark; Gjoka, Xhorxhi; Davidovits, Terence; Stock, Rick; Ransohoff, Thomas; Gantier, Rene; Schofield, Mark

    2018-01-17

    The biopharmaceutical industry is evolving in response to changing market conditions, including increasing competition and growing pressures to reduce costs. Single-use (SU) technologies and continuous bioprocessing have attracted attention as potential facilitators of cost-optimized manufacturing for monoclonal antibodies. While disposable bioprocessing has been adopted at many scales of manufacturing, continuous bioprocessing has yet to reach the same level of implementation. In this study, the cost of goods of Pall Life Science's integrated, continuous bioprocessing (ICB) platform is modeled, along with that of purification processes in stainless-steel and SU batch formats. All three models include costs associated with downstream processing only. Evaluation of the models across a broad range of clinical and commercial scenarios reveals that the cost savings gained by switching from stainless-steel to SU batch processing are often amplified by continuous operation. The continuous platform exhibits the lowest cost of goods across 78% of all scenarios modeled here, with the SU batch process having the lowest costs in the rest of the cases. The relative savings demonstrated by the continuous process are greatest at the highest feed titers and volumes. These findings indicate that existing and imminent continuous technologies and equipment can become key enablers for more cost effective manufacturing of biopharmaceuticals. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Impacts of supplyshed-level differences in productivity and land costs on the economics of hybrid poplar production in Minnesota, USA

    Treesearch

    William Lazarus; William L. Headlee; Ronald S. Zalesny

    2015-01-01

    The joint effects of poplar biomass productivity and land costs on poplar production economics were compared for 12 Minnesota counties and two genetic groups, using a process-based model (3-PG) to estimate productivity. The counties represent three levels of productivity and a range of land costs (annual rental rates) from $128/ha to $534/ha. An optimal rotation age...

  10. Synergetic motor control paradigm for optimizing energy efficiency of multijoint reaching via tacit learning

    PubMed Central

    Hayashibe, Mitsuhiro; Shimoda, Shingo

    2014-01-01

    A human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which poses an ill-posed problem. The multijoint system involves complex interaction torques between joints. To produce optimal motion in terms of energy consumption, cost function based optimization has commonly been used in previous works. Even if it is a fact that an optimal motor pattern is employed phenomenologically, there is no evidence of a physiological process in our central nervous system that resembles such a mathematical optimization. In this study, we aim to find a more primitive computational mechanism with a modular configuration that realizes adaptability and optimality without prior knowledge of the system dynamics. We propose a novel motor control paradigm based on tacit learning with task space feedback. The accumulation of motor commands during repetitive environmental interactions plays a major role in the learning process. The paradigm is applied to a vertical cyclic reaching task which involves complex interaction torques. We evaluated whether the proposed paradigm can learn optimized solutions with a 3-joint, planar biomechanical model. The results demonstrate that the proposed method was valid for acquiring motor synergy and resulted in energy efficient solutions for different load conditions. The feedback-control-only case is largely affected by the interaction torques; in contrast, with tacit learning the trajectory is corrected over time toward optimal solutions. Energy efficient solutions were obtained through the emergence of motor synergy. During learning, the contribution from the feedforward controller is augmented while that from the feedback controller is significantly reduced, down to 12% for no load at hand and 16% for a 0.5 kg load condition. The proposed paradigm could provide an optimization process in redundant systems with a dynamic-model-free and cost-function-free approach. PMID:24616695
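
    The core mechanism, feedback commands accumulating into a feedforward term over repeated trials, can be illustrated with a one-dimensional toy (our sketch, not the authors' 3-joint model); the gains, load, and accumulation rate are arbitrary choices.

```python
# Minimal tacit-learning sketch: a PD-controlled unit mass reaches a
# target against a constant load; feedback is accumulated into a
# feedforward command, so the feedback contribution shrinks over trials.
import numpy as np

dt, steps, trials = 0.01, 300, 30
kp, kd, load = 50.0, 10.0, 5.0
target = 1.0
u_ff = np.zeros(steps)            # accumulated (tacit) command

for trial in range(trials):
    x = v = 0.0
    fb_effort = 0.0
    for t in range(steps):
        u_fb = kp * (target - x) - kd * v
        u = u_fb + u_ff[t]
        u_ff[t] += 0.1 * u_fb     # tacit accumulation of feedback
        v += (u - load) * dt      # unit mass under constant load
        x += v * dt
        fb_effort += abs(u_fb) * dt
    if trial % 10 == 0:
        print(f"trial {trial}: final x = {x:.3f}, feedback effort = {fb_effort:.2f}")
```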

  12. Laboratory cost control and financial management software.

    PubMed

    Mayer, M

    1998-02-09

    Economic constraints within the health care system call for tighter cost control in clinical laboratories. Detailed cost information forms the basis for cost control and financial management. Based on the cost information, proper decisions regarding priorities, procedure choices, personnel policies, and investments can be made. This presentation outlines some principles of cost analysis, describes common limitations of cost analysis, and exemplifies the use of software to achieve optimized cost control. One commercially available cost analysis software package, LabCost, is described in some detail. In addition to providing cost information, LabCost also serves as a general management tool for resource handling, accounting, inventory management, and billing. The application of LabCost in the selection process for a new high throughput analyzer for a large clinical chemistry service is taken as an example of decisions that can be assisted by cost evaluation. It is concluded that laboratory management that wisely utilizes cost analysis to support the decision-making process will undoubtedly have a clear advantage over those laboratories that fail to employ cost considerations to guide their actions.

  13. Multi Response Optimization of Laser Micro Marking Process: A Grey-Fuzzy Approach

    NASA Astrophysics Data System (ADS)

    Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.

    2017-07-01

    The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. An optimal parametric combination provides better machining, which improves productivity and product quality and subsequently reduces production cost and time. The paper presents a hybrid approach of grey relational analysis and fuzzy logic to obtain the optimal parametric combination for better laser beam micro marking on Gallium Nitride (GaN) work material. Response surface methodology has been implemented for the design of experiments, considering three parameters at five levels each. The parameters considered were current, frequency, and scanning speed, with mark width, mark depth, and mark intensity as the process responses.
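
    The grey relational step the authors combine with fuzzy logic can be sketched generically: normalize each response with the right orientation, compute grey relational coefficients against the ideal sequence, and average them into a grade. The data and orientations below are invented for illustration.

```python
# Minimal grey relational analysis (GRA) sketch over three responses.
import numpy as np

# rows = experiments, cols = responses (e.g. mark width, depth, intensity)
y = np.array([[0.42, 0.31, 0.88],
              [0.55, 0.28, 0.75],
              [0.38, 0.45, 0.92]])
larger_is_better = np.array([False, True, True])   # assumed orientations

ymin, ymax = y.min(axis=0), y.max(axis=0)
z = np.where(larger_is_better,
             (y - ymin) / (ymax - ymin),           # larger-the-better
             (ymax - y) / (ymax - ymin))           # smaller-the-better

delta = np.abs(1.0 - z)                            # distance to ideal
zeta = 0.5                                         # distinguishing coeff.
gcoef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = gcoef.mean(axis=1)                         # equal response weights
print("grades:", grade, "-> best experiment:", np.argmax(grade))
```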

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitney, S.E.

    Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle, from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.

  15. On process optimization considering LCA methodology.

    PubMed

    Pieragostini, Carla; Mussati, Miguel C; Aguirre, Pío

    2012-04-15

    The goal of this work is to survey the state-of-the-art in process optimization techniques and tools based on LCA, focused on the process engineering field. A collection of methods, approaches, applications, specific software packages, and insights regarding experiences and progress made in applying the LCA methodology coupled to optimization frameworks is provided, and general trends are identified. The "cradle-to-gate" concept for defining the system boundaries is the most used approach in practice, rather than the "cradle-to-grave" approach. Normally, the relationship between inventory data and impact category indicators is expressed linearly through the characterization factors; synergic effects of the contaminants are thus neglected. Among the LCIA methods, the eco-indicator 99, which is based on endpoint categories and the panel method, is the most used in practice. A single environmental impact function, resulting from the aggregation of environmental impacts, is formulated as the environmental objective in most of the analyzed cases. SimaPro is the most used software for LCA applications in the literature analyzed. Multi-objective optimization is the most used approach for dealing with this kind of problem, and the ε-constraint method is the most applied technique for generating the Pareto set. However, a renewed interest in formulating a single economic objective function in optimization frameworks can be observed, favored by the development of life cycle cost software and progress made in assessing the costs of environmental externalities. Finally, a trend toward dealing with multi-period scenarios in integrated LCA-optimization frameworks can be distinguished, providing more accurate results where data are available. Copyright © 2011 Elsevier Ltd. All rights reserved.
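
    The ε-constraint technique singled out above is simple to demonstrate: minimize one objective while capping the other at a sweep of ε values, collecting one Pareto point per cap. The two quadratic objectives below are invented stand-ins for a cost and an aggregated environmental impact function.

```python
# Minimal epsilon-constraint sketch: trace a two-objective Pareto front.
import numpy as np
from scipy.optimize import minimize

def cost(x):        # illustrative economic objective
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def impact(x):      # illustrative environmental objective
    return x[0] ** 2 + (x[1] - 3) ** 2

for eps in np.linspace(2.0, 12.0, 6):
    res = minimize(cost, x0=[0.0, 0.0],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - impact(x)}])
    print(f"impact <= {eps:5.1f}  ->  cost {res.fun:6.3f}  at x = {res.x}")
```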

  16. Multidisciplinary Design Optimization for Glass-Fiber Epoxy-Matrix Composite 5 MW Horizontal-Axis Wind-Turbine Blades

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Pandurangan, B.; Sellappan, V.; Vallejo, A.; Ozen, M.

    2010-11-01

    A multi-disciplinary design-optimization procedure has been introduced and used for the development of cost-effective glass-fiber reinforced epoxy-matrix composite 5 MW horizontal-axis wind-turbine (HAWT) blades. The turbine-blade cost-effectiveness has been defined using the cost of energy (CoE), i.e., the ratio of the three-blade HAWT rotor development/fabrication cost to the associated annual energy production. To assess the annual energy production as a function of the blade design and operating conditions, an aerodynamics-based computational analysis was employed. The turbine blade cost is assessed for a given aerodynamic design by separately computing the blade mass and the associated blade-mass/size-dependent production cost. For each aerodynamic design analyzed, a structural finite-element analysis and a post-processing life-cycle assessment analysis were employed to determine the minimal blade mass that satisfies the functional requirements pertaining to the quasi-static strength of the blade, fatigue-controlled blade durability, and blade stiffness. To determine the turbine-blade production cost (for the currently prevailing fabrication process, wet lay-up), available data regarding industry manufacturing experience were combined with the attendant blade mass, surface area, and the duration of the assumed production run. The work clearly revealed the challenges associated with simultaneously satisfying the strength, durability, and stiffness requirements while maintaining a high level of wind-energy capture efficiency and a low production cost.

  17. Metal matrix composite fabrication processes for high performance aerospace structures

    NASA Astrophysics Data System (ADS)

    Ponzi, C.

    A survey is conducted of extant methods of metal matrix composite (MMC) production in order to serve as a basis for prospective MMC users' selection of a matrix/reinforcement combination, cost-effective primary fabrication methods, and secondary fabrication techniques for the achievement of desired performance levels. Attention is given to the illustrative cases of structural fittings, control-surface connecting rods, hypersonic aircraft air inlet ramps, helicopter swash plates, and turbine rotor disks. Methods for technical and cost analysis modeling useful in process optimization are noted.

  18. Efficient Research Design: Using Value-of-Information Analysis to Estimate the Optimal Mix of Top-down and Bottom-up Costing Approaches in an Economic Evaluation alongside a Clinical Trial.

    PubMed

    Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee

    2016-04-01

    In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset. © The Author(s) 2016.

  19. Up-cycling waste glass to minimal water adsorption/absorption lightweight aggregate by rapid low temperature sintering: optimization by dual process-mixture response surface methodology.

    PubMed

    Velis, Costas A; Franco-Salinas, Claudia; O'Sullivan, Catherine; Najorka, Jens; Boccaccini, Aldo R; Cheeseman, Christopher R

    2014-07-01

    Mixed color waste glass extracted from municipal solid waste is either not recycled, in which case it is an environmental and financial liability, or it is used in relatively low value applications such as normal weight aggregate. Here, we report on converting it into a novel glass-ceramic lightweight aggregate (LWA), potentially suitable for high added value applications in structural concrete (upcycling). The artificial LWA particles were formed by rapidly sintering (<10 min) waste glass powder with clay mixes, using sodium silicate as binder and borate salt as flux. Composition and processing were optimized using response surface methodology (RSM) modeling, specifically (i) a combined process-mixture dual RSM, and (ii) multiobjective optimization functions. The optimization considered raw materials and energy costs. Mineralogical and physical transformations occur during sintering and a cellular vesicular glass-ceramic composite microstructure is formed, with strong correlations existing between bloating/shrinkage during sintering, density, and water adsorption/absorption. The diametrical expansion could be effectively modeled via the RSM and controlled to meet a wide range of specifications; here we optimized for LWA structural concrete. The optimally designed LWA is sintered at comparatively low temperatures (825-835 °C), thus potentially saving costs and lowering emissions; it had exceptionally low water adsorption/absorption (6.1-7.2% w/wd; optimization target: 1.5-7.5% w/wd) while remaining substantially lightweight (density: 1.24-1.28 g cm-3; target: 0.9-1.3 g cm-3). This is a considerable advancement for designing effective environmentally friendly lightweight concrete constructions, and for boosting the resource efficiency of waste glass flows.

  20. Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes

    NASA Astrophysics Data System (ADS)

    Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.

    As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the aspects that affect the cost of enzymes based upon enzyme activity. The basis for this study stems from a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that could be used to aid both industrial and lab scale chemical processing. The generalized cost function shows several nonobvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scale-up of enzyme processing could be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost analysis function: the low activity pre-linear region, the moderate activity linear region, and the high activity power-law region. The overall form of the cost analysis function appears to robustly fit the power law form.
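
    A cost curve with the three regimes named above can be written as a piecewise function that is continuous at the regime boundaries; the coefficients below are illustrative, not the paper's fitted values.

```python
# Minimal sketch of a three-regime cost-vs-activity function.
import numpy as np

def enzyme_cost(a, c0=5.0, m=2.0, t1=1.0, t2=10.0, alpha=1.8):
    """Illustrative cost vs. enzyme activity a (arbitrary units)."""
    a = np.asarray(a, dtype=float)
    c_t2 = c0 + m * (t2 - t1)              # cost at the second breakpoint
    return np.piecewise(
        a,
        [a < t1, (a >= t1) & (a < t2), a >= t2],
        [lambda a: c0 * np.sqrt(a / t1),       # pre-linear (concave) regime
         lambda a: c0 + m * (a - t1),          # linear mid-range
         lambda a: c_t2 * (a / t2) ** alpha])  # power-law at high activity

print(enzyme_cost([0.5, 5.0, 50.0]))
```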

  1. A random optimization approach for inherent optic properties of nearshore waters

    NASA Astrophysics Data System (ADS)

    Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng

    2016-10-01

    Traditional water quality sampling is time-consuming and costly, and cannot meet the needs of social development. Hyperspectral remote sensing offers good temporal resolution, spatial coverage, and rich spectral information, and thus has good potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model of the water column is established to analyze the features of the water. Using the stochastic optimization algorithm Threshold Accepting, a global optimum of the unknown model parameters can be determined to obtain the distributions of chlorophyll, dissolved organic matter, and suspended particles in the water. Improving the search step of the optimization algorithm markedly reduces processing time and creates room for increasing the number of parameters. With a revised definition of the optimization steps and acceptance criterion, the whole inversion process becomes more targeted, improving the accuracy of the inversion. Based on application results for simulated data provided by IOCCG and field data provided by NASA, the approach was continuously improved and refined. The outcome is a low-cost, effective model for retrieving water quality from hyperspectral remote sensing.
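
    Threshold Accepting itself is a small algorithm: like simulated annealing, but a candidate move is accepted deterministically whenever it worsens the objective by less than a shrinking threshold. The sketch below applies it to a toy misfit function standing in for the inversion residual; the parameter names are invented for illustration.

```python
# Minimal Threshold Accepting sketch for a 3-parameter inversion.
import numpy as np

rng = np.random.default_rng(3)

def misfit(x):                    # placeholder for the spectral misfit
    return np.sum((x - 0.7) ** 2)

x = rng.uniform(0, 1, 3)          # e.g. chlorophyll, CDOM, sediment params
fx = misfit(x)
for threshold in np.linspace(0.5, 0.0, 200):   # deterministic schedule
    cand = np.clip(x + rng.normal(0, 0.05, x.shape), 0, 1)
    fc = misfit(cand)
    if fc - fx < threshold:       # accept small deteriorations as well
        x, fx = cand, fc
print(x, fx)
```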

  2. Excimer laser annealing to fabricate low cost solar cells

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The objective is to show whether or not pulsed excimer laser annealing (PELA) of ion-implanted junctions is a cost effective replacement for diffused junctions in fabricating crystalline silicon solar cells. The preliminary economic analysis completed shows that using PELA to fabricate both the front junction and the back surface field (BSF) would cost approximately 35 cents per peak watt (Wp), compared to 15 cents/Wp for diffusion, aluminum BSF, and an extra cleaning step in the baseline process. The cost advantage of the PELA process therefore depends on improving the average cell efficiency from 14% to 16%, which would lower the overall cost of the module by about 15 cents/Wp. An optimized PELA process compatible with commercial production is to be developed, and increased cell efficiency is to be demonstrated with sufficient product for adequate statistical analysis. An excimer laser annealing station was set up and made operational. The first experiment used 248 nm radiation to anneal phosphorus implants in polished and texture-etched silicon.

  3. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework for finding an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load, and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); and (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method starting from the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
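
    The AHP step in module (i) reduces to extracting a priority vector from a pairwise comparison matrix; the sketch below does this for the two indicators with an invented judgment ("depth three times as important as duration") and then scores two hypothetical nodes.

```python
# Minimal AHP sketch: priority weights from a pairwise comparison matrix.
import numpy as np

A = np.array([[1.0,   3.0],    # depth judged 3x as important as duration
              [1/3.0, 1.0]])
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                 # normalized priority weights
print("weights (depth, duration):", w)

# Rank flooding nodes by the weighted score of their two indicators
nodes = np.array([[0.9, 0.4],   # node 1: (depth, duration), normalized
                  [0.5, 0.8]])  # node 2
print("node scores:", nodes @ w)
```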

  4. Continuous processing and the applications of online tools in pharmaceutical product manufacture: developments and examples.

    PubMed

    Ooi, Shing Ming; Sarkar, Srimanta; van Varenbergh, Griet; Schoeters, Kris; Heng, Paul Wan Sia

    2013-04-01

    Continuous processing and production in pharmaceutical manufacturing has received increased attention in recent years mainly due to the industries' pressing needs for more efficient, cost-effective processes and production, as well as regulatory facilitation. To achieve optimum product quality, the traditional trial-and-error method for the optimization of different process and formulation parameters is expensive and time consuming. Real-time evaluation and the control of product quality using an online process analyzer in continuous processing can provide high-quality production with very high-throughput at low unit cost. This review focuses on continuous processing and the application of different real-time monitoring tools used in the pharmaceutical industry for continuous processing from powder to tablets.

  5. Strategie de commande optimale de la production electrique dans un site isole [Optimal control strategy for electricity production at an isolated site]

    NASA Astrophysics Data System (ADS)

    Barris, Nicolas

    Hydro-Quebec manages more than 20 isolated power grids all over the province. The grids are located in small villages where the electricity demand is rather small. Those villages being far away from each other and from the main electricity production facilities, energy is produced locally using diesel generators. Electricity production costs at the isolated power grids are very important due to elevated diesel prices and transportation costs. However, the price of electricity is the same for the entire province, with no regards to the production costs of the electricity consumed. These two factors combined result in yearly exploitation losses for Hydro-Quebec. For any given village, several diesel generators are required to satisfy the demand. When the load increases, it becomes necessary to increase the capacity either by adding a generator to the production or by switching to a more powerful generator. The same thing happens when the load decreases. Every decision regarding changes in the production is included in the control strategy, which is based on predetermined parameters. These parameters were specified according to empirical studies and the knowledge base of the engineers managing the isolated power grids, but without any optimisation approach. The objective of the presented work is to minimize the diesel consumption by optimizing the parameters included in the control strategy. Its impact would be to limit the exploitation losses generated by the isolated power grids and the CO2 equivalent emissions without adding new equipment or completely changing the nature of the strategy. To satisfy this objective, the isolated power grid simulator OPERA is used along with the optimization library NOMAD and the data of three villages in northern Quebec. The preliminary optimization instance for the first village showed that some modifications to the existing control strategy must be done to better achieve the minimization objective. The main optimization processes consist of three different optimization approaches: the optimization of one set of parameters for all the villages, the optimization of one set of parameters per village, and the optimization of one set of parameters per diesel generator configuration per village. In the first scenario, the optimization of one set of parameters for all the villages leads to compromises for all three villages without allowing a full potential reduction for any village. Therefore, it is proven that applying one set of parameters to all the villages is not suitable for finding an optimal solution. In the second scenario, the optimization of one set of parameters per village allows an improvement over the previous results. At this point, it is shown that it is crucial to remove from the production the less efficient configurations when they are next to more efficient configurations. In the third scenario, the optimization of one set of parameters per configuration per village requires a very large number of function evaluations but does not result in any satisfying solution. In order to improve the performance of the optimization, it has been decided that the problem structure would be used. Two different approaches are considered: optimizing one set of parameters at a time and optimizing different rules included in the control strategy one at a time. In both cases, results are similar but calculation costs differ, the second method being much more cost efficient. 
The optimal values of the ultimate rules parameters can be directly linked to the efficient transition points that favor an efficient operation of the isolated power grids. Indeed, these transition points are defined in such a way that the high efficiency zone of every configuration is used. Therefore, it seems possible to directly identify on the graphs these optimal transition points and define the parameters in the control strategy without even having to run any optimization process. The diesel consumption reduction for all three villages is about 1.9%. Considering elevated diesel costs and the existence of about 20 other isolated power grids, the use of the developed methods together with a calibration of OPERA would allow a substantial reduction of Hydro-Quebec's annual deficit. Also, since one of the developed methods is very cost effective and produces equivalent results, it could be possible to use it during other processes; for example, when buying new equipment for the grid it could be possible to assess its full potential, under an optimized control strategy, and improve the net present value.

  6. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing the modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, distinct from previous modeling efforts which focused on addressing uncertainty in physical parameters (e.g., soil porosity); this work aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (where only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. Copyright © 2009 Elsevier B.V. All rights reserved.

  7. A Dynamic Programming Model for Optimizing Frequency of Time-Lapse Seismic Monitoring in Geological CO2 Storage

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, D.; Mukerji, T.; Mascarenhas, O.; Weyant, J.

    2005-12-01

    Designing a cost-effective and reliable monitoring program is crucial to the success of any geological CO2 storage project. Effective design entails determining both the optimal measurement modality and the frequency of monitoring the site. Time-lapse seismic provides the best spatial coverage and resolution for reservoir monitoring, and initial results from Sleipner (Norway) have demonstrated effective monitoring of CO2 plume movement. However, time-lapse seismic is an expensive monitoring technique, especially over the long-term life of a storage project, and should be used judiciously. We present a mathematical model based on dynamic programming that can be used to estimate the site-specific optimal frequency of time-lapse surveys. The dynamics of the CO2 sequestration process are simplified and modeled as a four-state Markov process with transition probabilities. The states are M: injected CO2 safely migrating within the target zone; L: leakage from the target zone to the adjacent geosphere; R: safe migration after recovery from the leakage state; and S: seepage from the geosphere to the biosphere. The states are observed only when a monitoring survey is performed. We assume that the system may go to state S only from state L, and that once observed to be in state L, remedial measures are always taken to bring it back to state R. Remediation benefits are captured by calculating the expected penalty if CO2 seeps into the biosphere. There is a trade-off between the conflicting objectives of minimizing the discounted costs of performing the next time-lapse survey and minimizing the risk of seepage and its associated costly consequences. A survey performed earlier would spot leakage earlier, so remedial measures would be taken earlier, resulting in savings in costs attributed to excessive seepage; on the other hand, there are also costs for the survey and the remedial measures. The problem is solved numerically using Bellman's optimality principle of dynamic programming over the entire finite time horizon. We use a Monte Carlo approach to explore trade-offs between survey costs, remediation costs, and survey frequency, and to analyze the sensitivity to leakage probabilities and carbon tax. The model can be useful in determining a monitoring regime appropriate to a specific site's risk and set of remediation options, rather than a generic one based on a maximum downside risk threshold for CO2 storage as a whole. This may have implications for the overall costs associated with deploying carbon capture and storage on a large scale.
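
    The survey-frequency trade-off can be explored with a few lines of Monte Carlo over the four-state chain; the transition probabilities, costs, and the treatment of seepage as an ongoing penalty below are all illustrative assumptions, not the authors' calibrated values.

```python
# Minimal sketch: expected discounted cost vs. fixed survey interval
# for the M/L/R/S leakage chain.
import numpy as np

rng = np.random.default_rng(4)
M, L, R, S = 0, 1, 2, 3
P = np.array([[0.97, 0.03, 0.00, 0.00],   # M: safe migration
              [0.00, 0.93, 0.00, 0.07],   # L: leaking, may seep
              [0.00, 0.02, 0.98, 0.00],   # R: recovered
              [0.00, 0.00, 0.00, 1.00]])  # S: seepage (absorbing)
survey_cost, remediation_cost, seepage_penalty = 1.0, 20.0, 500.0
discount, horizon = 0.95, 50

def expected_cost(interval, n_runs=2000):
    total = 0.0
    for _ in range(n_runs):
        state, cost = M, 0.0
        for t in range(horizon):
            d = discount ** t
            if state == S:
                cost += d * seepage_penalty      # ongoing consequence
                continue
            if t % interval == 0:                # survey year
                cost += d * survey_cost
                if state == L:                   # leak found -> remediate
                    cost += d * remediation_cost
                    state = R
            state = rng.choice(4, p=P[state])
        total += cost
    return total / n_runs

for interval in (1, 2, 5, 10):
    print(f"survey every {interval:2d} yr: expected cost {expected_cost(interval):7.1f}")
```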

  8. High-productivity DRIE solutions for 3D-SiP and MEMS volume manufacturing

    NASA Astrophysics Data System (ADS)

    Puech, M.; Thevenoud, J. M.; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, J. M.

    2006-12-01

    Emerging 3D-SiP technologies and high volume MEMS applications require high productivity mass production DRIE systems. The Alcatel DRIE product range has recently been optimized to reach the highest process and hardware production performance. A study based on the sub-micron high aspect ratio structures encountered in the most stringent 3D-SiP applications has been carried out. The optimization of the Bosch process parameters has shown ultra-high silicon etch rates with unrivaled uniformity and repeatability, leading to excellent process yields. In parallel, the most recent hardware and proprietary design optimizations, including vacuum pumping lines, process chamber, wafer chucks, pressure control system, and gas delivery, are discussed. A key factor in achieving the highest performance was Alcatel's recognized expertise in vacuum and plasma science technologies. These improvements have been monitored in a mass production environment for a mobile phone application. Field data analysis shows a significant reduction in cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high volume MEMS applications. Typical etched patterns include tapered trenches for CMOS imagers, through-silicon via holes for die stacking, well controlled profile angles for 3D high precision inertial sensors, and large exposed area features for inkjet printer heads and silicon microphones.

  9. Process control systems: integrated for future process technologies

    NASA Astrophysics Data System (ADS)

    Botros, Youssry; Hajj, Hazem M.

    2003-06-01

    Process Control Systems (PCS) are becoming ever more crucial to the success of integrated circuit makers due to their direct impact on product quality, cost, and fab output. The primary objective of PCS is to minimize variability by detecting and correcting non-optimal performance. Current PCS implementations are considered disparate: each PCS application is designed, deployed, and supported separately, and each implementation targets a specific area of control such as equipment performance, wafer manufacturing, or process health monitoring. With Intel entering the nanometer technology era, tighter process specifications are required for higher yields and lower cost. This requires the areas of control to be tightly coupled and integrated to achieve optimal performance, which can be accomplished via consistent design and deployment of an integrated PCS. PCS integration will bring several benefits, such as leveraging commonalities, avoiding redundancy, and facilitating sharing between implementations. This paper addresses PCS implementations and focuses on the benefits and requirements of the integrated PCS. The Intel integrated PCS architecture is then presented and its components briefly discussed. Finally, industry direction and efforts to standardize PCS interfaces that enable PCS integration are presented.

  10. Dopaminergic Balance between Reward Maximization and Policy Complexity

    PubMed Central

    Parush, Naama; Tishby, Naftali; Bergman, Hagai

    2011-01-01

    Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (the actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamic-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing policy complexity (cost) and maximizing expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, actions are selected according to their estimated values and the pseudo-temperature, but in addition their probabilities vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward; rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in modulating the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the tradeoff between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems. PMID:21603228
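
    The experience-modulated softmax the model arrives at can be sketched with a toy parameterization (ours, not the paper's equations): action preferences combine estimated value scaled by a dopamine-like pseudo-temperature with a bonus for previously frequent choices.

```python
# Minimal sketch of an experience-modulated softmax policy.
import numpy as np

rng = np.random.default_rng(5)

def policy(values, counts, temperature, freq_weight=0.5):
    pref = values / temperature + freq_weight * np.log(counts + 1)
    pref -= pref.max()                 # numerical stability
    p = np.exp(pref)
    return p / p.sum()

values = np.array([1.0, 0.8, 0.2])     # estimated action values
counts = np.zeros(3)                   # choice history
for step in range(100):
    p = policy(values, counts, temperature=0.5)
    a = rng.choice(3, p=p)
    counts[a] += 1
print("choice frequencies:", counts / counts.sum())
```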

  11. Increasing energy efficiency level of building production based on applying modern mechanization facilities

    NASA Astrophysics Data System (ADS)

    Prokhorov, Sergey

    2017-10-01

    The building industry is currently going through hard times. The cost of operating machines and mechanisms in construction and installation works accounts for a substantial share of total building construction expenses. There is a need for a highly efficient method that not only increases production but also reduces direct costs of machine fleet operation and increases its energy efficiency. To achieve this goal, we plan to use modern methods of work production, high-tech and energy saving machine tools and technologies, and optimal mechanization sets. The optimization criteria are operating prime cost and set efficiency. In solving the actual task, we conclude that analyzing mechanization works and energy audits alongside production data, prime costs, and energy resource costs makes it possible to plan a complete machine fleet supply, improve the ecological level, and increase construction and installation work quality.

  12. Decision support for operations and maintenance (DSOM) system

    DOEpatents

    Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA

    2006-03-21

    A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring the parameters necessary to characterize the actual operating condition of the process, validating the data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, drawing from a set of pre-defined algorithms created using best engineering practices an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.

  13. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  14. Optimized treatment conditions for textile wastewater reuse using photocatalytic processes under UV and visible light sources.

    PubMed

    Starling, Maria Clara V M; Castro, Luiz Augusto S; Marcelino, Rafaela B P; Leão, Mônica M D; Amorim, Camila C

    2017-03-01

    In this study, photo-Fenton systems using visible light sources with iron and ferrioxalate were tested for DOC degradation and decolorization of textile wastewater. The textile wastewater originated after the dyeing stage of dark-colored fabric in the textile industry, and the optimization of treatment processes was studied to produce water suitable for reuse. Dissolved organic carbon, absorbance, turbidity, anion concentrations, carboxylic acids, and a preliminary cost analysis were evaluated for the proposed treatments. The conventional photo-Fenton process achieved nearly 99 % DOC degradation and complete absorbance removal, with no carboxylic acids found as degradation products. The ferrioxalate photo-Fenton system achieved 82 % DOC degradation and complete absorbance removal, with oxalic acid detected by HPLC analysis in the treated sample. In contrast, photo-peroxidation with UV light proved effective only for absorbance removal, with DOC degradation efficiency near 50 %. The treated wastewater was of similar quality to reclaimed water, indicating that these processes can be effectively applied for textile wastewater reuse. The preliminary cost analysis indicated costs of 0.91 and 1.07 US$ m⁻³ for the conventional and ferrioxalate photo-Fenton systems, respectively.

  15. Cryogenic Treatment of Al-Li Alloys for Improved Weldability, Repairability, and Reduction of Residual Stresses

    NASA Technical Reports Server (NTRS)

    Malone, Tina W.; Graham, Benny F.; Gentz, Steven J. (Technical Monitor)

    2001-01-01

    Service performance has shown that cryogenic treatment of some metals provides improved strength, fatigue life, and wear resistance in the processed material. Effects such as these were initially discovered by NASA engineers while evaluating spacecraft that had returned from the cold vacuum of space. Factors such as high cost, poor repairability, and poor machinability currently prohibit wide use of some aerospace aluminum alloys. Application of a cryogenic treatment process to these alloys is expected to provide improvements in weldability and weld properties, coupled with a reduction in repairs, resulting in a significant reduction in the manufacturing and life-cycle costs of aerospace hardware. The primary purpose of this effort was to evaluate the effects of deep cryogenic treatment on some aluminum alloy plate products, welds, and weld repairs, and to optimize a process for the treatment of these materials. The optimized process is being evaluated for improvements in the properties of plate and welds, improvements in the weldability and repairability of treated materials, and as an alternative technique for reducing residual stresses in repaired welds. This paper presents the results of the testing and evaluation conducted in this effort, including assessments of changes in strength, toughness, stress-corrosion susceptibility, weldability, repairability, and residual stresses in repaired welds.

  16. Do Vascular Networks Branch Optimally or Randomly across Spatial Scales?

    PubMed Central

    Newberry, Mitchell G.; Savage, Van M.

    2016-01-01

    Modern models that derive allometric relationships between metabolic rate and body mass are based on the architectural design of the cardiovascular system and presume sibling vessels are symmetric in terms of radius, length, flow rate, and pressure. Here, we study the cardiovascular structure of the human head and torso and of a mouse lung based on three-dimensional images processed via our software Angicart. In contrast to modern allometric theories, we find systematic patterns of asymmetry in vascular branching, potentially explaining previously documented mismatches between predictions (power-law or concave curvature) and observed empirical data (convex curvature) for the allometric scaling of metabolic rate. To examine why these systematic asymmetries in vascular branching might arise, we construct a mathematical framework to derive predictions based on local, junction-level optimality principles that have been proposed to be favored in the course of natural selection and development. The two most commonly used principles are material-cost optimizations (construction materials or blood volume) and optimization of efficient flow via minimization of power loss. We show that material-cost optimization solutions match with distributions for asymmetric branching across the whole network but do not match well for individual junctions. Consequently, we also explore random branching that is constrained at scales that range from local (junction-level) to global (whole network). We find that material-cost optimizations are the strongest predictor of vascular branching in the human head and torso, whereas locally or intermediately constrained random branching is comparable to material-cost optimizations for the mouse lung. These differences could be attributable to developmentally-programmed local branching for larger vessels and constrained random branching for smaller vessels. PMID:27902691

  17. USING WASTE TO CLEAN UP THE ENVIRONMENT: CELLULOSIC ETHANOL, THE FUTURE OF FUELS

    EPA Science Inventory

    In the process of converting municipal solid waste (MSW) into ethanol we optimized the first two major steps of pretreatment and enzymatic hydrolysis stages to enhance the sugar yield and to reduce the cost. For the pretreatment process, we tested different parameters of react...

  18. Defect design of insulation systems for photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Mon, G. R.

    1981-01-01

    A defect-design approach to sizing electrical insulation systems for terrestrial photovoltaic modules is presented. It consists of gathering voltage-breakdown statistics on various thicknesses of candidate insulation films where, for a designated voltage, module failure probabilities for enumerated thickness and number-of-layer film combinations are calculated. Cost analysis then selects the most economical insulation system. A manufacturing yield problem is solved to exemplify the technique. Results for unaged Mylar suggest using fewer layers of thicker films. Defect design incorporates effects of flaws in optimal insulation system selection, and obviates choosing a tolerable failure rate, since the optimization process accomplishes that. Exposure to weathering and voltage stress reduces the voltage-withstanding capability of module insulation films. Defect design, applied to aged polyester films, promises to yield reliable, cost-optimal insulation systems.
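
    The selection step lends itself to a small numerical sketch: compute a module failure probability for each thickness/layer combination, add the material cost, and pick the cheapest design. The breakdown model and all costs below are invented for illustration; they are not the paper's measured Mylar statistics.

```python
import math

# Toy breakdown model: thicker film -> lower per-layer breakdown probability.
def layer_breakdown_prob(thickness_um: float, voltage: float) -> float:
    return 1.0 - math.exp(-(voltage / (40.0 * thickness_um)) ** 2)

def module_failure_prob(thickness_um: float, n_layers: int, voltage: float) -> float:
    # The module fails only where flaws line up through every layer, so the
    # layers are treated (approximately) as independent barriers.
    return layer_breakdown_prob(thickness_um, voltage) ** n_layers

FILM_COST_PER_UM_LAYER = 0.02   # $ per micron of film per layer (assumed)
FAILURE_COST = 250.0            # $ lost per failed module (assumed)

def expected_cost(thickness_um: float, n_layers: int, voltage: float = 1500.0) -> float:
    material = FILM_COST_PER_UM_LAYER * thickness_um * n_layers
    return material + FAILURE_COST * module_failure_prob(thickness_um, n_layers, voltage)

candidates = [(t, n) for t in (25, 50, 75, 125) for n in (1, 2, 3)]
best = min(candidates, key=lambda c: expected_cost(*c))
print("most economical (thickness_um, layers):", best,
      "| expected cost: %.2f" % expected_cost(*best))
```

    Note that no tolerable failure rate has to be chosen in advance: the failure probability enters the expected cost directly, and the minimization settles the trade-off, which is the point the record makes.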

  19. Foregone benefits of important food crop improvements in Sub-Saharan Africa

    PubMed Central

    2017-01-01

    A number of new crops have been developed that address important traits of particular relevance for smallholder farmers in Africa. Scientists, policy makers, and other stakeholders have raised concerns that the approval process for these new crops causes delays that are often scientifically unjustified. This article develops a real option model for the optimal regulation of a risky technology that enhances economic welfare and reduces malnutrition. We consider gradual adoption of the technology and show that delaying approval reduces uncertainty about perceived risks of the technology. The optimal conditions for approval incorporate parameters of the stochastic processes governing the dynamics of risk. The model is applied to three cases of improved crops which either are, or are expected to be, delayed by the regulatory process. The benefits and costs of the crops are presented in a partial equilibrium that considers changes in adoption over time and the foregone benefits caused by a delay in approval under irreversibility and uncertainty. We derive the equilibrium conditions where the net benefits of the technology equal the costs that would justify a delay. The sooner information about the safety of the technology arrives, the lower the costs needed to justify a delay, i.e., the more it pays to delay. The costs of a delay can be substantial: e.g., a one-year delay in approval of the pod-borer-resistant cowpea in Nigeria will cost the country about 33 to 46 million USD and between 100 and 3,000 lives. PMID:28749984

  20. Simulation and Optimization of Large Scale Subsurface Environmental Impacts; Investigations, Remedial Design and Long Term Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deschaine, L.M.; Chalmers Univ. of Technology, Dept. of Physical Resources, Complex Systems Group, Goteborg

    2008-07-01

    The global impact to human health and the environment from large scale chemical / radionuclide releases is well documented. Examples are the widespread release of radionuclides from the Chernobyl nuclear reactors, the mobilization of arsenic in Bangladesh, the formation of Environmental Protection Agencies in the United States, Canada and Europe, and the like. The fiscal costs of addressing and remediating these issues on a global scale are astronomical, but then so are the fiscal and human health costs of ignoring them. An integrated methodology for optimizing the response(s) to these issues is needed. This work addresses the development of optimal policy design for large scale, complex, environmental issues. It discusses the development, capabilities, and application of a hybrid system of algorithms that optimizes the environmental response. It is important to note that 'optimization' does not singularly refer to cost minimization, but to the effective and efficient balance of cost, performance, risk, management, and societal priorities along with uncertainty analysis. This tool integrates all of these elements into a single decision framework. It provides a consistent approach to designing optimal solutions that are tractable, traceable, and defensible. The system is modular and scalable. It can be applied either as individual components or in total. By developing the approach in a complex systems framework, the solution methodology represents a significant improvement over the non-optimal 'trial and error' approach to environmental response(s). Subsurface environmental processes are represented by linear and non-linear, elliptic and parabolic equations. The state equations solved using numerical methods include multi-phase flow (water, soil gas, NAPL) and multicomponent transport (radionuclides, heavy metals, volatile organics, explosives, etc.). Genetic programming is used to generate the simulators either when simulation models do not exist, or to extend their accuracy. The uncertainty and sparse nature of information in earth science simulations necessitate stochastic representations. For discussion purposes, the solution to these site-wide challenges is divided into three sub-components: plume finding, long term monitoring, and site-wide remediation. Plume finding is the optimal estimation of the plume fringe(s) at a specified time. It is optimized by fusing geo-stochastic flow and transport simulations with the information content of data using a Kalman filter. The result is an optimal monitoring sensor network; the decision variable is the location(s) of sensors in three dimensions. Long term monitoring extends this concept and integrates the spatial-time correlations to optimize the decision variables of where to sample and when to sample over the project life cycle. Optimization of the location and timing of samples to meet the desired accuracy of temporal plume movement is accomplished using enumeration or genetic algorithms. The remediation optimization solves the multi-component, multiphase system of equations and incorporates constraints on life-cycle costs, maximum annual costs, and maximum allowable annual discharge (for assessing the monitored natural attenuation solution), as well as constraints on where remedial system component(s) can be located; management overrides to force certain solutions to be chosen are also incorporated in the solution design.
It uses a suite of optimization techniques, including the outer approximation method, Lipschitz global optimization, genetic algorithms, and the like. The automated optimal remedial design algorithm requires that a stable simulator be available for the simulated process. This is commonly the case for all of the above specifications except true three-dimensional multiphase flow. Much work is currently being conducted in the industry to develop stable 3D, three-phase simulators. If needed, an interim heuristic algorithm is available to get close to optimal for these conditions. This system provides the full capability to optimize multi-source, multiphase, and multicomponent sites. The results of applying just components of these algorithms have produced predicted savings of as much as $90,000,000 (US) when compared to alternative solutions. Investment in a pilot program to test the model saved 100% of the $20,000,000 predicted for the smaller test implementation. This was done without loss of effectiveness, and the work received an award from United States Vice President, and later Nobel Peace Prize winner, Al Gore. (authors)

  1. Optimal groundwater security management policies by control of inexact health risks under dual uncertainty in slope factors.

    PubMed

    Lu, Hongwei; Li, Jing; Ren, Lixia; Chen, Yizhong

    2018-05-01

    Groundwater remediation is a complicated, time-consuming, and costly undertaking that should be carefully controlled by appropriate groundwater management. This study develops an integrated optimization method for groundwater remediation management regarding cost, contamination distribution, and health risk under multiple uncertainties. Integrating health risk into groundwater remediation optimization management not only adequately considers the influence of health risk on optimal remediation strategies, but also completes remediation optimization design and risk assessment simultaneously. A fuzzy chance-constrained programming approach is presented to handle multiple uncertain properties in the process of health risk assessment. The capabilities and effectiveness of the developed method are illustrated through application to a naphthalene-contaminated site in Anhui, China. Results indicate that (a) the pump-and-treat remediation system leads to low naphthalene contamination but high remediation cost for short-term remediation, while natural attenuation significantly affects naphthalene removal from groundwater for long-term remediation; (b) the weighting coefficients have significant influences on the remediation cost and on the performance for both naphthalene concentrations and health risks; (c) an increased level of the slope factor (sf) for naphthalene corresponds to more optimal strategies characterized by higher environmental benefits and lower economic sacrifice. The developed method could simultaneously benefit public health and environmental protection. Decision makers can obtain the most appropriate remediation strategies according to their specific requirements, with high flexibility across economic, environmental, and risk concerns. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desired layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as an optimization model to reduce the time and cost spent in thin-film fabrication. The optimization model's engine has been developed in Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. The model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate, and substrate temperature. The results have turned out to be promising, and it can be concluded that the performance of the model is satisfactory for this parameter-optimization problem. Future work could compare GSA with other nature-inspired algorithms and test them with various data sets.
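
    A compact sketch of the GSA mechanics the record describes, applied to a generic four-parameter objective: agent masses are derived from fitness, a decaying gravitational constant scales the attractions, and randomly weighted accelerations update the velocities. The test objective, bounds, G0, and alpha below are assumptions, not the paper's sputtering model.

```python
import numpy as np

def objective(X):
    # Stand-in for a process-quality model over four normalized parameters
    # (e.g. RF power, deposition time, O2 flow rate, substrate temperature).
    return np.sum((X - 0.5) ** 2, axis=1)

def gsa(n_agents=20, dim=4, iters=200, G0=100.0, alpha=20.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, (n_agents, dim))
    V = np.zeros_like(X)
    for t in range(iters):
        fit = objective(X)
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst - 1e-12)   # best agent -> mass ~1
        M = m / m.sum()
        G = G0 * np.exp(-alpha * t / iters)          # decaying gravity
        A = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            # Acceleration: randomly weighted attractions (agent's own
            # mass cancels out of F/M, leaving the attractors' masses).
            w = rng.random(n_agents)
            A[i] = np.sum((w * G * M / dist)[:, None] * diff, axis=0)
        V = rng.random(X.shape) * V + A
        X = np.clip(X + V, 0.0, 1.0)
    i_best = int(np.argmin(objective(X)))
    return X[i_best], float(objective(X)[i_best])

x_opt, f_opt = gsa()
print("best normalized parameters:", x_opt.round(3), "| objective:", round(f_opt, 6))
```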

  3. Global cost and weight evaluation of fuselage keel design concepts

    NASA Technical Reports Server (NTRS)

    Flynn, B. W.; Morris, M. R.; Metschan, S. L.; Swanson, G. D.; Smith, P. J.; Griess, K. H.; Schramm, M. R.; Humphrey, R. J.

    1993-01-01

    The Boeing program entitled Advanced Technology Composite Aircraft Structure (ATCAS) is focused on the application of affordable composite technology to pressurized fuselage structure of future aircraft. As part of this effort, a design study was conducted on the keel section of the aft fuselage. A design build team (DBT) approach was used to identify and evaluate several design concepts which incorporated different material systems, fabrication processes, structural configurations, and subassembly details. The design concepts were developed in sufficient detail to accurately assess their potential for cost and weight savings as compared with a metal baseline representing current wide-body technology. The cost and weight results, along with an appraisal of performance and producibility risks, are used to identify a globally optimized keel design, one which offers the most promising cost and weight advantages over metal construction. Lastly, an assessment is given of the potential for further cost and weight reductions of the selected keel design during local optimization.

  4. Culture Moderates Biases in Search Decisions.

    PubMed

    Pattaratanakun, Jake A; Mak, Vincent

    2015-08-01

    Prior studies suggest that people often search insufficiently in sequential-search tasks compared with the predictions of benchmark optimal strategies that maximize expected payoff. However, those studies were mostly conducted in individualist Western cultures; Easterners from collectivist cultures, with their higher susceptibility to escalation of commitment induced by sunk search costs, could exhibit a reversal of this undersearch bias by searching more than optimally, but only when search costs are high. We tested our theory in four experiments. In our pilot experiment, participants generally undersearched when search cost was low, but only Eastern participants oversearched when search cost was high. In Experiments 1 and 2, we obtained evidence for our hypothesized effects via a cultural-priming manipulation on bicultural participants in which we manipulated the language used in the program interface. We obtained further process evidence for our theory in Experiment 3, in which we made sunk costs nonsalient in the search task; as expected, cross-cultural effects were largely mitigated. © The Author(s) 2015.

  5. Reducing Operating Costs by Optimizing Space in Facilities

    DTIC Science & Technology

    2012-03-01

    Base level 5 engineering units will provide facility floor plans, furniture layouts, and staffing documentation as necessary. One obstacle...due to the quantity and diverse locations. Base level engineering units provided facility floor plans, furniture layouts, and staffing documentation... furniture purchases and placement 5. Follow a quality systematic process in all decisions The per person costs can be better understood with a real

  6. Assessing the accuracy of wildland fire situation analysis (WFSA) fire size and suppression cost estimates.

    Treesearch

    Geoffrey H. Donovan; Peter. Noordijk

    2005-01-01

    To determine the optimal suppression strategy for escaped wildfires, federal land managers are required to conduct a wildland fire situation analysis (WFSA). As part of the WFSA process, fire managers estimate final fire size and suppression costs. Estimates from 58 WFSAs conducted during the 2002 fire season are compared to actual outcomes. Results indicate that...

  7. Contract management techniques for improving construction quality

    DOT National Transportation Integrated Search

    1997-07-01

    Efforts to improve quality in highway construction embrace many aspects of the construction process. Quality goals include enhanced efficiency and productivity, optimal cost and delivery time, improved performance, and changes in attitude-promoting a...

  8. Throughput Optimization of Continuous Biopharmaceutical Manufacturing Facilities.

    PubMed

    Garcia, Fernando A; Vandiver, Michael W

    2017-01-01

    In order to operate profitably under different product demand scenarios, biopharmaceutical companies must design their facilities with mass output flexibility in mind. Traditional biologics manufacturing technologies pose operational challenges in this regard due to their high costs and slow equipment turnaround times, restricting the types of products and mass quantities that can be processed. Modern plant design, however, has facilitated the development of lean and efficient bioprocessing facilities through footprint reduction and adoption of disposable and continuous manufacturing technologies. These development efforts have proven to be crucial in seeking to drastically reduce the high costs typically associated with the manufacturing of recombinant proteins. In this work, mathematical modeling is used to optimize annual production schedules for a single-product commercial facility operating with a continuous upstream and discrete batch downstream platform. Utilizing cell culture duration and volumetric productivity as process variables in the model, and annual plant throughput as the optimization objective, 3-D surface plots are created to understand the effect of process and facility design on expected mass output. The model shows that once a plant has been fully debottlenecked it is capable of processing well over a metric ton of product per year. Moreover, the analysis helped to uncover a major limiting constraint on plant performance, the stability of the neutralized viral inactivated pool, which may indicate that this should be a focus of attention during future process development efforts. LAY ABSTRACT: Biopharmaceutical process modeling can be used to design and optimize manufacturing facilities and help companies achieve a predetermined set of goals. One way to perform optimization is by making the most efficient use of process equipment in order to minimize the expenditure of capital, labor and plant resources. To that end, this paper introduces a novel mathematical algorithm used to determine the most optimal equipment scheduling configuration that maximizes the mass output for a facility producing a single product. The paper also illustrates how different scheduling arrangements can have a profound impact on the availability of plant resources, and identifies limiting constraints on the plant design. In addition, simulation data is presented using visualization techniques that aid in the interpretation of the scientific concepts discussed. © PDA, Inc. 2017.
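
    The core of such a throughput model can be illustrated in a few lines: sweep cell-culture duration and volumetric productivity, compute annual upstream output, and cap it with a discrete downstream batch capacity. Every plant parameter below is an assumed round number, not the paper's; a real model would also track the pool-stability constraint the authors identify.

```python
import numpy as np

# Assumed plant parameters (illustrative only).
REACTOR_VOL_L = 500.0
TURNAROUND_DAYS = 14.0           # clean/prep between culture campaigns
DOWNSTREAM_BATCH_KG = 15.0       # purification batch size
DOWNSTREAM_BATCHES_PER_YEAR = 80

def annual_output_kg(culture_days, productivity_g_per_L_day):
    campaigns = 330.0 // (culture_days + TURNAROUND_DAYS)  # ~330 operating days
    upstream_kg = (campaigns * culture_days * productivity_g_per_L_day
                   * REACTOR_VOL_L / 1e3)
    downstream_cap_kg = DOWNSTREAM_BATCH_KG * DOWNSTREAM_BATCHES_PER_YEAR
    return min(upstream_kg, downstream_cap_kg)             # plant bottleneck

durations = np.arange(20, 61, 5)              # days of continuous culture
productivities = np.arange(0.5, 2.51, 0.25)   # g/L/day
grid = [(d, p, annual_output_kg(d, p)) for d in durations for p in productivities]
d_best, p_best, out_best = max(grid, key=lambda r: r[2])
print(f"max ~{out_best:.0f} kg/yr at {d_best} days and {p_best:.2f} g/L/day")
```

    Evaluating such a grid is exactly what produces the 3-D surface plots the abstract mentions; the flat region where the downstream cap binds is where debottlenecking pays off.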

  9. Analysis of Intergrade Variables In The Fuzzy C-Means And Improved Algorithm Cat Swarm Optimization(FCM-ISO) In Search Segmentation

    NASA Astrophysics Data System (ADS)

    Saragih, Jepronel; Salim Sitompul, Opim; Situmorang, Zakaria

    2017-12-01

    One of the techniques known in data mining is clustering. The image segmentation process does not always represent the actual image, which happens when the combination of algorithms is unable to obtain optimal cluster centers. This research searches for the smallest error in the results of a Fuzzy C-Means process optimized with an improved Cat Swarm Optimization algorithm, developed by adding an inertia weight to the energy term of the tracing mode. With this parameter, the most optimal cluster centers, those closest to the data, can be determined and the data clustered around them. The inertia weights considered in this research are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9; the results for the different inertia values (W) are compared and the smallest is taken. From this weighting analysis, the inertia value that produces the smallest cost function can be obtained.
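
    For reference, the plain Fuzzy C-Means iteration that the FCM-ISO method builds on is shown below; the inertia-weighted Cat Swarm refinement of the cluster centres is not reproduced here, only the clustering core and the cost function the swarm would minimize.

```python
import numpy as np

def fcm(X, n_clusters=3, m=2.0, iters=100, seed=0):
    """Standard Fuzzy C-Means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                    # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1))
                   * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    cost = float(np.sum((U ** m) * d ** 2))  # objective a swarm could refine
    return centers, U, cost

# Toy data: three well-separated 2-D blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in (0.0, 2.0, 4.0)])
centers, U, cost = fcm(X)
print("centers:\n", centers.round(2), "\ncost:", round(cost, 2))
```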

  10. Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.

    PubMed

    Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P

    2010-12-22

    Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
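
    The scheduling idea generalizes beyond Roundup: predict each job's runtime from a fitted cost model, then submit jobs in a deliberate order rather than randomly. A hedged sketch, with an invented linear runtime model and a longest-predicted-first greedy packing onto identical nodes:

```python
import heapq

def predicted_runtime_hr(pair) -> float:
    """Assumed fitted cost model: runtime grows with the product of gene counts."""
    a, b = pair
    return 1e-7 * a * b + 0.05

def makespan_lpt(jobs, n_nodes=8) -> float:
    """Longest-predicted-first greedy packing; returns estimated makespan."""
    nodes = [0.0] * n_nodes
    heapq.heapify(nodes)
    for job in sorted(jobs, key=predicted_runtime_hr, reverse=True):
        heapq.heappush(nodes, heapq.heappop(nodes) + predicted_runtime_hr(job))
    return max(nodes)

genomes = [4000, 25000, 6000, 13000, 21000, 5500, 30000]   # gene counts (made up)
pairs = [(a, b) for i, a in enumerate(genomes) for b in genomes[i + 1:]]
print(f"predicted makespan: {makespan_lpt(pairs):.2f} h on 8 nodes")
```

    Sorting the largest genome-to-genome comparisons first is what keeps big jobs from straggling at the end of a reservation, which is the effect behind the reported 40% cost saving over random submission.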

  11. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  12. Hitting the Optimal Vaccination Percentage and the Risks of Error: Why to Miss Right.

    PubMed

    Harvey, Michael J; Prosser, Lisa A; Messonnier, Mark L; Hutton, David W

    2016-01-01

    To determine the optimal level of vaccination coverage, defined as the level that minimizes total costs, and to explore how economic results change with marginal changes to this level of coverage. A susceptible-infected-recovered-vaccinated model designed to represent theoretical infectious diseases was created to simulate disease spread. Parameter inputs were defined to include ranges that could represent a variety of possible vaccine-preventable conditions. Costs included vaccine costs and disease costs. Health benefits were quantified as monetized quality-adjusted life years lost from disease. Primary outcomes were the number of infected people and the total costs of vaccination. Optimization methods were used to determine the population vaccination coverage that achieved a minimum cost given disease and vaccine characteristics. Sensitivity analyses explored the effects of changes in reproductive rates, costs, and vaccine efficacies on primary outcomes. Further analysis examined the additional cost incurred if the optimal coverage levels were not achieved. Results indicate that the relationship between vaccine and disease cost is the main driver of the optimal vaccination level. Under a wide range of assumptions, vaccination beyond the optimal level is less expensive than vaccination below the optimal level. This observation did not hold when the cost of the vaccine becomes approximately equal to the cost of disease. These results suggest that vaccination below the optimal level of coverage is more costly than vaccinating beyond the optimal level. This work helps provide information for assessing the impact of changes in vaccination coverage at a societal level.
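
    The paper's asymmetry argument can be reproduced qualitatively with a few lines of simulation: sweep the initially vaccinated fraction of an SIR-type model and total up vaccine and disease costs. All epidemiological and cost parameters below are assumptions, not the study's calibrated values.

```python
import numpy as np

N, R0, GAMMA = 100_000, 2.5, 0.1           # population, basic reproduction, recovery
COST_PER_DOSE, COST_PER_CASE = 20.0, 800.0  # assumed monetized costs

def total_infected(coverage, dt=0.1, days=365):
    """Euler-stepped SIR with the vaccinated fraction starting immune."""
    beta = R0 * GAMMA
    S, I = N * (1 - coverage) - 10, 10.0
    cum = I
    for _ in range(int(days / dt)):
        new_inf = dt * beta * S * I / N
        S, I = S - new_inf, I + new_inf - dt * GAMMA * I
        cum += new_inf
    return cum

coverages = np.linspace(0.0, 0.9, 19)
costs = [COST_PER_DOSE * c * N + COST_PER_CASE * total_infected(c)
         for c in coverages]
print(f"cost-minimizing coverage ~ {coverages[int(np.argmin(costs))]:.0%}")
# The cost curve is asymmetric, as the record argues: it climbs steeply below
# the optimum (epidemic costs) but only gently above it (extra doses only).
```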

  13. A stochastic optimal feedforward and feedback control methodology for superagility

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.

    1992-01-01

    A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law, such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined. The feedforward control law is designed to optimize one cost function, while the feedback law optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package: ACET.

  14. Strategy of arm movement control is determined by minimization of neural effort for joint coordination.

    PubMed

    Dounskaia, Natalia; Shimansky, Yury

    2016-06-01

    Optimality criteria underlying organization of arm movements are often validated by testing their ability to adequately predict hand trajectories. However, kinematic redundancy of the arm allows production of the same hand trajectory through different joint coordination patterns. We therefore consider movement optimality at the level of joint coordination patterns. A review of studies of multi-joint movement control suggests that a 'trailing' pattern of joint control is consistently observed during which a single ('leading') joint is rotated actively and interaction torque produced by this joint is the primary contributor to the motion of the other ('trailing') joints. A tendency to use the trailing pattern whenever the kinematic redundancy is sufficient and increased utilization of this pattern during skillful movements suggests optimality of the trailing pattern. The goal of this study is to determine the cost function minimization of which predicts the trailing pattern. We show that extensive experimental testing of many known cost functions cannot successfully explain optimality of the trailing pattern. We therefore propose a novel cost function that represents neural effort for joint coordination. That effort is quantified as the cost of neural information processing required for joint coordination. We show that a tendency to reduce this 'neurocomputational' cost predicts the trailing pattern and that the theoretically developed predictions fully agree with the experimental findings on control of multi-joint movements. Implications for future research of the suggested interpretation of the trailing joint control pattern and the theory of joint coordination underlying it are discussed.

  15. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  16. Evolutionary and anthropological perspectives on optimal foraging in obesogenic environments.

    PubMed

    Lieberman, Leslie Sue

    2006-07-01

    The nutrition transition has created an obesogenic environment resulting in a growing obesity pandemic. An optimal foraging approach provides cost/benefit models of cognitive, behavioral and physiological strategies that illuminate the causes of caloric surfeit and consequent obesity in current environments of abundant food cues; easy-access and reliable food patches; low processing costs and enormous variety of energy-dense foods. Experimental and naturalistic observations demonstrate that obesogenic environments capitalize on human proclivities by displaying colorful advertising, supersizing meals, providing abundant variety, increasing convenience, and utilizing distractions that impede monitoring of food portions during consumption. The globalization of fast foods propels these trends.

  17. Application of the GA-BP Neural Network in Earthwork Calculation

    NASA Astrophysics Data System (ADS)

    Fang, Peng; Cai, Zhixiong; Zhang, Ping

    2018-01-01

    The calculation of earthwork quantity is a key factor in determining the project cost estimate and optimizing the scheme, and it is of great significance in earth and rock excavation works. Using the optimization principle of the GA-BP intelligent algorithm and a database of earthwork quantities and cost information, we design a GA-BP neural network intelligent computing model. Through network training and learning, the accuracy of the results meets the requirements of actual engineering construction standards. The approach provides a new way to perform such calculations for other projects and has good potential for wider adoption.
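
    A minimal GA-BP-flavoured sketch, under stated assumptions: a genetic algorithm searches the weights of a tiny feed-forward network on a toy earthwork-volume dataset. Real GA-BP typically uses the GA result to seed backpropagation; that refinement step is omitted here, and the data and network sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (80, 3))     # e.g. scaled length, width, depth (toy data)
y = X.prod(axis=1)                 # toy "true" earthwork volume

H = 6                              # hidden units
n_w = 3 * H + H + H + 1            # W1 + b1 + W2 + b2

def mse(w):
    """Forward pass of a 3-H-1 tanh network; returns mean squared error."""
    W1 = w[:3 * H].reshape(3, H); b1 = w[3 * H:4 * H]
    W2 = w[4 * H:5 * H];          b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

pop = rng.normal(0, 1, (60, n_w))
for gen in range(200):
    fit = np.array([mse(ind) for ind in pop])
    elite = pop[np.argsort(fit)[:20]]                   # truncation selection
    parents = elite[rng.integers(0, 20, (40, 2))]
    mask = rng.random((40, n_w)) < 0.5                  # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    children += rng.normal(0, 0.1, children.shape) * (rng.random((40, n_w)) < 0.1)
    pop = np.vstack([elite, children])                  # mutation + elitism
print("best MSE after GA search:", round(min(mse(ind) for ind in pop), 5))
```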

  18. Optimization of municipal solid waste collection and transportation routes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Swapan, E-mail: swapan2009sajal@gmail.com; Bhattacharyya, Bidyut Kr., E-mail: bidyut53@yahoo.co.in

    2015-09-15

    Highlights: • Profitable integrated solid waste management system. • Optimal municipal waste collection scheme between the sources and waste collection centres. • Optimal path calculation between waste collection centres and transfer stations. • Optimal waste routing between the transfer stations and processing plants. Abstract: Optimization of municipal solid waste (MSW) collection and transportation through source separation has become one of the major concerns in MSW management system design, because existing MSW management systems suffer from high collection and transportation costs. Generally, the waste sources of a city are scattered throughout it in a heterogeneous way, which increases the collection and transportation cost of the waste management system. A shortest waste collection and transportation strategy can therefore effectively reduce these costs. In this paper, we propose an optimal MSW collection and transportation scheme that focuses on minimizing the length of each waste collection and transportation route. We first formulate the MSW collection and transportation problem as a mixed integer program. We then propose a heuristic solution to the waste collection and transportation problem that provides an optimal way to collect and transport waste. Extensive simulations and real testbed results show that the proposed solution can significantly improve MSW performance, reducing the total waste collection path length by more than 30%.
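
    The routing objective can be made concrete with a much simpler stand-in than the paper's mixed integer program: a nearest-neighbour heuristic from the depot over made-up bin coordinates. It illustrates the quantity being minimized, not the authors' algorithm.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def collection_route(depot, bins):
    """Greedy nearest-neighbour tour: depot -> all bins -> back to depot."""
    route, remaining, pos, total = [depot], list(bins), depot, 0.0
    while remaining:
        nxt = min(remaining, key=lambda b: dist(pos, b))
        total += dist(pos, nxt)
        route.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    total += dist(pos, depot)        # return leg to the transfer station
    return route + [depot], total

depot = (0.0, 0.0)
bins = [(2, 1), (5, 4), (1, 6), (7, 2), (3, 8), (6, 7)]  # invented coordinates
route, length = collection_route(depot, bins)
print("route:", route)
print(f"total path length: {length:.2f}")
```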

  19. Propulsion and Power Rapid Response Research and Development (R&D) Support. Task Order 0004: Advanced Propulsion Fuels R&D, Subtask: Optimization of Lipid Production and Processing of Microalgae for the Development of Biofuels

    DTIC Science & Technology

    2013-02-01

    Purified cultures are tested for optimized production under heterotrophic conditions with several organic carbon sources like beet and sorghum juice using ...Moreover, AFRL support sponsored the Master’s in Chemical Engineering project titled “Cost Analysis Of Local Bio- Products Processing Plant Using ...unlimited. 2.5 Screening for High Lipid Production Mutants Procedure: A selection of 84 single colony cultures was analyzed in this phase using the

  20. Stochastic optimal control of ultradiffusion processes with application to dynamic portfolio management

    NASA Astrophysics Data System (ADS)

    Marcozzi, Michael D.

    2008-12-01

    We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.

  1. A Review of Industrial Heat Exchange Optimization

    NASA Astrophysics Data System (ADS)

    Yao, Junjie

    2018-01-01

    A heat exchanger is an energy exchange device that transfers heat from one working medium to another; it is widely used in the petrochemical industry, HVAC and refrigeration, aerospace, and many other fields. The optimal design and efficient operation of heat exchangers and heat transfer networks are of great significance for the process industry in realizing energy conservation and reducing production costs and energy consumption. In this paper, the optimization of heat exchangers, optimization algorithms, and heat exchanger optimization with different objective functions are discussed. Then, optimizations of the heat exchanger and the heat exchanger network considering different conditions are compared and analysed. Finally, the problems discussed are summarized and future directions are proposed.

  2. Optimal Preventive Maintenance Schedule based on Lifecycle Cost and Time-Dependent Reliability

    DTIC Science & Technology

    2011-11-10

    Page 1 of 16 UNCLASSIFIED: Distribution Statement A. Approved for public release. 12IDM-0064 Optimal Preventive Maintenance Schedule based... 1 . INTRODUCTION Customers and product manufacturers demand continued functionality of complex equipment and processes. Degradation of material...Documentation Page Form ApprovedOMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response

  3. Process Mining-Based Method of Designing and Optimizing the Layouts of Emergency Departments in Hospitals.

    PubMed

    Rismanchian, Farhood; Lee, Young Hoon

    2017-07-01

    This article proposes an approach to help designers analyze complex care processes and identify the optimal layout of an emergency department (ED) considering several objectives simultaneously. These objectives include minimizing the distances traveled by patients, maximizing design preferences, and minimizing the relocation costs. Rising demand for healthcare services leads to increasing demand for new hospital buildings as well as renovating existing ones. Operations management techniques have been successfully applied in both manufacturing and service industries to design more efficient layouts. However, high complexity of healthcare processes makes it challenging to apply these techniques in healthcare environments. Process mining techniques were applied to address the problem of complexity and to enhance healthcare process analysis. Process-related information, such as information about the clinical pathways, was extracted from the information system of an ED. A goal programming approach was then employed to find a single layout that would simultaneously satisfy several objectives. The layout identified using the proposed method improved the distances traveled by noncritical and critical patients by 42.2% and 47.6%, respectively, and minimized the relocation costs. This study has shown that an efficient placement of the clinical units yields remarkable improvements in the distances traveled by patients.
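
    The goal-programming machinery can be shown on a deliberately tiny example: two continuous placement variables, three goals (travel distance, a design preference, relocation cost) with under/over-deviation variables, and a weighted deviation objective solved as a linear program. All coefficients, targets, and weights are invented; the paper's actual model assigns clinical units to locations.

```python
import numpy as np
from scipy.optimize import linprog

# Variable order: [x1, x2, n1, p1, n2, p2, n3, p3], where (n_i, p_i) are the
# under/over deviations for goal i. Unwanted deviations are penalized:
#   p1 (travel over target, weight 5), n2 (preference under, weight 3),
#   p3 (relocation cost over, weight 1).
c = np.array([0, 0, 0, 5, 3, 0, 0, 1], dtype=float)

A_eq = np.array([
    [4,  3, 1, -1, 0,  0, 0,  0],   # travel:      4*x1 + 3*x2 + n1 - p1 = 20
    [-1, 1, 0,  0, 1, -1, 0,  0],   # preference:  x2 - x1     + n2 - p2 = 5
    [2,  5, 0,  0, 0,  0, 1, -1],   # relocation:  2*x1 + 5*x2 + n3 - p3 = 20
], dtype=float)
b_eq = np.array([20, 5, 20], dtype=float)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 8)
x1, x2 = res.x[:2]
print(f"placements: x1={x1:.2f}, x2={x2:.2f} | weighted deviation={res.fun:.2f}")
```

    Here the goals conflict (meeting the preference fully would blow the relocation budget), so the solver trades a small preference shortfall against cost overrun according to the weights, which is exactly the single-layout compromise the article seeks.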

  4. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

    Traditional power plants only utilize about 30 percent of the primary energy that they consume, and the rest of the energy is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emission achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of a practical CHP operational algorithm that minimizes operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; and (d) an easy-to-implement, effective, and reliable hourly building load prediction algorithm.

  5. A variation reduction allocation model for quality improvement to minimize investment and quality costs by considering suppliers’ learning curve

    NASA Astrophysics Data System (ADS)

    Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.

    2016-02-01

    Quality improvement must be performed in a company to maintain the competitiveness of its products in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components used in the assembly of a final product, so quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate variance reduction is developed. Variance reduction is an important element of quality improvement for both the manufacturer and the suppliers. To improve the quality of suppliers' components, the manufacturer must invest part of its financial resources in the suppliers' learning processes. The objective function of the model is to minimize the total cost, which consists of the investment cost and the quality costs, both internal and external. The learning curve determines how the suppliers' employees respond to the learning processes in reducing the variance of the components.

  6. Reduce on the Cost of Photovoltaic Power Generation for Polycrystalline Silicon Solar Cells by Double Printing of Ag/Cu Front Contact Layer

    NASA Astrophysics Data System (ADS)

    Peng, Zhuoyin; Liu, Zhou; Chen, Jianlin; Liao, Lida; Chen, Jian; Li, Cong; Li, Wei

    2018-06-01

    With the development of the photovoltaic industry, the cost of photovoltaic power generation has become a significant issue, and the metallization process determines both the raw-material cost and the photovoltaic efficiency of the solar cells. A double-printing process has now been introduced in place of one-step printing for the front contact of polycrystalline silicon solar cells, which can effectively improve their photovoltaic conversion efficiency. Here, the relatively cheap Cu paste replaces the expensive Ag paste to form an Ag/Cu composite front contact for silicon solar cells. The photovoltaic performance and the cost of photovoltaic power generation have been investigated. With optimization of the structure and height of the Cu finger layer in the Ag/Cu composite double-printed front contact, the silicon solar cells exhibited a photovoltaic conversion efficiency of 18.41%, reducing the cost of photovoltaic power generation by 3.42 cents per watt.

  7. Economic analysis of pilot-scale production of B-phycoerythrin.

    PubMed

    Torres-Acosta, Mario A; Ruiz-Ruiz, Federico; Aguilar-Yáñez, José M; Benavides, Jorge; Rito-Palomares, Marco

    2016-11-01

    β-Phycoerythrin is a colored protein with several applications, from food coloring to molecular labeling. Depending on the application, different purities are required, affecting production cost and price. Different production and purification strategies for B-phycoerythrin have been developed; the most studied are based on production using Porphyridium cruentum and purification using chromatographic techniques or aqueous two-phase systems. The use of the latter can result in a less expensive and more intensive recovery of the protein, but a proper economic analysis of the effect of using aqueous two-phase systems in a scaled-up process has been lacking. This study analyzed the production of B-phycoerythrin using real data obtained during the scale-up of a bioprocess, with specialized software (BioSolve, Biopharm Services, UK). First, a sensitivity analysis was performed to identify parameters critical to the production cost; then a Monte Carlo analysis emulated real processes by adding uncertainty to the identified parameters. Next, the bioprocess was analyzed to determine its financial attractiveness, and possible optimization strategies were tested and discussed. Results show that aqueous two-phase systems retain their advantages of low cost and intensive recovery (54.56%); the calculated production costs per gram (US$15,709 before titer optimization and US$2,374 after) would allow a potential company adopting this production method to obtain profits in the range of US$ millions over a 10-year period, judging by comparison of the production cost against commercial prices. The bioprocess analyzed is a promising and profitable method for the generation of highly purified B-phycoerythrin. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1472-1479, 2016. © 2016 American Institute of Chemical Engineers.

  8. Production technology optimization of biscuit baked by electric-contact way

    NASA Astrophysics Data System (ADS)

    Sidorenko, G. A.; Popov, V. P.; Khanina, T. V.; Maneeva, E. Sh; Krasnova, M. S.

    2018-03-01

    Electric-contact baking retains more of the nutrients used in biscuit making. As a result of optimizing the biscuit production technology, it was established that the optimal amount of starch substituted for flour is 30-62.5% and the optimal amount of egg melange is 184-200%; under these conditions the complex indicator of organoleptic properties exceeds 340 degrees, the complex indicator of physical and chemical properties exceeds 3.3 degrees, and the specific energy cost of the electric-contact baking process is less than 100 W/kg.

  9. The optimization of l-lactic acid production from sweet sorghum juice by mixed fermentation of Bacillus coagulans and Lactobacillus rhamnosus under unsterile conditions.

    PubMed

    Wang, Yong; Chen, Changjing; Cai, Di; Wang, Zheng; Qin, Peiyong; Tan, Tianwei

    2016-10-01

    Reducing the cost of raw material and sterilization could increase the economic feasibility of l-lactic acid fermentation, so the development of a cost-effective and efficient process is highly desired. To improve the efficiency of open fermentation by Lactobacillus rhamnosus based on sweet sorghum juice (SSJ) and to overcome the sucrose-utilization deficiency of Bacillus coagulans, a mixed fermentation was developed. The optimization of pH, sugar concentration, and fermentation medium was also studied. Under mixed fermentation with controlled pH, a higher yield of 96.3% was achieved, compared to 68.8% in sole Lactobacillus rhamnosus fermentation. With an optimized sugar concentration and a stepwise-controlled pH, the l-lactic acid titer, yield, and productivity reached 121 g L⁻¹, 94.6%, and 2.18 g L⁻¹ h⁻¹, respectively. Furthermore, corn steep powder (CSP), a cheap source of nitrogen and salts, proved to be an efficient supplement to SSJ in this process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Optimization of the Electrochemical Extraction and Recovery of Metals from Electronic Waste Using Response Surface Methodology

    DOE PAGES

    Diaz, Luis A.; Clark, Gemma G.; Lister, Tedd E.

    2017-06-08

    The rapid growth of electronic waste can be viewed both as an environmental threat and as an attractive source of minerals that can reduce the mining of natural resources and stabilize the market for critical materials such as rare earths. In this article, response surface methodology was used to optimize a previously developed electrochemical recovery process for base metals from electronic waste using a mild oxidant (Fe 3+). Through this process an effective extraction of base metals can be achieved, enriching the concentration of precious metals and significantly reducing the environmental impacts and operational costs associated with waste generation and chemical consumption. The optimization was performed using a bench-scale system specifically designed for this process. Operational parameters such as flow rate, applied current density, and iron concentration were optimized to reduce the specific energy consumption of the electrochemical recovery process to 1.94 kWh per kg of metal recovered at a processing rate of 3.3 g of electronic waste per hour.
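
    The generic response-surface workflow used here can be sketched as: fit a full second-order polynomial to experimental runs, then minimize the fitted surface over the coded factor ranges. The "runs" below are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (20, 3))     # coded flow rate, current density, [Fe3+]
true = lambda x: (2.5 + 0.8 * x[:, 0]**2 + 1.1 * x[:, 1]**2 + 0.6 * x[:, 2]**2
                  - 0.4 * x[:, 0] + 0.3 * x[:, 1] * x[:, 2])
y = true(X) + rng.normal(0, 0.05, 20)   # "measured" kWh/kg with noise

def design(X):
    """Full quadratic model matrix: intercept, linear, interaction, square terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)   # fit the surface

g = np.linspace(-1, 1, 41)
grid = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T  # evaluate on a grid
pred = design(grid) @ beta
best = grid[np.argmin(pred)]
print("coded optimum (flow, current, Fe3+):", best.round(2),
      "| predicted kWh/kg:", round(float(pred.min()), 3))
```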

  11. Distributed Wind Competitiveness Improvement Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-05-01

    The Competitiveness Improvement Project (CIP) is a periodic solicitation through the U.S. Department of Energy and its National Renewable Energy Laboratory. Manufacturers of small and medium wind turbines are awarded cost-shared grants via a competitive process to optimize their designs, develop advanced manufacturing processes, and perform turbine testing. The goals of the CIP are to make wind energy cost competitive with other distributed generation technology and increase the number of wind turbine designs certified to national testing standards. This fact sheet describes the CIP and funding awarded as part of the project.

  12. Optimal periodic proof test based on cost-effective and reliability criteria

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
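
    The trade-off the paper optimizes can be reproduced with a small Monte Carlo sketch: a higher proof load destroys more weak structures during testing but lowers in-service failure risk, so the expected total cost has an interior minimum. The distributions and costs below are assumptions, and a single proof test stands in for the paper's periodic schedule.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000
strength = rng.normal(2.0, 0.35, N)       # in multiples of design load (assumed)
service_load = rng.normal(1.0, 0.20, N)   # peak load over the service period

C_TEST, C_DESTROYED, C_FAILURE = 1.0, 50.0, 5000.0   # assumed relative costs

def expected_cost(proof_load):
    destroyed = strength < proof_load                 # fail the proof test
    fails = (~destroyed) & (service_load > strength)  # survive test, fail in service
    return (C_TEST
            + C_DESTROYED * destroyed.mean()
            + C_FAILURE * fails.mean())

levels = np.arange(0.8, 1.81, 0.05)
costs = [expected_cost(r) for r in levels]
print(f"optimal proof load ~ {levels[int(np.argmin(costs))]:.2f} x design load, "
      f"expected cost {min(costs):.2f}")
```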

  14. Multi-objective optimization of swash plate forging process parameters for the die wear/service life improvement

    NASA Astrophysics Data System (ADS)

    Hu, X. F.; Wang, L. G.; Wu, H.; Liu, S. S.

    2017-12-01

    For the forging process of the swash plate, a multi-index orthogonal experiment was designed. Based on the Archard wear model, the influences of billet temperature, die temperature, forming speed, top-die hardness, and friction coefficient on forming load and die wear were numerically simulated with the DEFORM software. Through analysis of the experimental results, the best forging process parameters were determined; these can effectively reduce die wear and prolong die service life. The results are significant for practical production, especially for reducing production cost and increasing enterprise profit.
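    For reference, the Archard relation underlying such wear simulations takes the form V = K·F·s/H: wear volume grows with load and sliding distance and falls with hardness. A minimal evaluation, with illustrative parameter values only:

```python
def archard_wear_volume(K, load_N, sliding_m, hardness_Pa):
    """Archard wear model: V = K * F * s / H (wear volume in m^3)."""
    return K * load_N * sliding_m / hardness_Pa

# Hypothetical die/billet values, not the paper's data.
V = archard_wear_volume(K=1e-4, load_N=5e5, sliding_m=0.3, hardness_Pa=6e9)
print(f"predicted wear volume: {V:.3e} m^3")
```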

  15. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is the maximum limit outline produced by various adverse effects during train operation, and it is an important basis for setting railway clearance boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, but existing systems suffer from poor portability, a complicated process, and high cost. In this paper, a new measurement system based on monocular vision measurement theory is designed from an analysis of the test environment; the system parameters, the calibration of the wide-field-of-view camera, and the calibration of the laser plane are designed and optimized. Repeated tests and analysis of the experimental data verify an accuracy of up to 2 mm and validate the feasibility and adaptability of the measurement system. The system offers lower cost, a simpler measurement and data processing process, and more reliable data, and it needs no matching algorithm.

  16. Method and apparatus for manufacturing gas tags

    DOEpatents

    Gross, K.C.; Laug, M.T.

    1996-12-17

    For use in the manufacture of gas tags employed in a gas tagging failure detection system for a nuclear reactor, a plurality of commercial feed gases each having a respective noble gas isotopic composition are blended under computer control to provide various tag gas mixtures having selected isotopic ratios which are optimized for specified defined conditions such as cost. Using a new approach employing a discrete variable structure rather than the known continuous-variable optimization problem, the computer controlled gas tag manufacturing process employs an analytical formalism from condensed matter physics known as stochastic relaxation, which is a special case of simulated annealing, for input feed gas selection. For a tag blending process involving M tag isotopes with N distinct feed gas mixtures commercially available from an enriched gas supplier, the manufacturing process calculates the cost difference between multiple combinations and specifies gas mixtures which approach the optimum defined conditions. The manufacturing process is then used to control tag blending apparatus incorporating tag gas canisters connected by stainless-steel tubing with computer controlled valves, with the canisters automatically filled with metered quantities of the required feed gases. 4 figs.

  17. Method and apparatus for manufacturing gas tags

    DOEpatents

    Gross, Kenny C.; Laug, Matthew T.

    1996-01-01

    For use in the manufacture of gas tags employed in a gas tagging failure detection system for a nuclear reactor, a plurality of commercial feed gases each having a respective noble gas isotopic composition are blended under computer control to provide various tag gas mixtures having selected isotopic ratios which are optimized for specified defined conditions such as cost. Using a new approach employing a discrete variable structure rather than the known continuous-variable optimization problem, the computer controlled gas tag manufacturing process employs an analytical formalism from condensed matter physics known as stochastic relaxation, which is a special case of simulated annealing, for input feed gas selection. For a tag blending process involving M tag isotopes with N distinct feed gas mixtures commercially available from an enriched gas supplier, the manufacturing process calculates the cost difference between multiple combinations and specifies gas mixtures which approach the optimum defined conditions. The manufacturing process is then used to control tag blending apparatus incorporating tag gas canisters connected by stainless-steel tubing with computer controlled valves, with the canisters automatically filled with metered quantities of the required feed gases.
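    A minimal sketch of the stochastic-relaxation idea in the two patent records above: a discrete state (integer units of each feed gas) is perturbed and accepted by the Metropolis rule under a cooling temperature, trading isotopic-ratio mismatch against feed cost. The feed compositions, costs, and target ratios below are invented for illustration.

```python
import math, random
random.seed(1)

N_FEEDS, M_ISOTOPES = 8, 3
feeds = []
for _ in range(N_FEEDS):
    row = [random.random() for _ in range(M_ISOTOPES)]
    feeds.append([v / sum(row) for v in row])       # isotopic fractions
unit_cost = [random.uniform(1, 5) for _ in range(N_FEEDS)]
target = [0.5, 0.3, 0.2]                            # desired isotopic ratios

def energy(sel):  # mismatch to target plus a small cost penalty
    total = sum(sel) or 1
    mix = [sum(sel[i] * feeds[i][k] for i in range(N_FEEDS)) / total
           for k in range(M_ISOTOPES)]
    mismatch = sum((m - t) ** 2 for m, t in zip(mix, target))
    return mismatch + 0.01 * sum(s * c for s, c in zip(sel, unit_cost))

state, T = [1] * N_FEEDS, 1.0
best, best_e = state[:], energy(state)
for _ in range(5000):
    cand = state[:]
    i = random.randrange(N_FEEDS)
    cand[i] = max(0, cand[i] + random.choice((-1, 1)))   # discrete move
    dE = energy(cand) - energy(state)
    if dE < 0 or random.random() < math.exp(-dE / T):    # Metropolis accept
        state = cand
        if energy(state) < best_e:
            best, best_e = state[:], energy(state)
    T *= 0.999                                           # geometric cooling
print("best unit counts per feed gas:", best)
```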

  18. Low-Cost Detection of Thin Film Stress during Fabrication

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy A.

    2015-01-01

    NASA's Marshall Space Flight Center has developed a simple, cost-effective optical method for thin film stress measurements during growth and/or subsequent annealing processes. Stress arising in thin film fabrication presents production challenges for electronic devices, sensors, and optical coatings; it can lead to substrate distortion and deformation, impacting the performance of thin film products. NASA's technique measures in-situ stress using a simple, noncontact fiber optic probe in the thin film vacuum deposition chamber. This enables real-time monitoring of stress during the fabrication process and allows for efficient control of deposition process parameters. By modifying process parameters in real time during fabrication, thin film stress can be optimized or controlled, improving thin film product performance.

  19. High-efficiency cell concepts on low-cost silicon sheets

    NASA Technical Reports Server (NTRS)

    Bell, R. O.; Ravi, K. V.

    1985-01-01

    The limitations of sheet-growth material in terms of defect structure and minority carrier lifetime are discussed, and the effect of various defects on performance is estimated. Given these limitations, designs for a sheet-growth cell that make the best use of the material characteristics are proposed, with the aim of achieving optimum synergy between base material quality and device processing variables. A strong coupling exists between material quality, the variables during crystal growth, and device processing variables. Two objectives are outlined: (1) optimization of this coupling for maximum performance at minimal cost; and (2) decoupling of materials from processing by improving base material quality so that it is less sensitive to processing variables.

  20. Optimal technology investment strategies for a reusable launch vehicle

    NASA Technical Reports Server (NTRS)

    Moore, A. A.; Braun, R. D.; Powell, R. W.

    1995-01-01

    Within the present budgetary environment, developing the technology that leads to an operationally efficient space transportation system with the required performance is a challenge. The present research focuses on a methodology to determine high payoff technology investment strategies. Research has been conducted at Langley Research Center in which design codes for the conceptual analysis of space transportation systems have been integrated in a multidisciplinary design optimization approach. The current study integrates trajectory, propulsion, weights and sizing, and cost disciplines where the effect of technology maturation on the development cost of a single stage to orbit reusable launch vehicle is examined. Results show that the technology investment prior to full-scale development has a significant economic payoff. The design optimization process is used to determine strategic allocations of limited technology funding to maximize the economic payoff.

  1. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight, and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.

  2. Optimal Medical Equipment Maintenance Service Proposal Decision Support System combining Activity Based Costing (ABC) and the Analytic Hierarchy Process (AHP).

    PubMed

    da Rocha, Leticia; Sloane, Elliot; M Bassani, Jose

    2005-01-01

    This study describes a framework to support the choice of maintenance service (in-house or third-party contract) for each category of medical equipment based on: a) the medical equipment maintenance management system currently used by the biomedical engineering group of the public health system of the Universidade Estadual de Campinas, Brazil; b) the Activity Based Costing (ABC) method; and c) the Analytic Hierarchy Process (AHP) method. Results show the cost and performance associated with each type of maintenance service. Decision-makers can use these results to evaluate possible strategies for each category of equipment.
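    The AHP step can be sketched compactly: criteria weights are the principal eigenvector of a pairwise-comparison matrix, checked for consistency. The criteria and comparison values below are hypothetical, not taken from the study.

```python
import numpy as np

# Pairwise comparisons (Saaty 1-9 scale) for three illustrative criteria:
# cost vs. response time vs. service quality.
A = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                       # normalized priority weights
print("priority weights:", w.round(3))

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
lam = np.real(vals).max()
CI = (lam - len(A)) / (len(A) - 1)
print("consistency ratio:", round(CI / 0.58, 3))  # < 0.1 is acceptable
```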

  3. Effect of land tenure and stakeholders' attitudes on optimization of conservation practices in agricultural watersheds

    NASA Astrophysics Data System (ADS)

    Piemonti, A. D.; Babbar-Sebens, M.; Luzar, E. J.

    2012-12-01

    Modeled watershed management plans have become valuable tools for evaluating the effectiveness and impacts of conservation practices on hydrologic processes in watersheds. In multi-objective optimization approaches, several studies have focused on maximizing the physical, ecological, or economic benefits of practices in a specific location, without considering the relationship between social systems and social attitudes and the overall optimality of the practice at that location. For example, objectives commonly used in spatial optimization of practices are economic costs, sediment loads, nutrient loads, and pesticide loads. Though the benefits derived from these objectives are generally oriented toward community preferences, they do not represent the attitudes of landowners, who might operate their land differently than their neighbors (e.g., farm their own land or rent the land to someone else) and might have different social/personal drivers that motivate them to adopt the practices. In addition, a distribution of such landowners could exist in the watershed, leading to spatially varying preferences for practices. In this study we evaluated the effect of three different land tenure types on the spatial optimization of conservation practices, performing the optimization with both a uniform distribution of land tenure type and a spatially varying distribution. Our results show that for a typical Midwestern agricultural watershed, the best solutions (i.e., highest benefits for minimum economic costs) were found for a uniform distribution of landowners who operate their own land. When a different land tenure was used for the watershed, the optimized alternatives did not change significantly in nitrate reduction benefits and sediment reduction benefits, but those benefits were attained at economic costs much higher than the costs for landowners who farm their own land. For example, landowners who rent to cash-renters would have to spend ~120% more than landowners who operate their own land to attain the same benefits. We also tested the effect of different social attitudes on the final preferences among the optimized alternatives and the consequences for the total effectiveness of the standard optimization approaches. The results suggest that, for example, when practices were removed from the system due to landowners' attitudes driven by economic profits, the modified alternatives experienced a decrease in nitrate reduction of 2-50%, in peak flow reduction of 11-98%, and in sediment reduction of 20-77%.

  4. [Imaging center - optimization of the imaging process].

    PubMed

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a large part of the success, but also of the costs, of treatment. In routine work, an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of capacity, without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient; they are useless and threaten the financial situation and existence of the hospital. In recent years, the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center), and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams.

  5. Simulation Modeling of Software Development Processes

    NASA Technical Reports Server (NTRS)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and for the sensitivity analysis of such indices to changes in organization parameters and user requirements. The approach uses a timed Petri net and object-oriented, top-down model specification. Results demonstrate the model's representativeness and its usefulness in verifying process conformance to expectations and in performing continuous process improvement and optimization.

  6. Implementation of new pavement performance prediction models in PMIS : report

    DOT National Transportation Integrated Search

    2012-08-01

    Pavement performance prediction models and maintenance and rehabilitation (M&R) optimization processes enable managers and engineers to plan and prioritize pavement M&R activities in a cost-effective manner. This report describes TxDOT's effort...

  7. Theoretical and experimental researches on the operating costs of a wastewater treatment plant

    NASA Astrophysics Data System (ADS)

    Panaitescu, M.; Panaitescu, F.-V.; Anton, I.-A.

    2015-11-01

    Purpose of the work: The total cost of a sewage treatment plant is often determined by the present-value method: all annual operating costs for each process are converted to today's value and added to the investment costs for each process, which yields the net present value. The costs of sewage plants are generally subdivided into investment and operating costs; the latter can be fixed (normal operation and maintenance, staffing and power) or variable (chemicals and power, sludge treatment and disposal, effluent charges). Cost functions can be used to evaluate preliminary costs so that, in the early phase of a project, an installation can be chosen between different alternatives. In this paper, the operational cost is calculated for several scenarios in order to optimize it; the total operational cost (fixed and variable) depends on the global parameters of the wastewater treatment plant. Research and methodology: The wastewater treatment plant costs are subdivided into investment and operating costs. Different cost functions can be used to estimate fixed and variable operating costs; in this study, statistical formulas were used for the cost functions. Economic analysis was applied to study the impact of the influent characteristics on the costs. Optimization of the plant design consists, firstly, in assessing the ability of the smallest design to treat the maximum loading rates to a given effluent quality and, secondly, in comparing the cost of the two alternatives for average and maximum loading rates. Results: We obtained statistical values for the investment cost functions and the fixed and variable operational cost functions of the wastewater treatment plant, together with their graphical representations. All costs were compared as net present values. We observe that it is more economical to build a larger plant, especially if maximum loading rates are reached. The practical target of operational management is to implement the presented cost functions directly in a software tool in which the design of a plant and the simulation of its behaviour are evaluated simultaneously.
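    The present-value comparison described above is a one-liner in code. The sketch below discounts constant annual operating costs, adds them to the investment, and compares two plant sizes; the discount rate, horizon, and cost figures are invented for illustration.

```python
def net_present_cost(investment, annual_opex, rate, years):
    """Investment plus discounted annual operating costs (net present value)."""
    pv_opex = sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))
    return investment + pv_opex

# Hypothetical small vs. large plant, 5% discount rate, 20-year horizon.
small = net_present_cost(investment=4.0e6, annual_opex=0.9e6, rate=0.05, years=20)
large = net_present_cost(investment=5.5e6, annual_opex=0.7e6, rate=0.05, years=20)
print(f"small plant: {small:,.0f}   large plant: {large:,.0f}")
```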

  8. The Business Change Initiative: A Novel Approach to Improved Cost and Schedule Management

    NASA Technical Reports Server (NTRS)

    Shinn, Stephen A.; Bryson, Jonathan; Klein, Gerald; Lunz-Ruark, Val; Majerowicz, Walt; McKeever, J.; Nair, Param

    2016-01-01

    Goddard Space Flight Center's Flight Projects Directorate employed a Business Change Initiative (BCI) to infuse a series of activities coordinated to drive improved cost and schedule performance across Goddard's missions. This sustaining change framework provides a platform to manage and implement cost and schedule control techniques throughout the project portfolio. The BCI concluded in December 2014, deploying over 100 cost and schedule management changes including best practices, tools, methods, training, and knowledge sharing. The new business approach has driven the portfolio to improved programmatic performance. The last eight launched GSFC missions have optimized cost, schedule, and technical performance on a sustained basis to deliver on time and within budget, returning funds in many cases. While not every future mission will boast such strong performance, improved cost and schedule tools, management practices, and ongoing comprehensive evaluations of program planning and control methods to refine and implement best practices will continue to provide a framework for sustained performance. This paper will describe the tools, techniques, and processes developed during the BCI and the utilization of collaborative content management tools to disseminate project planning and control techniques to ensure continuous collaboration and optimization of cost and schedule management in the future.

  9. Optimization of monitoring and inspections in the life-cycle of wind turbines

    NASA Astrophysics Data System (ADS)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2016-04-01

    The past decade has witnessed a surge in offshore wind farm developments across the world. Although this form of cleaner, greener energy is beneficial and eco-friendly, the production of wind energy entails high life-cycle costs. The costs associated with inspections, monitoring, and repairs of wind turbines are primary contributors to the high cost of electricity produced in this way and are disadvantageous in today's competitive economic environment. Limited research has been done on the probabilistic optimization of the life-cycle costs of offshore wind turbine structures and their components. This paper proposes a framework for assessing the life-cycle cost of wind turbine structures subject to damage and deterioration. The objective is to develop a mathematical probabilistic cost assessment framework that considers deterioration, inspection, monitoring, repair, and maintenance models and their uncertainties. The uncertainties lie in the accuracy and precision of the monitoring and inspection methods and can be captured through the probability of damage detection of each method. Schedules for inspection, monitoring, and repair actions are demonstrated using a decision tree. Examples of a generalized deterioration process integrated with the cost analysis using a decision tree are shown for a wind turbine foundation structure.
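    The decision-tree logic can be condensed into a one-node expected-cost sketch: an inspection method with a given probability of detection (PoD) either catches damage and triggers a repair, or misses it and leaves a failure risk. All probabilities and costs below are illustrative placeholders for the framework's deterioration and detection models.

```python
C_INSPECT, C_REPAIR, C_FAIL = 5.0, 40.0, 2000.0
P_DAMAGE = 0.08          # chance damage is present at inspection time (assumed)
P_FAIL_GIVEN_MISS = 0.5  # chance undetected damage leads to failure (assumed)

def expected_cost(pod):
    detected = P_DAMAGE * pod * C_REPAIR                     # repair branch
    missed = P_DAMAGE * (1 - pod) * P_FAIL_GIVEN_MISS * C_FAIL  # failure branch
    return C_INSPECT + detected + missed

for pod in (0.5, 0.7, 0.9, 0.99):
    print(f"PoD={pod:.2f}: expected life-cycle cost contribution = "
          f"{expected_cost(pod):7.2f}")
```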

  10. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
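    A small evaluator for the objective above, under one common modeling choice: actual processing time a_j - b*t for a job started at time t, with weights for earliness, tardiness, window start, and window size. The weights, job data, and window are illustrative, and the paper's O(n log n) assignment algorithms are not reproduced here, only the cost they minimize.

```python
# Weights: earliness, tardiness, due window start, due window size (assumed).
ALPHA, BETA, GAMMA, DELTA = 2.0, 3.0, 1.0, 1.0

def total_cost(jobs, b, d1, d2):
    """Sum of earliness/tardiness plus window location and size costs."""
    t = 0.0
    cost = GAMMA * d1 + DELTA * (d2 - d1)   # window [d1, d2] assignment cost
    for a in jobs:                          # jobs given as basic times a_j
        t += a - b * t                      # completion under decreasing p_j(t)
        if t < d1:
            cost += ALPHA * (d1 - t)        # early job
        elif t > d2:
            cost += BETA * (t - d2)         # tardy job
    return cost

jobs = [4.0, 2.0, 5.0, 3.0]                 # illustrative basic processing times
print(round(total_cost(jobs, b=0.05, d1=6.0, d2=10.0), 3))
```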

  11. Toward Low-Cost, High-Energy Density, and High-Power Density Lithium-Ion Batteries

    DOE PAGES

    Li, Jianlin; Du, Zhijia; Ruther, Rose E.; ...

    2017-06-12

    Reducing cost and increasing energy density are two barriers for widespread application of lithium-ion batteries in electric vehicles. Although the cost of electric vehicle batteries has been reduced by ~70% from 2008 to 2015, the current battery pack cost ($268/kWh in 2015) is still more than twice the USABC target ($125/kWh). Even though many advancements in cell chemistry have been realized since the lithium-ion battery was first commercialized in 1991, few major breakthroughs have occurred in the past decade. Therefore, future cost reduction will rely on cell manufacturing and broader market acceptance. Here, this article discusses three major aspects for cost reduction: (1) quality control to minimize scrap rate in cell manufacturing; (2) novel electrode processing and engineering to reduce processing cost and increase energy density and throughputs; and (3) material development and optimization for lithium-ion batteries with high energy density. Insights on increasing the energy and power densities of lithium-ion batteries are also addressed.

  12. Toward Low-Cost, High-Energy Density, and High-Power Density Lithium-Ion Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jianlin; Du, Zhijia; Ruther, Rose E.

    Reducing cost and increasing energy density are two barriers for widespread application of lithium-ion batteries in electric vehicles. Although the cost of electric vehicle batteries has been reduced by ~70% from 2008 to 2015, the current battery pack cost ($268/kWh in 2015) is still more than twice the USABC target ($125/kWh). Even though many advancements in cell chemistry have been realized since the lithium-ion battery was first commercialized in 1991, few major breakthroughs have occurred in the past decade. Therefore, future cost reduction will rely on cell manufacturing and broader market acceptance. Here, this article discusses three major aspects for cost reduction: (1) quality control to minimize scrap rate in cell manufacturing; (2) novel electrode processing and engineering to reduce processing cost and increase energy density and throughputs; and (3) material development and optimization for lithium-ion batteries with high energy density. Insights on increasing the energy and power densities of lithium-ion batteries are also addressed.

  13. Toward Low-Cost, High-Energy Density, and High-Power Density Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Li, Jianlin; Du, Zhijia; Ruther, Rose E.; AN, Seong Jin; David, Lamuel Abraham; Hays, Kevin; Wood, Marissa; Phillip, Nathan D.; Sheng, Yangping; Mao, Chengyu; Kalnaus, Sergiy; Daniel, Claus; Wood, David L.

    2017-09-01

    Reducing cost and increasing energy density are two barriers for widespread application of lithium-ion batteries in electric vehicles. Although the cost of electric vehicle batteries has been reduced by 70% from 2008 to 2015, the current battery pack cost ($268/kWh in 2015) is still more than twice the USABC target ($125/kWh). Even though many advancements in cell chemistry have been realized since the lithium-ion battery was first commercialized in 1991, few major breakthroughs have occurred in the past decade. Therefore, future cost reduction will rely on cell manufacturing and broader market acceptance. This article discusses three major aspects for cost reduction: (1) quality control to minimize scrap rate in cell manufacturing; (2) novel electrode processing and engineering to reduce processing cost and increase energy density and throughputs; and (3) material development and optimization for lithium-ion batteries with high energy density. Insights on increasing the energy and power densities of lithium-ion batteries are also addressed.

  14. Economic feasibility and environmental impact of synthetic spider silk production from Escherichia coli.

    PubMed

    Edlund, Alan M; Jones, Justin; Lewis, Randolph; Quinn, Jason C

    2018-05-25

    Major ampullate spider silk represents a promising protein-based biomaterial with diverse commercial potential, ranging from textiles to medical devices, due to its excellent physical and thermal properties. Recent advancements in synthetic biology have facilitated the development of recombinant spider silk proteins from Escherichia coli (E. coli). This study specifically investigates the economic feasibility and environmental impact of synthetic spider silk manufacturing. Pilot-scale data were used to validate an engineering process model that includes all of the required sub-processing steps for synthetic fiber manufacture: production, harvesting, purification, drying, and spinning. The model was constructed modularly to support assessment of alternative downstream processing technologies. The techno-economic analysis indicates minimum sale prices from pioneer and optimized E. coli plants of $761 kg⁻¹ and $23 kg⁻¹, with greenhouse gas emissions of 572 kg CO₂-eq kg⁻¹ and 55 kg CO₂-eq kg⁻¹, respectively. Elevated costs and emissions from the pioneer plant can be tied directly to its high material consumption and low protein yield. The decreased production costs of the optimized plant reflect improved protein yield, process optimization, and an Nth-plant assumption. Discussion focuses on the commercial potential of spider silk, the production performance requirements for commercialization, and the impact of alternative technologies on the system.

  15. Cost considerations for long-term ecological monitoring

    USGS Publications Warehouse

    Caughlan, L.; Oakley, K.L.

    2001-01-01

    For an ecological monitoring program to be successful over the long term, the perceived benefits of the information must justify the cost. Financial limitations will always restrict the scope of a monitoring program, hence the program's focus must be carefully prioritized. Clearly identifying the costs and benefits of a program will assist in this prioritization process, but this is easier said than done. Frequently, the true costs of monitoring are not recognized and are, therefore, underestimated. Benefits are rarely evaluated, because they are difficult to quantify. The intent of this review is to assist the designers and managers of long-term ecological monitoring programs by providing a general framework for building and operating a cost-effective program. Previous considerations of monitoring costs have focused on sampling design optimization; we present cost considerations of monitoring in a broader context. We explore monitoring costs, including both budgetary costs--what dollars are spent on--and economic costs, which include opportunity costs. Often, the largest portion of a monitoring program budget is spent on data collection, and other critical aspects of the program, such as scientific oversight, training, data management, quality assurance, and reporting, are neglected. Recognizing and budgeting for all program costs is therefore a key factor in a program's longevity. The close relationship between statistical issues and cost is discussed, highlighting the importance of sampling design, replication and power, and comparing the costs of alternative designs through pilot studies and simulation modeling. A monitoring program development process that includes explicit checkpoints for considering costs is presented. The first checkpoints occur during the setting of objectives and during sampling design optimization. The last checkpoint occurs once the basic shape of the program is known, and the costs and benefits, or alternatively the cost-effectiveness, of each program element can be evaluated. Moving into the implementation phase without careful evaluation of costs and benefits is risky because if costs are later found to exceed benefits, the program will fail, and the costs of development, which can be quite high, will have been largely wasted. Realistic expectations of costs and benefits will help ensure that monitoring programs survive the early, turbulent stages of development and the challenges posed by fluctuating budgets during implementation.

  16. Reduced cost and improved figure of sapphire optical components

    NASA Astrophysics Data System (ADS)

    Walters, Mark; Bartlett, Kevin; Brophy, Matthew R.; DeGroote Nelson, Jessica; Medicus, Kate

    2015-10-01

    Sapphire presents many challenges to optical manufacturers due to its high hardness and anisotropic properties. Long lead times and high prices are the typical result of such challenges. The cost of even a simple 'grind and shine' process can be prohibitive. The high-precision surfaces required by optical sensor applications further exacerbate the challenge of processing sapphire, increasing cost further. Optimax has demonstrated a production process for such windows that delivers over 50% time reduction compared to traditional manufacturing processes for sapphire, while producing windows with less than 1/5 wave rms figure error. Optimax's sapphire production process achieves significant cost improvement by implementing a controlled grinding process to present the best possible surface to the polishing equipment. The grinding process is followed by a polishing process that takes advantage of chemical interactions between slurry and substrate to deliver excellent removal rates and surface finish. Through experiments, the mechanics of the polishing process were also optimized to produce excellent optical figure. In addition to reducing the cost of producing large sapphire sensor windows, the grinding and polishing technology Optimax has developed aids in producing spherical sapphire components with better figure quality. Through specially developed polishing slurries, the peak-to-valley figure error of spherical sapphire parts is reduced by over 80%.

  17. The latest developments and outlook for hydrogen liquefaction technology

    NASA Astrophysics Data System (ADS)

    Ohlig, K.; Decker, L.

    2014-01-01

    Liquefied hydrogen is presently used mainly for space applications and the semiconductor industry. While clean energy applications, e.g. in the automotive sector, currently contribute only a small share of this demand, that demand may see a significant boost in the coming years, requiring large-scale liquefaction plants far exceeding current plant sizes. Hydrogen liquefaction in small plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as the refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large-scale hydrogen liquefaction plants that meet the needs of clean energy applications with optimized energy efficiency and hence minimized operating costs. In contrast to other studies in this field, this paper focuses on the application of new technology and innovative concepts that are either readily available or will require only short qualification procedures, and that will therefore allow implementation in plants in the near future.

  18. Boiler tuning using SPO at Detroit Edison's River Rouge plant for best economic performance while minimizing NOₓ emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haman, R.L.; Kerry, T.G.; Jarc, C.A.

    1996-12-31

    A technology provided by Ultramax Corporation and EPRI, based on sequential process optimization (SPO), is being used as a cost-effective tool to gain improvements prior to decisions on capital-intensive solutions. This empirical method of optimization, called the ULTRAMAX® Method, can determine the best boiler capabilities and help delay, or even avoid, expensive retrofits or repowering. SPO can serve as a least-cost way to attain the right degree of compliance with current and future phases of the CAAA. Tuning ensures a staged strategy to stay ahead of emissions regulations, but not so far ahead as to cause regret for taking actions that ultimately are not mandated or warranted. One large utility investigating SPO as a tool to lower NOₓ emissions and to optimize boiler performance is Detroit Edison. The company has applied SPO to tune two coal-fired units at its River Rouge Power Plant to evaluate the technology for possible system-wide usage. Following the successful demonstration in reducing NOₓ from these units, SPO is being considered for use in other Detroit Edison fossil-fired plants. Tuning will first be used as a least-cost option to drive NOₓ to its lowest level through operating adjustments. In addition, optimization shows the true capability of the units and the margins available when the Phase 2 rules become effective in 2000. This paper includes a case study of the second tuning process and discusses the opportunities the technology affords.

  19. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  20. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  1. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
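    A toy sketch of the viewing-probability-weighted joint-layer RD cost described above: expected distortion across the base and enhancement layers is weighted by the chance each layer is actually viewed (derived from the packet loss rate) and traded against total rate. The rate/distortion-versus-QP models, the lambda value, and the probabilities are illustrative stand-ins for real encoder measurements.

```python
def layer_rd(qp):
    """Toy rate (kbps) and distortion models as functions of QP."""
    rate = 2000.0 * 2 ** (-(qp - 22) / 6)   # rate roughly halves per +6 QP
    dist = 0.5 * 2 ** ((qp - 22) / 3)       # distortion grows with QP
    return rate, dist

def joint_cost(qp_base, qp_enh, p_loss, lam=0.8):
    p_base_view = p_loss        # enhancement packet lost -> base layer viewed
    p_enh_view = 1 - p_loss     # otherwise the enhancement layer is viewed
    rb, db = layer_rd(qp_base)
    re, de = layer_rd(qp_enh)
    expected_dist = p_base_view * db + p_enh_view * de
    return expected_dist + lam * (rb + re) / 1000   # joint RD cost

best = min(((qb, qe) for qb in range(24, 38) for qe in range(24, 38)),
           key=lambda q: joint_cost(*q, p_loss=0.1))
print("QP pair minimizing the joint-layer RD cost:", best)
```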

  2. The optimization of air separation plants for combined-cycle MHD power plant applications

    NASA Technical Reports Server (NTRS)

    Juhasz, A. J.; Springmann, H.; Greenberg, R.

    1980-01-01

    Some of the design approaches being employed in a currently supported study directed at developing an improved air separation process for the production of oxygen-enriched air for magnetohydrodynamic (MHD) combustion are outlined. The ultimate objective is to arrive at conceptual designs of air separation plants, optimized for minimum specific power consumption and capital investment cost, for integration with MHD combined-cycle power plants.

  3. Development of a solar-powered residential air conditioner

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The initial objective of the program was the optimization (in terms of cost and performance) of a Rankine-cycle mechanical refrigeration system that utilizes thermal energy from a flat solar collector for air conditioning residential buildings. However, feasibility investigations of the adsorption process revealed that a desiccant-type air conditioner offers many significant advantages. As a result, limited efforts were expended toward the optimization of such a system.

  4. A Cost-Effective Approach to Optimizing Microstructure and Magnetic Properties in Ce₁₇Fe₇₈B₆ Alloys.

    PubMed

    Tan, Xiaohua; Li, Heyun; Xu, Hui; Han, Ke; Li, Weidan; Zhang, Fang

    2017-07-28

    Optimizing fabrication parameters for rapid solidification of Re-Fe-B (Re = rare earth) alloys can lead to nanocrystalline products with hard magnetic properties without any heat treatment. In this work, we enhanced the magnetic properties of Ce₁₇Fe₇₈B₆ ribbons by engineering both the microstructure and the volume fraction of the Ce₂Fe₁₄B phase through optimization of the chamber pressure and the wheel speed used for quenching the liquid. We explored the relationship between these two parameters and proposed an approach to identifying the experimental conditions most likely to yield a homogeneous microstructure and reproducible magnetic properties. Optimized experimental conditions resulted in a microstructure with homogeneously dispersed Ce₂Fe₁₄B and CeFe₂ nanocrystals. The best magnetic properties were obtained at a chamber pressure of 0.05 MPa and a wheel speed of 15 m·s⁻¹. Without the conventional heat treatment that is usually required, key magnetic properties were maximized by optimizing the processing parameters of rapid solidification in a cost-effective manner.

  5. A framework for designing and analyzing binary decision-making strategies in cellular systems†

    PubMed Central

    Porter, Joshua R.; Andrews, Burton W.; Iglesias, Pablo A.

    2015-01-01

    Cells make many binary (all-or-nothing) decisions based on noisy signals gathered from their environment and processed through noisy decision-making pathways. Reducing the effect of noise to improve the fidelity of decision-making comes at the expense of increased complexity, creating a tradeoff between performance and metabolic cost. We present a framework based on rate distortion theory, a branch of information theory, to quantify this tradeoff and design binary decision-making strategies that balance low cost and accuracy in optimal ways. With this framework, we show that several observed behaviors of binary decision-making systems, including random strategies, hysteresis, and irreversibility, are optimal in an information-theoretic sense for various situations. This framework can also be used to quantify the goals around which a decision-making system is optimized and to evaluate the optimality of cellular decision-making systems by a fundamental information-theoretic criterion. As proof of concept, we use the framework to quantify the goals of the externally triggered apoptosis pathway. PMID:22370552

  6. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.

    PubMed

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-21

    To address the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved by the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
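    A minimal sketch of the auto-adaptive idea: each individual carries its own mutation rate, which is inherited, blended, and perturbed alongside the solution, so the rate calibrates itself during evolution rather than being hand-tuned. The test problem (one-max) and all parameters are illustrative; the actual framework couples the GA to a watershed model and an economic module.

```python
import random
random.seed(0)

GENES, POP = 30, 40

def fitness(ind):
    return sum(ind["bits"])   # one-max toy objective

pop = [{"bits": [random.randint(0, 1) for _ in range(GENES)],
        "pm": random.uniform(0.01, 0.2)}        # self-adaptive mutation rate
       for _ in range(POP)]

for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                    # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(GENES)
        bits = a["bits"][:cut] + b["bits"][cut:]         # one-point crossover
        pm = 0.5 * (a["pm"] + b["pm"]) * random.uniform(0.9, 1.1)  # rate adapts
        bits = [x ^ (random.random() < pm) for x in bits]          # mutate
        children.append({"bits": bits, "pm": min(max(pm, 0.001), 0.5)})
    pop = parents + children

print("best fitness:", fitness(max(pop, key=fitness)))
```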

  7. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, under limitations on resource availability and available time, so that resources must be allocated to each project. These resources are human resources, machine resources, and raw material resources, and their allocation must not exceed the predetermined budget. The problem can thus be formulated mathematically as a multi-objective function with constraints to be fulfilled. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for selecting the right projects. A multi-objective genetic algorithm is then described as one such combinatorial optimization method that simplifies the project selection process at large scale.
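    The sketch below illustrates the two-objective selection on a tiny, enumerable instance: feasible project subsets are filtered by resource limits, then reduced to the Pareto set over (profit, cost). The project data and limits are invented; a genetic algorithm of the kind the paper describes would replace the exhaustive enumeration when the portfolio is too large to enumerate.

```python
from itertools import product

# Hypothetical projects: (profit, cost, labor units, machine units).
projects = [
    (60, 40, 3, 2), (45, 25, 2, 3), (30, 15, 1, 1), (80, 60, 4, 4), (55, 35, 2, 2),
]
BUDGET, LABOR, MACHINE = 100, 7, 7

def evaluate(sel):
    chosen = [p for p, s in zip(projects, sel) if s]
    profit = sum(p[0] for p in chosen)
    cost = sum(p[1] for p in chosen)
    feasible = (cost <= BUDGET
                and sum(p[2] for p in chosen) <= LABOR
                and sum(p[3] for p in chosen) <= MACHINE)
    return profit, cost, feasible

cands = [(evaluate(sel), sel) for sel in product((0, 1), repeat=len(projects))]
cands = [((p, c), sel) for (p, c, ok), sel in cands if ok]

# Keep selections not dominated by any other (higher profit AND lower cost).
pareto = [((p, c), sel) for (p, c), sel in cands
          if not any(p2 >= p and c2 <= c and (p2, c2) != (p, c)
                     for (p2, c2), _ in cands)]
for (p, c), sel in sorted(pareto):
    print(f"selection {sel}: profit={p}, cost={c}")
```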

  8. [A program for optimizing the use of antimicrobials (PROA): experience in a regional hospital].

    PubMed

    Ugalde-Espiñeira, J; Bilbao-Aguirregomezcorta, J; Sanjuan-López, A Z; Floristán-Imízcoz, C; Elorduy-Otazua, L; Viciola-García, M

    2016-08-01

    Programs for optimizing the use of antimicrobials (PROA), or antimicrobial stewardship programs, are multidisciplinary programs developed in response to the increase in antibiotic-resistant bacteria; their objectives are to improve clinical results, minimize adverse events, and reduce costs associated with the use of antimicrobials. The implementation of a PROA in a 128-bed general hospital and the results obtained at 6 months are reported here. A quasi-experimental intervention study with a historical control group was designed to assess the impact of a PROA using a non-restrictive, prescription-support intervention model with direct and bidirectional intervention. The basis of the program is an audit of antimicrobial use with personalized, non-mandatory recommendations, supported by information technologies applied to this setting. The impact on pharmaceutical consumption and costs, cost per process, mean hospital stay, and percentage of hospital readmissions is described. A total of 307 audits were performed. In 65.8% of cases, treatment was discontinued between the 7th and 10th day. The main reasons for treatment discontinuation were completion of treatment (43.6%) and lack of indication (14.7%). The reduction in pharmaceutical expenditure was 8.59% (P = 0.049), and the reduction in consumption was 5.61% in DDD/100 stays (P = 0.180). Costs per process in general surgery decreased by 3.14% (P < 0.001). The results obtained support the efficiency of these programs in small hospitals with limited resources.

  9. The choice: Welding with CO2 or Nd:YAG lasers

    NASA Astrophysics Data System (ADS)

    Leong, Keng H.

    The recent commercial availability of multi-kilowatt Nd:YAG lasers has opened new avenues for rapid laser processing as well as intensified the competition (cost effectiveness) between CO2 and Nd:YAG laser systems. Vendors offering Nd:YAG laser systems may claim lower operating costs (than CO2) and fiberoptic beam delivery flexibility while CO2 systems vendors may emphasize lower capital cost and well established processing requirements and experience. The capital and operating costs of a laser system are impacted by demand and supply economics and technological advances. Frequently the total cost of a workcell using a laser for processing has to be considered rather than the laser system alone. Consequently it is not very practical to approach the selection of a laser system based on its capital cost and estimated operating cost only. This presentation describes a more pragmatic approach to aid the user in the selection of the optimal multi-kilowatt laser system for a particular processing requirement with emphasis on welding. CO2 laser systems are well established on the factory floor. Consequently, emphasis is given to the comparative application of Nd:YAG lasers, process requirements and performance. Requirements for the laser welding of different metals are examined in the context of hardware (laser system and beam delivery) selection and examples of welding speeds that can be achieved using CO2 and Nd:YAG lasers are examined.

  10. An Analysis of the Hidden Costs of Competition in the Procurement of Spare Parts at the Navy Ships Parts Control Center: A Framework for Process Improvement

    DTIC Science & Technology

    1992-03-01

    Setting sub-optimal goals and quotas, erecting barriers between departments, and awarding contracts primarily on price are all anti-TQM practices that hinder customer focus. ... Surveys are often required to determine whether a lower competitive price could be achieved before exercising options; this requirement is sub-optimal.

  11. Putting the process of care into practice.

    PubMed

    Houck, S; Baum, N

    1997-01-01

    "Putting the process of care into practice" provides an interactive, visual model of outpatient resources and processes. It illustrates an episode of care from a fee-for-service as well as managed care perspective. The Care Process Matrix can be used for planning and staffing, as well as retrospectively to assess appropriate resource use within a practice. It identifies effective strategies for reducing the cost per episode of care and optimizing quality while moving from managing costs to managing the care process. Because of an overbuilt health care system, including an oversupply of physicians, success in the future will require redesigning the process of care and a coherent customer service strategy. The growing complexities of practice will require physicians to focus on several key competencies while outsourcing other functions such as billing and contracting.

  12. An integrated radar model solution for mission level performance and cost trades

    NASA Astrophysics Data System (ADS)

    Hodge, John; Duncan, Kerron; Zimmerman, Madeline; Drupp, Rob; Manno, Mike; Barrett, Donald; Smith, Amelia

    2017-05-01

    A fully integrated Mission-Level Radar model is in development as part of a multi-year effort under the Northrop Grumman Mission Systems (NGMS) sector's Model Based Engineering (MBE) initiative to digitally interconnect and unify previously separate performance and cost models. In 2016, an NGMS internal research and development (IR&D)-funded multidisciplinary team integrated radio frequency (RF), power, control, size, weight, thermal, and cost models using the commercial off-the-shelf software ModelCenter for an Active Electronically Scanned Array (AESA) radar system. Each represented model was digitally connected with standard interfaces and unified to allow end-to-end mission system optimization and trade studies. The radar model was then linked to the Air Force's own mission modeling framework (AFSIM). The team first had to identify the necessary models and, with the aid of subject matter experts (SMEs), understand and document the inputs, outputs, and behaviors of the component models. This agile development process and collaboration enabled rapid integration of disparate models and validation of their combined system performance. This MBE framework will allow NGMS to design systems more efficiently and affordably, optimize architectures, and provide increased value to the customer. The model integrates detailed component models that validate cost and performance at the physics level with high-level models that provide visualization of a platform mission. This connectivity of component models to mission models allows hardware and software design solutions to be better optimized to meet mission needs, creating cost-optimal solutions for the customer while reducing design cycle time through risk mitigation and early validation of design decisions.

  13. Algorithms and analyses for stochastic optimization for turbofan noise reduction using parallel reduced-order modeling

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Gunzburger, Max

    2017-06-01

    Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
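    The risk measure itself is easy to state: CVaR at level α is the mean of the worst (1-α) fraction of outcomes, so minimizing it yields designs that remain robust in the tail of the uncertainty distribution. A minimal empirical version on synthetic samples, standing in for the acoustic objective evaluated under uncertain parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic objective samples (e.g., a noise metric under parameter uncertainty).
samples = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)

def cvar(x, alpha=0.95):
    """Empirical conditional value-at-risk: mean of the worst (1-alpha) tail."""
    var = np.quantile(x, alpha)    # value-at-risk threshold at level alpha
    return x[x >= var].mean()

print("VaR_95 :", round(float(np.quantile(samples, 0.95)), 3))
print("CVaR_95:", round(float(cvar(samples)), 3))
```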

  14. AN ADVANCED SYSTEM FOR POLLUTION PREVENTION IN CHEMICAL COMPLEXES

    EPA Science Inventory

    One important accomplishment is that the system will give process engineers interactive and simultaneous use of programs for total cost analysis, life cycle assessment, and sustainability metrics to provide direction for the optimal chemical complex analysis pro...

  15. Post-treatment of molasses wastewater by electrocoagulation and process optimization through response surface analysis.

    PubMed

    Tsioptsias, C; Petridis, D; Athanasakis, N; Lemonidis, I; Deligiannis, A; Samaras, P

    2015-12-01

    Molasses wastewater is a high-strength effluent of food industries such as distilleries and sugar and yeast production plants. It is characterized by a dark brown color and a high content of substances of recalcitrant nature, such as melanoidins. In this study, electrocoagulation (EC) was studied as a post-treatment step for biologically treated molasses wastewater with high nitrogen content obtained from a baker's yeast industry. Iron and copper electrodes were used in various forms; the influence and interaction of current density, molasses wastewater dilution, and reaction time on COD, color, ammonium, and nitrate removal rates and on operating cost were studied and optimized through Box-Behnken response surface analysis. Reaction time varied from 0.5 to 4 h, current density from 5 to 40 mA/cm², and dilution from 0 to 90% (v/v, expressed as water concentration). pH, conductivity, and temperature measurements were also carried out during each experiment. Preliminary experiments showed that the application of aeration and sample dilution considerably influenced the kinetics of the process. The results showed that COD removal varied between 10 and 54%, corresponding to an operating cost ranging from 0.2 to 33 euro/kg COD removed. Significant removal rates were obtained for nitrogen as nitrate and ammonium (i.e., 70% ammonium removal). A linear relation of COD and ammonium removal to the design parameters was observed, while operating cost and nitrate removal responded as curvilinear functions. A low ratio of electrode surface to treated volume was used, associated with a low investment cost; in addition, iron wastes such as filings from lathes could be utilized as low-cost electrodes, keeping the operating cost of electrode replacement low. In general, electrocoagulation proved to be an effective and low-cost process for additional removal of COD, nitrogen, and color from biologically treated molasses wastewater. Treated effluent samples of good quality were produced by EC, with COD, NH₄-N, and NO₃-N concentrations of 180, 52, and 2 mg/l, respectively. Response surface analysis revealed that optimized conditions could be established at moderate molasses wastewater dilution (e.g., 45%), 3.5 h treatment time, and 33 mA/cm² current density.

  16. 2010 Annual Report (National Defense Center for Energy and Environment)

    DTIC Science & Technology

    2010-01-01

    a prototype system for reforming JP-8 to enable the use of fuel cells in theater. Energy, water, and waste reduction are tied to NDCEE efforts to... modernize Army ammunition plants, reducing costs and ensuring a steady supply of ammunition to the warfighter. At Fort Campbell, KY, newly... operating costs at Holston and Radford Army Ammunition Plants. Weapon systems ESOHE efforts also included optimizing various depot maintenance processes

  17. Economic and environmental optimization of a multi-site utility network for an industrial complex.

    PubMed

    Kim, Sang Hun; Yoon, Sung-Geun; Chae, Song Hwa; Park, Sunwon

    2010-01-01

    Most chemical companies consume a lot of steam, water and electrical resources in the production process. Given recent record fuel costs, utility networks must be optimized to reduce the overall cost of production. Environmental concerns must also be considered when preparing modifications to satisfy the requirements for industrial utilities, since wastes discharged from the utility networks are restricted by environmental regulations. Construction of Eco-Industrial Parks (EIPs) has drawn attention as a promising approach for retrofitting existing industrial parks to improve energy efficiency. The optimization of the utility network within an industrial complex is one of the most important undertakings to minimize energy consumption and waste loads in the EIP. In this work, a systematic approach to optimize the utility network of an industrial complex is presented. An important issue in the optimization of a utility network is the desire of the companies to achieve high profits while complying with the environmental regulations. Therefore, the proposed optimization was performed with consideration of both economic and environmental factors. The proposed approach consists of unit modeling using thermodynamic principles, mass and energy balances, development of a multi-period Mixed Integer Linear Programming (MILP) model for the integration of utility systems in an industrial complex, and an economic/environmental analysis of the results. This approach is applied to the Yeosu Industrial Complex, considering seasonal utility demands. The results show that both the total utility cost and waste load are reduced by optimizing the utility network of an industrial complex. 2009 Elsevier Ltd. All rights reserved.
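
    The multi-period MILP described here can be sketched with the open-source PuLP library; the two-boiler system, demands, costs and the emission cap below are hypothetical stand-ins for the paper's Yeosu Industrial Complex data.

      import pulp

      periods = ["winter", "summer"]
      demand = {"winter": 120.0, "summer": 80.0}   # steam demand, t/h (hypothetical)

      # Two boilers with different capacities, fuel costs and emission factors.
      units = {"B1": dict(cap=100, cost=22.0, co2=0.09),
               "B2": dict(cap=60,  cost=28.0, co2=0.05)}
      co2_cap = 9.5                                 # emission limit per period (t/h)

      prob = pulp.LpProblem("utility_network", pulp.LpMinimize)
      on = pulp.LpVariable.dicts("on", (units, periods), cat="Binary")
      q = pulp.LpVariable.dicts("q", (units, periods), lowBound=0)

      # Fuel cost plus a fixed running cost for each unit that is switched on.
      prob += pulp.lpSum(units[u]["cost"] * q[u][t] + 400.0 * on[u][t]
                         for u in units for t in periods)

      for t in periods:
          prob += pulp.lpSum(q[u][t] for u in units) >= demand[t]
          prob += pulp.lpSum(units[u]["co2"] * q[u][t] for u in units) <= co2_cap
          for u in units:
              prob += q[u][t] <= units[u]["cap"] * on[u][t]

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      for t in periods:
          print(t, {u: q[u][t].value() for u in units})

    PuLP ships with the CBC solver, so the sketch runs without extra setup; the paper's model additionally spans multiple sites, seasons and waste-load terms.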

  18. Two-phase strategy of controlling motor coordination determined by task performance optimality.

    PubMed

    Shimansky, Yury P; Rand, Miya K

    2013-02-01

    A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.

  19. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper aims to find the optimal PID controller parameters using Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned. Two different fitness functions, the Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been carried out across the different algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
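
    A compact, self-contained version of the PSO-with-ITAE idea in this record might look as follows; the second-order DC-motor-like plant, the PSO hyperparameters and the gain bounds are illustrative assumptions, not the paper's benchmark models.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, T = 0.01, 5.0
      times = np.arange(0.0, T, dt)

      def itae(gains):
          # Closed-loop unit-step simulation (Euler) of a PID controller on a
          # DC-motor-like plant 1/(s(0.5s + 1)); returns the ITAE cost.
          kp, ki, kd = gains
          y = ydot = integ = cost = 0.0
          e_prev = 1.0
          for ti in times:
              e = 1.0 - y
              integ += e * dt
              u = kp * e + ki * integ + kd * (e - e_prev) / dt
              e_prev = e
              ydot += (u - ydot) / 0.5 * dt
              y += ydot * dt
              if abs(y) > 1e6:          # unstable candidate: penalize heavily
                  return 1e9
              cost += ti * abs(e) * dt
          return cost

      # Plain global-best PSO over (Kp, Ki, Kd).
      n, iters, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
      lo, hi = np.zeros(3), np.array([20.0, 10.0, 2.0])
      x = rng.uniform(lo, hi, (n, 3))
      v = np.zeros((n, 3))
      pbest, pcost = x.copy(), np.array([itae(p) for p in x])
      g = pbest[np.argmin(pcost)]
      for _ in range(iters):
          r1, r2 = rng.random((n, 3)), rng.random((n, 3))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
          x = np.clip(x + v, lo, hi)
          c = np.array([itae(p) for p in x])
          improved = c < pcost
          pbest[improved], pcost[improved] = x[improved], c[improved]
          g = pbest[np.argmin(pcost)]
      print("tuned (Kp, Ki, Kd):", np.round(g, 3), " ITAE:", round(pcost.min(), 4))

    The time weighting in ITAE penalizes errors that persist late in the response, so the tuned gains favour fast settling over a small initial overshoot.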

  20. Tobacco BY-2 Media Component Optimization for a Cost-Efficient Recombinant Protein Production

    PubMed Central

    Häkkinen, Suvi T.; Reuter, Lauri; Nuorti, Ninni; Joensuu, Jussi J.; Rischer, Heiko; Ritala, Anneli

    2018-01-01

    Plant cells constitute an attractive platform for production of recombinant proteins as more and more animal-free products and processes are desired. One of the challenges in using plant cells as production hosts has been the costs deriving from expensive culture medium components. In this work, the aim was to optimize the levels of most expensive components in the nutrient medium without compromising the accumulation of biomass and recombinant protein yields. Wild-type BY-2 culture and transgenic tobacco BY-2 expressing green fluorescent protein–Hydrophobin I (GFP-HFBI) fusion protein were used to determine the most inexpensive medium composition. One particularly high-accumulating BY-2 clone, named ‘Hulk,’ produced 1.1 ± 0.2 g/l GFP-HFBI in suspension and kept its high performance during prolonged subculturing. In addition, both cultures were successfully cryopreserved enabling truly industrial application of this plant cell host. With the optimized culture medium, 43–55% cost reduction with regard to biomass and up to 69% reduction with regard to recombinant protein production was achieved. PMID:29434617

  2. Optimal waste-to-energy strategy assisted by GIS For sustainable solid waste management

    NASA Astrophysics Data System (ADS)

    Tan, S. T.; Hashim, H.

    2014-02-01

    Municipal solid waste (MSW) management has become more complex and costly with rapid socio-economic development and increasing volumes of waste. Planning a sustainable regional waste management strategy is a critical step for the decision maker. There is great potential for MSW to be used to generate renewable energy through waste incineration or landfilling with a gas capture system. However, due to high processing costs and the cost of transporting and distributing resources between waste collection stations and power plants, MSW is mostly disposed of in landfills. This paper presents an optimization model incorporating GIS data inputs for MSW management. The model can design a multi-period waste-to-energy (WTE) strategy to illustrate the economic potential and tradeoffs of MSW management under different scenarios. The model is capable of determining the optimal generation, capacity, type of WTE conversion technology and location for the operation and construction of new WTE power plants to satisfy the increased energy demand by 2025 in the most profitable way. The Iskandar Malaysia region was chosen as the model city for this study.

  3. A Goal Programming Optimization Model for The Allocation of Liquid Steel Production

    NASA Astrophysics Data System (ADS)

    Hapsari, S. N.; Rosyidi, C. N.

    2018-03-01

    This research was conducted in one of the largest steel companies in Indonesia, which has several production units and produces a wide range of steel products. One of the important products of the company is billet steel. The company has four Electric Arc Furnaces (EAF) which produce liquid steel that must be processed further into billet steel. The billet steel plant needs to make its production process more efficient to increase productivity. The management has five goals to be achieved, and hence an optimal allocation of liquid steel production is needed to achieve those goals. In this paper, a goal programming optimization model is developed to determine the optimal allocation of liquid steel production in each EAF, to satisfy demand in three periods and the company goals, namely maximizing the volume of production, minimizing the cost of raw materials, minimizing maintenance costs, maximizing sales revenues, and maximizing production capacity. From the results of the optimization, only the goal of maximizing production capacity could not achieve its target. However, the model developed in this paper can optimally allocate liquid steel so that the allocation of production does not exceed the maximum capacity of machine work hours and the maximum production capacity.
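
    A goal program of the kind described can be sketched with deviation variables in PuLP; the two-furnace, single-period instance below, with its capacities, costs and targets, is hypothetical and much smaller than the paper's four-EAF, three-period model.

      import pulp

      eafs = ["EAF1", "EAF2"]
      cap = {"EAF1": 900.0, "EAF2": 700.0}       # max liquid steel, tons (hypothetical)
      raw_cost = {"EAF1": 310.0, "EAF2": 330.0}  # raw material cost per ton (hypothetical)

      prob = pulp.LpProblem("liquid_steel_goal_program", pulp.LpMinimize)
      x = pulp.LpVariable.dicts("prod", eafs, lowBound=0)

      # One-sided deviation variables: shortfall / overrun of each goal target.
      d_vol_under = pulp.LpVariable("vol_under", lowBound=0)
      d_cost_over = pulp.LpVariable("cost_over", lowBound=0)

      # Goal 1: total production of at least 1500 t (d_vol_under absorbs any shortfall).
      prob += pulp.lpSum(x[e] for e in eafs) + d_vol_under >= 1500
      # Goal 2: raw material spend of at most 480000 (d_cost_over absorbs any overrun).
      prob += pulp.lpSum(raw_cost[e] * x[e] for e in eafs) - d_cost_over <= 480000

      for e in eafs:
          prob += x[e] <= cap[e]

      # Weighted sum of unwanted deviations; weights encode goal priorities.
      prob += 100 * d_vol_under + 0.01 * d_cost_over

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print({e: x[e].value() for e in eafs},
            "under-volume:", d_vol_under.value(), "over-cost:", d_cost_over.value())

    Unlike a hard-constrained model, the goal program stays feasible when targets conflict; it simply reports which goals miss their targets and by how much.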

  4. A combined geostatistical-optimization model for the optimal design of a groundwater quality monitoring network

    NASA Astrophysics Data System (ADS)

    Kolosionis, Konstantinos; Papadopoulou, Maria P.

    2017-04-01

    Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in the Mires basin in Crete, Greece will be used in the proposed framework in terms of regression kriging to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool will determine a cost-effective observation well network that contributes significant information to water managers and authorities. The elimination of observation wells that add little or no beneficial information to the groundwater level and quality mapping of the area can be achieved using estimation uncertainty and statistical error metrics without affecting the assessment of groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.

  5. A method of network topology optimization design considering application process characteristic

    NASA Astrophysics Data System (ADS)

    Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo

    2018-03-01

    Communication networks are designed to meet the usage requirements of users for various network applications. Current studies of network topology optimization design mainly consider network traffic, which is the result of network application operation, rather than a design element of communication networks. A network application is a procedure by which users make use of services, with some demanded performance requirements, and it has an obvious process characteristic. In this paper, we propose a method to optimize the design of communication network topology that takes this application process characteristic into account. Taking the minimum network delay as the objective, and the cost of network design and network connective reliability as constraints, an optimization model of network topology design is formulated, and the optimal solution is searched for by a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under the background of multiple process-oriented applications, which can guide the generation of the initial population and thereby improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of the proposed method. Network topology optimization design that considers applications can improve the reliability of applications and provide guidance for network builders in the early stages of network design, which is of great significance in engineering practice.

  6. Out of control little-used clinical assets are draining healthcare budgets.

    PubMed

    Horblyuk, Ruslan; Kaneta, Kristopher; McMillen, Gary L; Mullins, Christopher; O'Brien, Thomas M; Roy, Ankita

    2012-07-01

    To improve utilization and reduce the cost of maintaining mobile clinical equipment, healthcare organization leaders should do the following: Select an initial asset group to target. Conduct a physical inventory. Evaluate the organization's asset "ecosystem." Optimize workflow processes. Phase in new processes, and phase out inventory. Devote time to change management. Develop a replacement strategy.

  7. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    NASA Astrophysics Data System (ADS)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays, quality plays a vital role in all products. Hence, developments in manufacturing processes focus on the fabrication of composites with high dimensional accuracy and low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi’s L16 orthogonal array is used for optimizing the individual tool parameters. Analysis of Variance (ANOVA) is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
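
    The grey relational step used for the simultaneous optimization can be illustrated numerically; the four runs and two responses below (surface roughness, smaller-the-better; material removal rate, larger-the-better) are invented, and the distinguishing coefficient of 0.5 is the conventional choice.

      import numpy as np

      # Hypothetical responses from 4 of the L16 runs: [Ra (um), MRR (mm^3/min)]
      runs = np.array([[3.2, 410.0],
                       [2.6, 520.0],
                       [4.1, 640.0],
                       [2.9, 350.0]])

      # Grey relational normalisation: smaller-the-better for Ra, larger for MRR.
      ra, mrr = runs[:, 0], runs[:, 1]
      norm = np.column_stack([(ra.max() - ra) / (ra.max() - ra.min()),
                              (mrr - mrr.min()) / (mrr.max() - mrr.min())])

      # Grey relational coefficient with distinguishing coefficient zeta = 0.5.
      zeta = 0.5
      delta = 1.0 - norm                      # deviation from the ideal sequence
      coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

      grade = coeff.mean(axis=1)              # equal weights for both responses
      print("grey relational grades:", np.round(grade, 3))
      print("best run:", int(np.argmax(grade)) + 1)

    The run with the highest grade is taken as the best compromise across both responses, which is how the grade is used to rank the orthogonal-array runs.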

  8. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.

  9. Optimization of extrusion conditions for the production of instant grain amaranth-based porridge flour.

    PubMed

    Akande, Olamide A; Nakimbugwe, Dorothy; Mukisa, Ivan M

    2017-11-01

    Malnutrition is one of the foremost causes of death among children below 5 years in developing countries. Development of nutrient-dense food formulations using locally available crops has been proposed as a means to combat this menace. This study optimized the extrusion process for the production of a nutritious amaranth-based porridge flour. Least-cost formulations containing grain amaranth, groundnut, iron-rich beans, pumpkin, orange-fleshed sweet potato, carrot, and maize were developed and evaluated by a sensory panel (n = 30) for acceptability using the 9-point hedonic scale. The extrusion process for the most acceptable porridge flour was optimized by response surface methodology (RSM). Barrel temperature (130-170°C) and feed moisture content (14%-20%) were the independent variables, which significantly (p < .05) affected in vitro protein digestibility, vitamin A retention, total polyphenol and phytic acid content, and iron and zinc extractabilities. Optimization of the extrusion process improved the nutritional quality of the instant flour.

  10. The development of multi-objective optimization model for excess bagasse utilization: A case study for Thailand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buddadee, Bancha; Wirojanagud, Wanpen; Watts, Daniel J.

    In this paper, a multi-objective optimization model is proposed as a tool to assist in deciding on the proper utilization scheme for the excess bagasse produced in the sugarcane industry. Two major scenarios for excess bagasse utilization are considered in the optimization. The first scenario is the typical situation in which excess bagasse is used for onsite electricity production. In the second scenario, excess bagasse is processed for offsite ethanol production; the ethanol is then blended with 91-octane gasoline in a proportion of 10% to 90% by volume, and the mixture is used as an alternative fuel for gasoline vehicles in Thailand. The model proposed in this paper, called 'Environmental System Optimization', comprises a life cycle impact assessment of global warming potential (GWP) and the associated cost, followed by the multi-objective optimization, which facilitates finding the optimal proportion of excess bagasse processed in each scenario. Basic mathematical expressions for the GWP and cost of the entire process of excess bagasse utilization are taken into account in the model formulation and optimization. The outcome of this study is a methodology for decision-making concerning the utilization of the excess bagasse available in Thailand in view of GWP and economic effects. A demonstration example is presented to illustrate the advantage of the methodology, which may be used by policy makers. The methodology is successfully performed to satisfy both environmental and economic objectives over the whole life cycle of the system. It is shown in the demonstration example that the first scenario results in positive GWP while the second scenario results in negative GWP. The combination of these two scenarios results in positive or negative GWP depending on the weighting given to each objective. The results on the economics of all scenarios show satisfactory outcomes.
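
    The weighting behaviour noted at the end of this abstract can be reproduced with a toy weighted-sum model; the per-ton GWP and cost figures for the two scenarios are invented and serve only to show how the optimal split flips with the environmental weight.

      import numpy as np

      # Invented per-ton indicators for the two utilization scenarios:
      # scenario 1 = onsite electricity, scenario 2 = offsite ethanol (E10).
      gwp = np.array([0.12, -0.08])    # t CO2-eq per ton of excess bagasse
      cost = np.array([9.0, 14.0])     # USD per ton of excess bagasse

      def weighted_score(f, w_env):
          # f = fraction of bagasse sent to ethanol; objectives normalised
          # to [0, 1] before the weighted sum.
          alloc = np.array([1.0 - f, f])
          g, c = alloc @ gwp, alloc @ cost
          g_n = (g - gwp.min()) / (gwp.max() - gwp.min())
          c_n = (c - cost.min()) / (cost.max() - cost.min())
          return w_env * g_n + (1.0 - w_env) * c_n, g, c

      fs = np.linspace(0.0, 1.0, 101)
      for w_env in (0.3, 0.5, 0.7):
          best = fs[int(np.argmin([weighted_score(f, w_env)[0] for f in fs]))]
          _, g, c = weighted_score(best, w_env)
          print(f"w_env={w_env}: ethanol fraction={best:.2f}, "
                f"GWP={g:+.3f} t CO2-eq/t, cost={c:.2f} USD/t")

    With both objectives linear in the split, the preferred scenario flips from all-electricity to all-ethanol as the environmental weight crosses a break-even value, mirroring the sign change in GWP noted above.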

  11. Efficient use of animal manure on cropland--economic analysis.

    PubMed

    Araji, A A; Abdo, Z O; Joyce, P

    2001-09-01

    Manure contains all the macro- and microelements needed for plant growth; however, it represents one of the most underutilized resources in the US. The major problem with the use of manure on cropland is the direct effect of its composition on application cost. This cost is a function of the mineralization process of organic matter. The mineralization process is influenced by the properties of the manure, properties of the soil, moisture, and temperature. This study evaluates the simultaneous effect of these variables on the optimal use of manure on cropland. The results show that the properties of manure and soil significantly affect the mineralization of organic nitrogen and thus the optimal quantity of manure required to satisfy the nutrient requirement of crops in a given rotation system. Manure application costs range from a low of 18% of the cost of commercial fertilizer for chicken manure applied to one type of soil, to a high of 125% of the cost of commercial fertilizer for cow manure applied to another type of soil. The maximum distance to transfer manure to the field, that will equate its application cost to the cost of commercial fertilizer, ranges from a high of 35 km (22 miles) for chicken manure applied to one type of soil, to a low of 1 km (0.62 miles) for cow manure applied to another type of soil. For rotation system 2, manure application costs range from a low of 37% of the cost of commercial fertilizer for chicken manure applied to one type of soil, to a high of 136% of the cost of commercial fertilizer for cow manure applied to another type of soil. The maximum distance to transfer manure to the field, that will equate its cost to the cost of commercial fertilizer, ranges from a high of 20 km (12.5 miles) for chicken manure applied to one type of soil, to a low of 0 km (0 miles) for cow manure applied to another type of soil.

  12. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost and performance and then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.

  13. Applying simulation to optimize plastic molded optical parts

    NASA Astrophysics Data System (ADS)

    Jaworski, Matthew; Bakharev, Alexander; Costa, Franco; Friedl, Chris

    2012-10-01

    Optical injection molded parts are used in many different industries including electronics, consumer, medical and automotive due to their cost and performance advantages compared to alternative materials such as glass. The injection molding process, however, induces elastic (residual stress) and viscoelastic (flow orientation stress) deformation into the molded article which alters the material's refractive index to be anisotropic in different directions. Being able to predict and correct optical performance issues associated with birefringence early in the design phase is a huge competitive advantage. This paper reviews how to apply simulation analysis of the entire molding process to optimize manufacturability and part performance.

  14. Rational design and optimization of downstream processes of virus particles for biopharmaceutical applications: current advances.

    PubMed

    Vicente, Tiago; Mota, José P B; Peixoto, Cristina; Alves, Paula M; Carrondo, Manuel J T

    2011-01-01

    The advent of advanced therapies in the pharmaceutical industry has moved the spotlight onto virus-like particles and viral vectors produced in cell culture, which hold great promise for a myriad of clinical targets, including cancer prophylaxis and treatment. Even though a couple of cases have reached the clinic, these products have yet to overcome a number of biological and technological challenges before broad utilization. Concerning the manufacturing processes, there is significant research focusing on the optimization of current cell culture systems and, more recently, on developing scalable downstream processes to generate material for pre-clinical and clinical trials. We review the current options for downstream processing of these complex biopharmaceuticals and underline current advances in knowledge-based toolboxes proposed for the rational optimization of their processing. Rational tools developed to increase the still scarce knowledge of the purification processes of complex biologicals are discussed as an alternative to the empirical, black-box strategies classically used for process development. Innovative methodologies based on surface plasmon resonance, dynamic light scattering, scale-down high-throughput screening and mathematical modeling for supporting ion-exchange chromatography show great potential for more efficient and cost-effective process design, optimization and equipment prototyping. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Conceptual framework of Tenaga Nasional Berhad (TNB) cost of service (COS) model

    NASA Astrophysics Data System (ADS)

    Zainudin, WNRA; Ishak, WWM; Sulaiman, NA

    2017-09-01

    One of the Malaysia Electricity Supply Industry (MESI) objectives is to ensure Tenaga Nasional Berhad (TNB) economic viability based on fair economic electricity pricing. In meeting such an objective, a framework that investigates the effect of the cost of service (COS) on revenue is greatly needed. This paper presents a conceptual framework that illustrates the distribution of the COS among TNB’s various cost centres, which is subsequently redistributed in varying quantities among all of its customer categories. A deep understanding of these concepts ensures that an optimal allocation of COS elements between different sub-activities of the energy production process can be achieved. However, this optimal allocation needs to be achieved with respect to the imposed TNB revenue constraint. Therefore, the methodology for this conceptual approach is modelled in four steps. Firstly, the TNB revenue requirement is examined to ensure the conceptual framework addresses the requirement properly. Secondly, the revenue requirement is unbundled between three major cost centres or business units consisting of generation, transmission and distribution. Thirdly, the cost is classified based on demand, energy and customer-related charges. Finally, the classified costs are allocated to the different customer categories, i.e. household, commercial and industrial. In summary, this paper proposes a conceptual framework on the cost of the specific services that TNB currently charges its customers, which serves as a potential input into the process of developing revised electricity tariff rates. To that end, this COS study finds that the cost to serve a customer varies with the voltage level the customer is connected to, and with the timing and the magnitude of the customer's demand on the system. This COS conceptual framework could be integrated into a particular tariff structure and serve as a useful tool for TNB.

  16. System Evaluation and Life-Cycle Cost Analysis of a Commercial-Scale High-Temperature Electrolysis Hydrogen Production Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwin A. Harvego; James E. O'Brien; Michael G. McKellar

    2012-11-01

    Results of a system evaluation and lifecycle cost analysis are presented for a commercial-scale high-temperature electrolysis (HTE) central hydrogen production plant. The plant design relies on grid electricity to power the electrolysis process and system components, and industrial natural gas to provide process heat. The HYSYS process analysis software was used to evaluate the reference central plant design capable of producing 50,000 kg/day of hydrogen. The HYSYS software performs mass and energy balances across all components to allow optimization of the design using a detailed process flow sheet and realistic operating conditions specified by the analyst. The lifecycle cost analysis was performed using the H2A analysis methodology developed by the Department of Energy (DOE) Hydrogen Program. This methodology utilizes Microsoft Excel spreadsheet analysis tools that require detailed plant performance information (obtained from HYSYS), along with financial and cost information to calculate lifecycle costs. The results of the lifecycle analyses indicate that for a 10% internal rate of return, a large central commercial-scale hydrogen production plant can produce 50,000 kg/day of hydrogen at an average cost of $2.68/kg. When the cost of carbon sequestration is taken into account, the average cost of hydrogen production increases by $0.40/kg to $3.08/kg.

  17. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
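
    A flavor of the operator-graph fusion problem can be given with networkx, using Kernighan-Lin bisection as a stand-in for the paper's minimum-ratio-cut subroutine; the operator graph and traffic rates are hypothetical.

      import networkx as nx
      from networkx.algorithms.community import kernighan_lin_bisection

      # Operator graph: nodes are compile-time operators, weighted edges are
      # stream traffic rates between them (hypothetical tuples/sec).
      G = nx.Graph()
      G.add_weighted_edges_from([
          ("src", "parse", 90), ("parse", "filter", 60), ("filter", "join", 40),
          ("join", "agg", 35), ("agg", "sink", 30), ("parse", "agg", 5)])

      # One bisection step of a hierarchical partitioning: minimize the weight
      # of inter-PE traffic (the cut) while keeping the sides roughly balanced.
      pe1, pe2 = kernighan_lin_bisection(G, weight="weight", seed=7)
      cut = sum(d["weight"] for u, v, d in G.edges(data=True)
                if (u in pe1) != (v in pe1))
      print("PE 1:", sorted(pe1), "\nPE 2:", sorted(pe2), "\ncut traffic:", cut)

    Applying the bisection recursively to each side yields a hierarchy of candidate PEs, which is the overall shape of the fusion scheme described here, though the actual COLA cut criterion and fusion constraints differ.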

  18. Artificial Intelligence-Based Models for the Optimal and Sustainable Use of Groundwater in Coastal Aquifers

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Datta, Bithin

    2011-07-01

    Overexploitation of the coastal aquifers results in saltwater intrusion. Once saltwater intrusion occurs, it involves huge cost and long-term remediation measures to remediate these contaminated aquifers. Hence, it is important to have strategies for the sustainable use of coastal aquifers. This study develops a methodology for the optimal management of saltwater intrusion prone aquifers. A linked simulation-optimization-based management strategy is developed. The methodology uses genetic-programming-based models for simulating the aquifer processes, which is then linked to a multi-objective genetic algorithm to obtain optimal management strategies in terms of groundwater extraction from potential well locations in the aquifer.

  19. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for the synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and the corresponding matrices can become large for practical engineering problems, making the process impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
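
    Although the paper's full-information synthesis LMI is more involved, the scaling issue it raises can be felt with a simpler single-LMI problem, a Lyapunov feasibility problem, posed in CVXPY; the random system and its dimension are arbitrary choices.

      import numpy as np
      import cvxpy as cp

      n = 30                                   # problem dimension; grows quickly
      rng = np.random.default_rng(2)
      A = rng.normal(size=(n, n))
      A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)  # make A Hurwitz

      P = cp.Variable((n, n), symmetric=True)
      constraints = [P >> np.eye(n),                   # P positive definite
                     A.T @ P + P @ A << -np.eye(n)]    # Lyapunov LMI
      prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
      prob.solve(solver=cp.SCS)
      print("status:", prob.status, " trace(P) =", round(prob.value, 2))

    The semidefinite variable has n(n+1)/2 free entries, so doubling the state dimension roughly quadruples the variable count, which is the growth that pushed the paper's flexible-structure example onto a supercomputer.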

  20. Impulsive noise suppression in color images based on the geodesic digital paths

    NASA Astrophysics Data System (ADS)

    Smolka, Bogdan; Cyganek, Boguslaw

    2015-02-01

    In this paper, a novel filtering design based on the concept of exploring the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in a hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from the pixels of the window's boundary to its center, is determined. The cost of this optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs, and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. So, first the costs of optimal paths are used to build a smoothed image, and in the second step the minimum cost of the central pixel is utilized to construct the weights of a soft-switching scheme. The experiments, performed on a set of standard color images, revealed that the efficiency of the proposed algorithm is superior to state-of-the-art filtering techniques in terms of objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied to real-time image denoising and also to the enhancement of video streams.
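
    The core path-cost computation described above can be sketched as a Dijkstra search over a single filtering window; the 4-neighbour transition cost combining a unit spatial step with the color difference, and the 5x5 test patch, are illustrative choices rather than the paper's exact hybrid-space definition.

      import heapq
      import numpy as np

      def min_path_cost(window):
          # Minimum cost over digital paths from the window boundary to its
          # center. window: (k, k, 3) color patch, k odd. Transition cost
          # between 4-neighbours = 1 (spatial step) + color distance.
          k = window.shape[0]
          c = k // 2
          dist = np.full((k, k), np.inf)
          heap = []
          for i in range(k):
              for j in range(k):
                  if i in (0, k - 1) or j in (0, k - 1):  # boundary = sources
                      dist[i, j] = 0.0
                      heapq.heappush(heap, (0.0, i, j))
          while heap:
              d, i, j = heapq.heappop(heap)
              if d > dist[i, j]:
                  continue
              if (i, j) == (c, c):
                  return d
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ni, nj = i + di, j + dj
                  if 0 <= ni < k and 0 <= nj < k:
                      step = 1.0 + np.linalg.norm(window[ni, nj] - window[i, j])
                      if d + step < dist[ni, nj]:
                          dist[ni, nj] = d + step
                          heapq.heappush(heap, (d + step, ni, nj))
          return dist[c, c]

      patch = np.full((5, 5, 3), 100.0)
      patch[2, 2] = [255.0, 0.0, 0.0]              # injected impulse at center
      print("min path cost:", round(min_path_cost(patch), 1))

    On a uniform patch the minimum cost is just the two unit steps to the centre, while the injected impulse raises it by the large colour distance of the final transition, which is exactly the signal the soft-switching stage thresholds.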

  1. Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.

    PubMed

    Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing

    2007-01-01

    This paper describes an innovative systematic approach, namely hierarchy grey relational analysis, for the optimal selection of wastewater treatment alternatives, based on the application of the analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied to complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A(2)/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The result illustrated that the anaerobic single oxidation ditch was the optimal scheme and would deliver the maximum overall benefits for the wastewater treatment plant to be constructed.

  2. On Adding Structure to Unstructured Overlay Networks

    NASA Astrophysics Data System (ADS)

    Leitão, João; Carvalho, Nuno A.; Pereira, José; Oliveira, Rui; Rodrigues, Luís

    Unstructured peer-to-peer overlay networks are very resilient to churn and topology changes, while requiring little maintenance cost. Therefore, they are an infrastructure to build highly scalable large-scale services in dynamic networks. Typically, the overlay topology is defined by a peer sampling service that aims at maintaining, in each process, a random partial view of peers in the system. The resulting random unstructured topology is suboptimal when a specific performance metric is considered. On the other hand, structured approaches (for instance, a spanning tree) may optimize a given target performance metric but are highly fragile. In fact, the cost for maintaining structures with strong constraints may easily become prohibitive in highly dynamic networks. This chapter discusses different techniques that aim at combining the advantages of unstructured and structured networks. Namely we focus on two distinct approaches, one based on optimizing the overlay and another based on optimizing the gossip mechanism itself.

  3. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
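
    One way to picture the kind of model this article derives: a total-cost function whose recovery component shrinks along a Wright learning curve, minimized by a one-dimensional search. The cost structure and every parameter below are invented for illustration and are not the article's actual model.

      import numpy as np
      from scipy.optimize import minimize_scalar

      D = 1000.0                       # annual demand (units)
      r = 0.4                          # fraction of used products recovered for reuse
      K_order, K_setup = 50.0, 80.0    # ordering / recovery setup cost per cycle
      h = 2.0                          # holding cost per unit per year
      T1, b = 0.05, 0.15               # first-unit recovery time, learning exponent

      def annual_cost(Q):
          # Illustrative total cost: fixed costs per cycle plus holding, with the
          # per-unit recovery time shrinking along a Wright learning curve
          # (cumulative-average time ~ T1 * n^(-b) / (1 - b)).
          if Q <= 0.0:
              return np.inf
          n_rec = max(r * Q, 1.0)
          avg_time = T1 * n_rec ** (-b) / (1.0 - b)
          setup = (D / Q) * (K_order + K_setup)
          holding = h * (Q / 2.0) * (1.0 + r * avg_time)
          return setup + holding

      res = minimize_scalar(annual_cost, bounds=(1.0, D), method="bounded")
      print(f"optimal lot size ~ {res.x:.0f} units, annual cost ~ {res.fun:.1f}")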

  4. Optical enhancement of a printed organic tandem solar cell using diffractive nanostructures.

    PubMed

    Mayer, Jan A; Offermans, Ton; Chrapa, Marek; Pfannmöller, Martin; Bals, Sara; Ferrini, Rolando; Nisato, Giovanni

    2018-03-19

    Solution processable organic tandem solar cells offer a promising approach to achieve cost-effective, lightweight and flexible photovoltaics. In order to further enhance the efficiency of optimized organic tandem cells, diffractive light-management nanostructures were designed for an optimal redistribution of the light as function of both wavelength and propagation angles in both sub-cells. As the fabrication of these optical structures is compatible with roll-to-roll production techniques such as hot-embossing or UV NIL imprinting, they present an optimal cost-effective solution for printed photovoltaics. Tandem cells with power conversion efficiencies of 8-10% were fabricated in the ambient atmosphere by doctor blade coating, selected to approximate the conditions during roll-to-roll manufacturing. Application of the light management structure onto an 8.7% efficient encapsulated tandem cell boosted the conversion efficiency of the cell to 9.5%.

  5. Recovery of polyhydroxyalkanoates from municipal secondary wastewater sludge.

    PubMed

    Kumar, Manish; Ghosh, Pooja; Khosla, Khushboo; Thakur, Indu Shekhar

    2018-05-01

    In the current study, the feasibility of utilizing municipal secondary wastewater sludge for polyhydroxyalkanoate (PHA) extraction was improved by optimization of various parameters (temperature, duration and concentration of sludge solids). Optimized process parameters resulted in a PHA recovery of 0.605 g, significantly higher than under un-optimized conditions. The characterization of the PHA was carried out by GC-MS, FT-IR and NMR (1H and 13C) spectroscopy. The PHA profile was found to be dominated by mcl-PHA (58%) along with other diverse PHA. The results of the present study show the rich diversity of PHA extracted from a raw material that is readily available at minimal cost. In conclusion, exploring the potential of wastes for the production of bioplastics not only reduces the cost of bioplastic production, but also provides a sustainable means of waste management. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, remote laser welding of zinc-coated sheet metal parts in a lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized, and the vapour may be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in real manufacturing systems that takes into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization, considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system, by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space), based on the developed surrogate model, that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements, such as maximizing the dimple height while minimizing the dimple lower surface area.
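
    The surrogate-driven capability-space idea can be sketched as follows, with a simple polynomial standing in for the MARS surrogate: Monte Carlo samples of process variation around each nominal setting estimate the fallout rate, and only settings with low predicted fallout are kept. The specification limits, noise levels and the height model are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(3)

      def surrogate_height(p, d):
          # Polynomial stand-in for the MARS surrogate: dimple height (mm)
          # as a function of laser power p (W) and dimpling duration d (ms).
          return 0.1 + 0.004 * p + 0.02 * d - 0.000004 * p**2 - 0.001 * d**2

      spec_lo, spec_hi = 0.25, 0.45      # required height band (mm), hypothetical

      def fallout(p_nom, d_nom, n=20000):
          # Monte Carlo estimate of the fraction of parts violating the spec
          # under stochastic variation around the nominal settings.
          p = rng.normal(p_nom, 5.0, n)      # power sigma = 5 W
          d = rng.normal(d_nom, 0.4, n)      # duration sigma = 0.4 ms
          h = surrogate_height(p, d)
          return np.mean((h < spec_lo) | (h > spec_hi))

      # Scan nominal settings and report the most robust one; settings with
      # fallout below a chosen rate form the process capability space.
      best = min((fallout(p, d), p, d)
                 for p in range(40, 81, 5) for d in np.arange(4.0, 9.1, 0.5))
      print(f"lowest fallout {best[0]:.4%} at power={best[1]} W, "
            f"duration={best[2]:.1f} ms")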

  7. Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)

    USDA-ARS?s Scientific Manuscript database

    Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost effective alternative to traditional high performance computing approaches. Cloud-based modelling examples emerged into the more...

  8. A Systems Engineering Framework for Implementing a Security and Critical Patch Management Process in Diverse Environments (Academic Departments' Workstations)

    NASA Astrophysics Data System (ADS)

    Mohammadi, Hadi

    Use of the Patch Vulnerability Management (PVM) process should be seriously considered for any networked computing system. The PVM process prevents the operating system (OS) and software applications from being attacked due to security vulnerabilities, which lead to system failures and critical data leakage. The purpose of this research is to create and design a Security and Critical Patch Management Process (SCPMP) framework based on Systems Engineering (SE) principles. This framework will assist Information Technology Department Staff (ITDS) to reduce IT operating time and costs and mitigate the risk of security and vulnerability attacks. Further, this study evaluates implementation of the SCPMP in the networked computing systems of an academic environment in order to: 1. Meet patch management requirements by applying SE principles. 2. Reduce the cost of IT operations and PVM cycles. 3. Improve the current PVM methodologies to prevent networked computing systems from becoming the targets of security vulnerability attacks. 4. Embed a Maintenance Optimization Tool (MOT) in the proposed framework. The MOT allows IT managers to make the most practicable choice of methods for deploying and installing released patches and vulnerability remediation. In recent years, there has been a variety of frameworks for security practices in every networked computing system to protect computer workstations from becoming compromised or vulnerable to security attacks, which can expose important information and critical data. I have developed a new mechanism for implementing PVM for maximizing security-vulnerability maintenance, protecting OS and software packages, and minimizing SCPMP cost. To increase computing system security in any diverse environment, particularly in academia, one must apply SCPMP. I propose an optimal maintenance policy that will allow ITDS to measure and estimate the variation of PVM cycles based on their department's requirements. My results demonstrate that MOT optimizes the process of implementing SCPMP in academic workstations.

  9. Novel Elastomeric Closed Cell Foam - Nonwoven Fabric Composite Material (Phase III)

    DTIC Science & Technology

    2008-10-01

    increasing the polymer content of the foam. From laboratory studies, processing was found to improve by using different types of NBR rubber. The AF07 B... Foam Optimization (Task 1) Prior development of fire-retarded closed cell foam yielded attractive candidates for scale-up. Nitrile-butadiene rubber (NBR) and polyvinyl chloride (PVC) blends provided the most cost-effective solutions. Two types of formulas were chosen for optimization. The first

  10. Optimizing conservation practices in watersheds: Do community preferences matter?

    NASA Astrophysics Data System (ADS)

    Piemonti, Adriana D.; Babbar-Sebens, Meghna; Jane Luzar, E.

    2013-10-01

    This paper focuses on investigating (a) how landowner tenure and the attitudes of farming communities affect the preference for individual conservation practices in agricultural watersheds, (b) how the spatial distribution of landowner tenure affects the spatial optimization of conservation practices on a watershed scale, and (c) how the different attitudes and preferences of stakeholders can modify the effectiveness of alternatives obtained via classic optimization approaches that do not include the influence of existing social attitudes in a watershed during the search process. Results show that for Eagle Creek Watershed in central Indiana, USA, the most beneficial alternatives (i.e., highest benefits for minimum economic costs) arise in a scenario in which the watershed consists of landowners who operate as farmers on their own land. When a different land-tenure scenario was used for the watershed (e.g., share renters and cash renters), the optimized alternatives had similar nitrate and sediment reduction benefits, but at higher economic costs. Our experiments also demonstrated that social attitudes can lead to alteration of the optimized alternatives found via typical optimization approaches. For example, when certain practices were rejected by landowner operators whose attitudes toward practices were driven by economic profits, removal of these practices from the optimized alternatives led to a setback in nitrate reduction of 2-50%, in peak flow reduction of 11-98%, and in sediment reduction of 20-77%. In conclusion, this study reveals the potential loss in optimality of optimized alternatives when socioeconomic data on farmer preferences and land tenure are not incorporated within watershed optimization investigations.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, J.E.; Weathers, P.J.; McConville, F.X.

    Apple pomace (the pulp residue from pressing apple juice) is an abundant waste product and presents an expensive disposal problem. A typical (50,000 gal. juice/day) apple juice company in central Massachusetts produces 100 tons of pomace per day. Some of it is used as pig feed, but it is poor quality feed because of its low protein content. Most of the pomace is hauled away (at a cost of $4/ton) and landfilled (at a cost of $10/ton). If 5% (w/w) conversion of pomace to ethanol could be achieved, the need for this company to purchase No. 6 fuel oil (1000 gal/day) for cooking during processing would be eliminated. Our approach was to saccharify the pomace enzymatically, and then to carry out a yeast fermentation on the hydrolysate. We chose enzymatic hydrolysis instead of dilute acid hydrolysis in order to minimize pH control problems both in the fermentation phase and in the residue. The only chemical studies have concerned small subfractions of apple material: for example, cell walls have been analyzed, but they constitute only 1 to 2% of the fresh weight of the apple (about 15 to 30% of the pomace fraction). Therefore, our major problems were: (1) to optimize hydrolysis by enzyme mixtures, using weight loss and ultimate ethanol production as optimization criteria; (2) to optimize ethanol production from the hydrolysate by judicious choice of yeast strains and fermentation conditions; and (3) to achieve these optimizations consistent with minimum processing cost and energy input. We have obtained up to 5.1% (w/w) of ethanol without saccharification. We show here that hydrolysis with high levels of enzyme can enhance ethanol yield by up to 27%, to a maximum level of 6% (w/w); however, enzyme treatment may be cost-effective only at low levels, for improvement of residue compaction. 3 figures, 4 tables.

  12. Optimal design and operation of solid oxide fuel cell systems for small-scale stationary applications

    NASA Astrophysics Data System (ADS)

    Braun, Robert Joseph

    The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales; thereby, simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below that of near-future regulatory standards. However, before realization of the advantages of fuel cells can take place, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output, the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found using minimization of the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design in the stack periphery are found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5--8 years are feasible for 1--2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following and cogeneration or electric-only) are also presented.

  13. Supply chain optimization at an academic medical center.

    PubMed

    Labuhn, Jonathan; Almeter, Philip; McLaughlin, Christopher; Fields, Philip; Turner, Benjamin

    2017-08-01

    A successful supply chain optimization project that leveraged technology, engineering principles, and a technician workflow redesign in the setting of a growing health system is described. With continued rises in medication costs, medication inventory management is increasingly important. Proper management of central pharmacy inventory and floor-stock inventory in automated dispensing cabinets (ADCs) can be challenging. In an effort to improve control of inventory costs in the central pharmacy of a large academic medical center, the pharmacy department implemented a supply chain optimization project in collaboration with the medical center's inhouse team of experts on process improvement and industrial engineering. The project had 2 main components: (1) upgrading and reconfiguring carousel technology within an expanded central pharmacy footprint to generate accurate floor-stock inventory replenishment reports, which resulted in efficiencies within the medication-use system, and (2) implementing a technician workflow redesign and algorithm to right-size the ADC inventory, which decreased inventory stockouts (i.e., incidents of depletion of medication stock) and improved ADC user satisfaction. Through a multifaceted approach to inventory management, the number of stockouts per month was decreased and ADC inventory was optimized, resulting in a one-time inventory cost savings of $220,500. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  14. Energy neutral and low power wireless communications

    NASA Astrophysics Data System (ADS)

    Orhan, Oner

    Wireless sensor nodes are typically designed to have low cost and small size. These design objectives impose restrictions on the capacity and efficiency of the transceiver components and energy storage units that can be used. As a result, energy becomes a bottleneck and continuous operation of the sensor network requires frequent battery replacements, increasing the maintenance cost. Energy harvesting and energy efficient transceiver architectures are able to overcome these challenges by collecting energy from the environment and utilizing the energy in an intelligent manner. However, due to the nature of the ambient energy sources, the amount of useful energy that can be harvested is limited and unreliable. Consequently, optimal management of the harvested energy and design of low power transceivers pose new challenges for wireless network design and operation. The first part of this dissertation is on energy neutral wireless networking, where optimal transmission schemes under different system setups and objectives are investigated. First, throughput maximization for energy harvesting two-hop networks with decode-and-forward half-duplex relays is studied. For a system with two parallel relays, various combinations of the following four transmission modes are considered: Broadcast from the source, multi-access from the relays, and successive relaying phases I and II. Next, the energy cost of the processing circuitry as well as the transmission energy are taken into account for communication over a broadband fading channel powered by an energy harvesting transmitter. Under this setup, throughput maximization, energy maximization, and transmission completion time minimization problems are studied. Finally, source and channel coding for an energy-limited wireless sensor node is investigated under various energy constraints including energy harvesting, processing and sampling costs. For each objective, optimal transmission policies are formulated as the solutions of a convex optimization problem, and the properties of these optimal policies are identified. In the second part of this thesis, low power transceiver design is considered for millimeter wave communication systems. In particular, using an additive quantization noise model, the effect of analog-digital conversion (ADC) resolution and bandwidth on the achievable rate is investigated for a multi-antenna system under a receiver power constraint. Two receiver architectures, analog and digital combining, are compared in terms of performance.

  15. Material saving by means of CWR technology using optimization techniques

    NASA Astrophysics Data System (ADS)

    Pérez, Iñaki; Ambrosio, Cristina

    2017-10-01

    Material saving is currently a must for forging companies, as material costs account for up to 50% of part cost for steel parts and up to 90% for other materials such as titanium. For long products, cross wedge rolling (CWR) technology can be used to obtain forging preforms with a suitable distribution of material along the axis. However, defining the correct preform dimensions is not an easy task and may require an intensive trial-and-error campaign. To speed up preform definition, it is necessary to apply optimization techniques to Finite Element Models (FEM) able to reproduce the material behaviour when being rolled. Meta-model Assisted Evolution Strategies (MAES), which combine evolutionary algorithms with Kriging meta-models, are implemented in FORGE® software and substantially reduce optimization computation costs. The paper shows the application of these optimization techniques to the definition of the right preform for a shaft from a vehicle in the agricultural sector. First, the current forging process, in which the forging preform is obtained by means of an open die forging operation, is described. Then, the CWR preform optimization is carried out using the above-mentioned optimization techniques. The objective is to reduce the initial billet weight as much as possible, so the flash weight reduction due to the use of the proposed preform is calculated. Finally, a simulation of the CWR process for the defined preform is carried out to check that the most common failures in CWR (necking, spirals, etc.) do not appear in this case.
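
    To make the meta-model idea concrete, the sketch below shows a surrogate-assisted (1+λ)-style loop: a Gaussian-process (Kriging) model pre-screens offspring so that only the most promising candidates reach the expensive objective evaluation. The objective function, bounds, and hyperparameters are invented stand-ins for a FORGE® FEM run, not the authors' setup.

    ```python
    # Sketch of a meta-model-assisted evolution strategy in the spirit of MAES.
    # expensive_objective() is a hypothetical stand-in for an FEM billet-weight run.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def expensive_objective(x):
        """Stand-in for an FEM run returning billet weight plus defect penalty."""
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

    dim, lam, screened, sigma = 2, 20, 3, 0.2
    X = rng.uniform(0, 1, (8, dim))                     # initial design of experiments
    y = np.array([expensive_objective(x) for x in X])
    parent = X[np.argmin(y)]

    for gen in range(15):
        gp = GaussianProcessRegressor(kernel=RBF(0.3), normalize_y=True).fit(X, y)
        offspring = np.clip(parent + sigma * rng.standard_normal((lam, dim)), 0, 1)
        mu = gp.predict(offspring)                      # cheap surrogate ranking
        for x in offspring[np.argsort(mu)[:screened]]:  # true evals for the best few
            X = np.vstack([X, x])
            y = np.append(y, expensive_objective(x))
        parent = X[np.argmin(y)]

    print("best preform parameters:", parent, "objective:", y.min())
    ```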

  16. Processing power limits social group size: computational evidence for the cognitive costs of sociality

    PubMed Central

    Dávid-Barrett, T.; Dunbar, R. I. M.

    2013-01-01

    Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses. PMID:23804623

  17. The costs of avoiding environmental impacts from shale-gas surface infrastructure.

    PubMed

    Milt, Austin W; Gagnolet, Tamara D; Armsworth, Paul R

    2016-12-01

    Growing energy demand has increased the need to manage conflicts between energy production and the environment. As an example, shale-gas extraction requires substantial surface infrastructure, which fragments habitats, erodes soils, degrades freshwater systems, and displaces rare species. Strategic planning of shale-gas infrastructure can reduce trade-offs between economic and environmental objectives, but the specific nature of these trade-offs is not known. We estimated the cost of avoiding impacts from land-use change on forests, wetlands, rare species, and streams from shale-energy development within leaseholds. We created software for optimally siting shale-gas surface infrastructure to minimize its environmental impacts at reasonable construction cost. We visually assessed sites before infrastructure optimization to test whether such inspection could be used to predict whether impacts could be avoided at the site. On average, up to 38% of aggregate environmental impacts of infrastructure could be avoided for 20% greater development costs by spatially optimizing infrastructure. However, we found trade-offs between environmental impacts and costs among sites. In visual inspections, we often distinguished between sites that could be developed to avoid impacts at relatively low cost (29%) and those that could not (20%). Reductions in a metric of aggregate environmental impact could be largely attributed to potential displacement of rare species, sedimentation, and forest fragmentation. Planners and regulators can estimate and use heterogeneous trade-offs among development sites to create industry-wide improvements in environmental performance and do so at reasonable costs by, for example, leveraging low-cost avoidance of impacts at some sites to offset others. This could require substantial effort, but the results and software we provide can facilitate the process. © 2016 Society for Conservation Biology.

  18. BMP analysis system for watershed-based stormwater management.

    PubMed

    Zhen, Jenny; Shoemaker, Leslie; Riverson, John; Alvi, Khalid; Cheng, Mow-Soung

    2006-01-01

    Best Management Practices (BMPs) are measures for mitigating nonpoint source (NPS) pollution caused mainly by stormwater runoff. Established urban and newly developing areas must develop cost-effective means for restoring or minimizing impacts and planning future growth. Prince George's County in Maryland, USA, a fast-growing region in the Washington, DC metropolitan area, has developed a number of tools to support analysis and decision making for stormwater management planning and design at the watershed level. These tools support watershed analysis, innovative BMPs, and optimization. Application of these tools can help achieve environmental goals and lead to significant cost savings. The project includes software that utilizes GIS information and technology, integrates BMP process simulation models, and applies system optimization techniques for BMP planning and selection. The system uses ESRI ArcGIS as its platform and provides GIS-based visualization and support for developing networks comprising sequences of land uses, BMPs, and stream reaches. The system also provides interfaces for BMP placement, BMP attribute data input, and decision optimization management. The system includes a stand-alone BMP simulation and evaluation module, which complements both research and regulatory nonpoint source control assessment efforts and allows flexibility in examining various BMP design alternatives. Process-based simulation of BMPs provides a technique that is sensitive to local climate and rainfall patterns. The system incorporates a meta-heuristic optimization technique to find the most cost-effective BMP placement and implementation plan given a control target or a fixed cost. A case study is presented to demonstrate the application of the Prince George's County system. The case study involves a highly urbanized area in the Anacostia River (a tributary of the Potomac River) watershed southeast of Washington, DC. An innovative system of management practices is proposed to minimize runoff, improve water quality, and provide water reuse opportunities. Proposed management techniques include bioretention, green roofs, and rooftop runoff collection (rain barrel) systems. The modeling system was used to identify the most cost-effective combinations of management practices to help minimize the frequency and size of runoff events and the resulting combined sewer overflows to the Anacostia River.
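
    As an illustration of the meta-heuristic step, the sketch below selects a cost-effective subset of candidate BMPs with simulated annealing, one common meta-heuristic (the paper does not specify which technique the County system uses). Costs, removal rates, and the control target are invented.

    ```python
    # Toy cost-effective BMP subset selection via simulated annealing:
    # meet a pollutant-reduction target at minimum cost. Numbers are illustrative.
    import math, random

    random.seed(1)
    cost    = [12.0, 8.5, 20.0, 5.0, 15.5, 9.0, 30.0, 11.0]   # $k per BMP (assumed)
    removal = [ 6.0, 4.0, 11.0, 2.5,  8.0, 4.5, 17.0,  6.5]   # kg/yr removed (assumed)
    target  = 25.0                                            # required kg/yr reduction

    def objective(sel):
        c = sum(cost[i] for i in sel)
        r = sum(removal[i] for i in sel)
        return c + 1000.0 * max(0.0, target - r)  # heavy penalty if target is missed

    current = set(range(len(cost)))
    best, best_val = set(current), objective(current)
    T = 50.0
    for step in range(5000):
        cand = set(current)
        cand.symmetric_difference_update({random.randrange(len(cost))})  # flip one BMP
        delta = objective(cand) - objective(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = cand
            if objective(current) < best_val:
                best, best_val = set(current), objective(current)
        T *= 0.999                                 # geometric cooling schedule

    print("selected BMPs:", sorted(best), "cost:", best_val)
    ```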

  19. Fuel ethanol production: process design trends and integration opportunities.

    PubMed

    Cardona, Carlos A; Sánchez, Oscar J

    2007-09-01

    Current fuel ethanol research and development deals with process engineering trends for improving the biotechnological production of ethanol. In this work, the key role that process design plays during the development of cost-effective technologies is recognized through the analysis of major trends in process synthesis, modeling, simulation and optimization related to ethanol production. Main directions in the techno-economic evaluation of fuel ethanol processes are described, as well as some promising configurations. The most promising alternatives for offsetting ethanol production costs through the generation of valuable co-products are analyzed. Opportunities for the integration of fuel ethanol production processes and their implications are underlined. Main ways of process intensification through reaction-reaction, reaction-separation and separation-separation processes are analyzed in the case of bioethanol production. Some examples of energy integration during ethanol production are also highlighted. Finally, some concluding considerations on current and future research tendencies in fuel ethanol production regarding process design and integration are presented.

  20. Implementation of fuzzy logic to determining selling price of products in a local corporate chain store

    NASA Astrophysics Data System (ADS)

    Kristiana, S. P. D.

    2017-12-01

    The corporate chain store is a type of retail company that is growing rapidly in Indonesia. Competition between retail companies is very tight, so retailers must evaluate their performance continuously in order to survive. The selling price of products is one of the essential attributes that consumers use to evaluate a retailer's performance. This research aimed to determine the optimal selling price of products by considering three cost factors: the purchase price of the product from the supplier, holding costs, and transportation costs. A fuzzy logic approach was used for data processing, implemented in MATLAB; fuzzy logic was selected because it can accommodate complex interacting factors. The result is a model that determines the optimal selling price using the three cost factors as inputs. For validation and verification, the MAPE and the model's prediction ability were calculated for a sample of products, giving an average MAPE of 0.0525 and an average prediction ability of 94.75%. The conclusion is that the model can predict selling prices with up to 94.75% accuracy, so it can be used as a tool for corporate chain stores to determine the optimal selling price for their products.
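
    The structure of such a model can be sketched with a small Mamdani-style fuzzy system. The example below uses the open-source scikit-fuzzy package rather than MATLAB, and its membership ranges and rules are illustrative assumptions, not those fitted in the study.

    ```python
    # Toy fuzzy pricing model: three cost inputs -> one selling-price output.
    # Ranges, term names, and rules are invented for illustration.
    import numpy as np
    from skfuzzy import control as ctrl

    purchase  = ctrl.Antecedent(np.arange(0, 101, 1), 'purchase_price')
    holding   = ctrl.Antecedent(np.arange(0, 11, 1), 'holding_cost')
    transport = ctrl.Antecedent(np.arange(0, 11, 1), 'transport_cost')
    price     = ctrl.Consequent(np.arange(0, 151, 1), 'selling_price')

    for var in (purchase, holding, transport, price):
        var.automf(3, names=['low', 'medium', 'high'])  # triangular memberships

    rules = [
        ctrl.Rule(purchase['low'] & holding['low'] & transport['low'], price['low']),
        ctrl.Rule(purchase['medium'] | holding['high'], price['medium']),
        ctrl.Rule(purchase['high'] | transport['high'], price['high']),
    ]

    pricing = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
    pricing.input['purchase_price'] = 60
    pricing.input['holding_cost'] = 4
    pricing.input['transport_cost'] = 7
    pricing.compute()                        # Mamdani inference + centroid defuzzification
    print("suggested price:", pricing.output['selling_price'])
    ```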

  1. Achieving Conservation when Opportunity Costs Are High: Optimizing Reserve Design in Alberta's Oil Sands Region

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Farr, Dan; Adamowicz, W. L.; Boutin, Stan

    2011-01-01

    Recent studies have shown that conservation gains can be achieved when the spatial distributions of biological benefits and economic costs are incorporated in the conservation planning process. Using Alberta, Canada, as a case study we apply these techniques in the context of coarse-filter reserve design. Because targets for ecosystem representation and other coarse-filter design elements are difficult to define objectively we use a trade-off analysis to systematically explore the relationship between conservation targets and economic opportunity costs. We use the Marxan conservation planning software to generate reserve designs at each level of conservation target to ensure that our quantification of conservation and economic outcomes represents the optimal allocation of resources in each case. Opportunity cost is most affected by the ecological representation target and this relationship is nonlinear. Although petroleum resources are present throughout most of Alberta, and include highly valuable oil sands deposits, our analysis indicates that over 30% of public lands could be protected while maintaining access to more than 97% of the value of the region's resources. Our case study demonstrates that optimal resource allocation can be usefully employed to support strategic decision making in the context of land-use planning, even when conservation targets are not well defined. PMID:21858046

  2. Understanding product cost vs. performance through an in-depth system Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Sanson, Mark C.

    2017-08-01

    The manner in which an optical system is toleranced and compensated greatly affects the cost to build it. By having a detailed understanding of different tolerance and compensation methods, the end user can decide on the balance of cost and performance. A detailed, phased-approach Monte Carlo analysis can be used to demonstrate the tradeoffs between cost and performance. In complex high-performance optical systems, performance is fine-tuned by making adjustments to the optical systems after they are initially built. This process enables the best overall system performance without the need to fabricate components to stringent tolerance levels that can often be beyond a fabricator's manufacturing capabilities. A good simulation of as-built performance can interrogate different steps of the fabrication and build process. Such a simulation may aid the evaluation of whether the measured parameters are within the acceptable range of system performance at that stage of the build process. Finding errors before an optical system progresses further into the build process saves both time and money. Having the appropriate tolerances and compensation strategy tied to a specific performance level will optimize the overall product cost.
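
    At its core, a phased Monte Carlo of this kind samples as-built perturbations, applies the compensators available at each build stage, and reads off the yield against a performance budget. The toy sketch below illustrates the mechanics with invented sensitivities and limits; a real study would use raytrace-derived sensitivities per tolerance.

    ```python
    # Toy Monte Carlo tolerance/compensation study. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 100_000

    # Assumed 1-sigma fabrication/assembly tolerances (element tilt, decenter, spacing)
    tilt    = rng.normal(0, 0.05, n_trials)    # degrees
    decen   = rng.normal(0, 0.010, n_trials)   # mm
    spacing = rng.normal(0, 0.020, n_trials)   # mm

    # Hypothetical linear sensitivities to RMS wavefront error (waves per unit)
    wfe = np.sqrt((0.8 * tilt) ** 2 + (3.0 * decen) ** 2 + (1.5 * spacing) ** 2)

    # Compensation stage: an adjustable focus removes most of the spacing error
    wfe_comp = np.sqrt((0.8 * tilt) ** 2 + (3.0 * decen) ** 2
                       + (0.1 * 1.5 * spacing) ** 2)

    budget = 0.07  # waves RMS, assumed acceptance limit
    print(f"yield without compensation: {np.mean(wfe < budget):.1%}")
    print(f"yield with focus compensation: {np.mean(wfe_comp < budget):.1%}")
    ```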

  3. Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup

    PubMed Central

    Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.

    2010-01-01

    Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
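
    The core of such a runtime-aware submission strategy is classic scheduling: if long jobs are submitted first (longest-processing-time order), greedy assignment to workers keeps nodes evenly loaded and shortens the paid makespan. The sketch below illustrates this with invented job runtimes; it is not the Roundup/RSD runtime model itself.

    ```python
    # Toy comparison of arbitrary-order vs longest-first (LPT) job submission.
    import heapq, random

    random.seed(7)
    jobs = [(f"g{i}-vs-g{j}", random.uniform(0.5, 6.0))   # (name, predicted hours)
            for i in range(10) for j in range(i + 1, 10)]

    def makespan(job_list, n_workers):
        """Greedy list scheduling: each job goes to the least-loaded worker."""
        loads = [0.0] * n_workers
        heapq.heapify(loads)
        for _, t in job_list:
            heapq.heappush(loads, heapq.heappop(loads) + t)
        return max(loads)

    n = 8
    arbitrary = makespan(jobs, n)
    lpt = makespan(sorted(jobs, key=lambda j: -j[1]), n)
    print(f"makespan, arbitrary submission: {arbitrary:.1f} h")
    print(f"makespan, LPT submission:       {lpt:.1f} h")
    ```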

  4. Minimizing Uncertainties Impact in Decision Making with an Applicability Study for Economic Power Dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong; Wang, Shaobu; Fan, Rui

    This report summarizes the work performed under the LDRD project on the preliminary study of knowledge automation, with specific focus on the impact of the uncertainties of human decision making on the optimization of process operation. First, the statistics of signals from the brain-computer interface (BCI) are analyzed to characterize the uncertainties of human operators during the decision-making phase using electroencephalogram (EEG) signals. This is followed by a discussion of an architecture that reveals the equivalence between optimization and closed-loop feedback control design, where it has been shown that all optimization problems can be transformed into control design problems for closed-loop systems. This has led to a "closed loop" framework in which the structure of the decision making is subject to both process disturbances and controller uncertainties; the latter can represent the uncertainty or randomness that occurs during the human decision-making phase. As a result, a stochastic optimization problem has been formulated and a novel solution proposed using probability density function (PDF) shaping for both the cost function and the constraints, based on the stochastic distribution control concept. A sufficient condition has been derived that guarantees the convergence of the optimal solution, and both the total probabilistic solution and chance-constrained optimization, which are well studied in the optimal power flow (OPF) area, are discussed. A simple case study has been carried out for the economic dispatch of power for a grid system with distributed energy resources (DERs), and encouraging results show that significant savings in generation cost can be expected.

  5. Review of cost versus scale: water and wastewater treatment and reuse processes.

    PubMed

    Guo, Tianjiao; Englehardt, James; Wu, Tingting

    2014-01-01

    The US National Research Council recently recommended direct potable water reuse (DPR), or potable water reuse without environmental buffer, for consideration to address US water demand. However, conveyance of wastewater and water to and from centralized treatment plants consumes on average four times the energy of treatment in the USA, and centralized DPR would further require upgradient distribution of treated water. Therefore, information on the cost of unit treatment processes potentially useful for DPR versus system capacity was reviewed, converted to constant 2012 US dollars, and synthesized in this work. A logarithmic variant of the Williams Law cost function was found applicable over orders of magnitude of system capacity, for the subject processes: activated sludge, membrane bioreactor, coagulation/flocculation, reverse osmosis, ultrafiltration, peroxone and granular activated carbon. Results are demonstrated versus 10 DPR case studies. Because economies of scale found for capital equipment are counterbalanced by distribution/collection network costs, further study of the optimal scale of distributed DPR systems is suggested.
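
    The underlying economy-of-scale relationship is typically a power law, cost = a·(capacity)^b with b < 1, which is linear in log-log space and can be fit by least squares; the review's actual functional form is a logarithmic variant of this. A short sketch on invented data points:

    ```python
    # Fit a Williams-law power function, cost = a * capacity**b, in log-log space.
    # Capacities and costs below are invented for illustration.
    import numpy as np

    capacity = np.array([0.1, 0.5, 1, 5, 10, 50, 100])       # MGD (assumed)
    cost     = np.array([0.9, 3.2, 5.5, 19, 32, 110, 180])   # $M capital (assumed)

    b, log_a = np.polyfit(np.log(capacity), np.log(cost), 1)
    a = np.exp(log_a)
    print(f"cost ≈ {a:.2f} * capacity^{b:.2f}")  # b < 1 implies economies of scale
    ```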

  6. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    PubMed

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) have traditionally aimed at energy cost reduction with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy was successfully able to find the lowest energy use operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system for the reduction of energy use to a minimum incurred cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on the water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization.

  7. Construction Performance Optimization toward Green Building Premium Cost Based on Greenship Rating Tools Assessment with Value Engineering Method

    NASA Astrophysics Data System (ADS)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat

    2017-07-01

    The green building concept has become important in the current building life cycle to mitigate environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the green building rating while optimizing life cycle cost. The study therefore helps building stakeholders determine the building fixtures needed to achieve a green building certification target. Empirically, the paper collects data on green buildings in the Indonesian construction industry, including green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. Green building fixtures were then optimized with the value engineering method on the basis of building function and cost. The findings indicate that construction performance optimization improved green building achievement by increasing energy and water efficiency factors and managing life cycle cost effectively, especially through the chosen green building fixtures.

  8. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    NASA Astrophysics Data System (ADS)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The manufacturing environment has changed from the traditional single plant to multi-site supply chains in which multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
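
    The AHP selection step reduces to computing priority weights from the decision maker's pairwise comparison matrix (its principal eigenvector) and scoring each Pareto solution with them. A compact sketch with invented judgments and scores:

    ```python
    # AHP weighting of objectives, then ranking of Pareto-optimal plans.
    # The comparison matrix and scores are invented for illustration.
    import numpy as np

    # Pairwise comparisons (Saaty scale): cost vs quality vs satisfaction
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    w = w / w.sum()                              # priority weights from principal eigenvector

    # Normalized scores of three Pareto solutions on (cost, quality, satisfaction)
    scores = np.array([[0.9, 0.5, 0.4],
                       [0.6, 0.8, 0.6],
                       [0.3, 0.9, 0.9]])
    print("weights:", w.round(3))
    print("best Pareto solution:", int(np.argmax(scores @ w)))
    ```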

  9. The operations of quantum logic gates with pure and mixed initial states.

    PubMed

    Chen, Jun-Liang; Li, Che-Ming; Hwang, Chi-Chuan; Ho, Yi-Hui

    2011-04-07

    The implementations of quantum logic gates realized by the rovibrational states of a C(12)O(16) molecule in the X((1)Σ(+)) electronic ground state are investigated. Optimal laser fields are obtained by using modified multitarget optimal control theory (MTOCT), which combines the maxima of the cost functional and the fidelity for states and quantum processes. The projection operator technique, together with modified MTOCT, is used to obtain optimal laser fields. If the initial states of the quantum gate are pure states, the states at the target time closely approach the ideal target states. However, if the initial states are mixed states, the target states do not closely approach the ideal ones. The process fidelity is introduced to investigate the reliability of the quantum gate operation driven by the optimal laser field. We found that the quantum gates operate reliably whether the initial states are pure or mixed.

  10. Family System of Advanced Charring Ablators for Planetary Exploration Missions

    NASA Technical Reports Server (NTRS)

    Congdon, William M.; Curry, Donald M.

    2005-01-01

    Advanced Ablators Program Objectives: 1) Flight-ready (TRL-6) ablative heat shields for deep-space missions; 2) Diversity of selection from a family-system approach; 3) Minimum-weight systems with high reliability; 4) Optimized formulations and processing; 5) Fully characterized properties; and 6) Low-cost manufacturing. Definition and integration of candidate lightweight structures. Test and analysis database to support flight-vehicle engineering. Results from production scale-up studies and production-cost analyses.

  11. Cure Cycle Design Methodology for Fabricating Reactive Resin Matrix Fiber Reinforced Composites: A Protocol for Producing Void-free Quality Laminates

    NASA Technical Reports Server (NTRS)

    Hou, Tan-Hung

    2014-01-01

    For the fabrication of resin matrix fiber reinforced composite laminates, a workable cure cycle (i.e., temperature and pressure profiles as a function of processing time) is needed and is critical for achieving void-free laminate consolidation. Design of such a cure cycle is not trivial, especially when dealing with reactive matrix resins. An empirical "trial and error" approach has been used as common practice in the composite industry. Such an approach is not only costly, but also ineffective at establishing the optimal processing conditions for a specific resin/fiber composite system. In this report, a rational "processing science" based approach is established, and a universal cure cycle design protocol is proposed. Following this protocol, a workable and optimal cure cycle can be readily and rationally designed for most reactive resin systems in a cost effective way. This design protocol has been validated through experimental studies of several reactive polyimide composites for a wide spectrum of usage that has been documented in the previous publications.

  12. INTEGRATION OF COST MODELS AND PROCESS SIMULATION TOOLS FOR OPTIMUM COMPOSITE MANUFACTURING PROCESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pack, Seongchan; Wilson, Daniel; Aitharaju, Venkat

    Manufacturing cost of resin transfer molded composite parts is significantly influenced by the cycle time, which is strongly related to the time for both filling and curing of the resin in the mold. The filling time can be optimized through various injection strategies and by suitably reducing the length of the resin flow distance during injection. The curing time can be reduced by using faster-curing resins, but these require high-pressure injection equipment, which is capital intensive. Predictive manufacturing simulation tools recently developed for composite materials can provide various scenarios of processing conditions virtually, well in advance of manufacturing the parts. In the present study, we integrate cost models with process simulation tools to study the influence of parameters such as injection strategy, injection pressure, compression control to minimize high-pressure injection, resin curing rate, and demold time on the manufacturing cost as affected by the annual part volume. A representative automotive component was selected for the study, and the results are presented in this paper.

  13. Optimization of a low-cost defined medium for alcoholic fermentation--a case study for potential application in bioethanol production from industrial wastewaters.

    PubMed

    Comelli, Raúl N; Seluy, Lisandro G; Isla, Miguel A

    2016-01-25

    In bioethanol production processes, the media composition has an impact on product concentration, yields and the overall process economics. The main purpose of this research was to develop a low-cost mineral-based supplement for successful alcoholic fermentation, in an attempt to provide an economically feasible alternative for producing bioethanol from novel sources, for example, sugary industrial wastewaters. Statistical experimental designs were used to select the essential nutrients for yeast fermentation, and their optimal concentrations were estimated by Response Surface Methodology. Fermentations were performed on synthetic media inoculated with 2.0 g L(-1) of yeast, and the evolution of biomass, sugar, ethanol, CO2 and glycerol was monitored over time. A mix of salts [10.6 g L(-1) (NH4)2HPO4; 6.4 g L(-1) MgSO4·7H2O and 7.5 mg L(-1) ZnSO4·7H2O] was found to be optimal. It led to complete fermentation of the sugars in less than 12 h, with an average ethanol yield of 0.42 g ethanol/g sugar. A general C-balance indicated that no carbonaceous compounds other than biomass, ethanol, CO2 or glycerol were produced in significant amounts in the fermentation process. Similar results were obtained when soft drink wastewaters were tested to evaluate the potential industrial application of this supplement. The ethanol yields were very close to those obtained when yeast extract was used as the supplement, but the optimized mineral-based medium is six times cheaper, which favorably impacts the process economics and makes this supplement more attractive from an industrial viewpoint. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. [Reorganization of the interdisciplinary emergency unit at the university clinic of Göttingen].

    PubMed

    Blaschke, Sabine; Müller, Gerhard A; Bergmann, Günther

    2008-04-01

    The interdisciplinary emergency unit of the university clinic of Göttingen was successfully reorganized during the past two years. All emergencies except traumatologic, gynecologic and pediatric emergencies are treated within this functional unit, which is led by the center of internal medicine and organized in three shifts over a 24-hour period. Due to close interdisciplinary collaboration between different departments, patients receive optimal diagnostic and therapeutic treatment within a short period of time. To improve processes within the emergency department, a series of measures were taken, including the establishment of an intermediate care unit for unstable patients, the setting up of special diagnostic and therapeutic units for acute coronary syndrome and stroke, the implementation of standardized clinical pathways, the establishment of an electronic data processing network in close communication with all diagnostic entities, the introduction of a quality assurance system, and the reduction of medical costs. The reorganization measures led to a substantial optimization and acceleration of emergency proceedings and thus provide optimal patient care around the clock. In addition, medical costs could clearly be reduced at the interface between preclinical and clinical emergency medicine.

  15. Cure Cycle Optimization of Rapidly Cured Out-Of-Autoclave Composites.

    PubMed

    Dong, Anqi; Zhao, Yan; Zhao, Xinqing; Yu, Qiyong

    2018-03-13

    Out-of-autoclave prepreg typically needs a long cure cycle to guarantee good properties as a result of the low processing pressure applied. It is essential to reduce the manufacturing time to achieve real cost reduction and take full advantage of the out-of-autoclave process. The focus of this paper is to reduce the cure cycle time and production cost while maintaining high laminate quality. A rapidly cured out-of-autoclave resin and the corresponding prepreg were independently developed. To determine a suitable rapid cure procedure for the developed prepreg, the effects of heating rate, initial cure temperature, dwelling time, and post-cure time on the final laminate quality were evaluated and the factors were then optimized. As a result, a rapid cure procedure was determined. The results showed that resin infiltration could be completed by the end of the initial cure stage, with no obvious voids visible in the laminate at this point. The laminate achieved good internal quality using the optimized cure procedure. Mechanical testing showed that the laminates had a fiber volume fraction of 59-60%, a final glass transition temperature of 205 °C, and excellent mechanical strength, especially in flexural properties.

  17. Automated assembly of VECSEL components

    NASA Astrophysics Data System (ADS)

    Brecher, C.; Pyschny, N.; Haag, S.; Mueller, T.

    2013-02-01

    Thanks to the advantage of an external cavity, which enables the integration of additional elements into the cavity (e.g. for mode control, frequency conversion, wavelength tuning or passive mode-locking), VECSELs are a rapidly developing laser technology. Nevertheless, they often have to compete with direct (edge) emitting laser diodes, which can have significant cost advantages thanks to their rather simple structure and production processes. One way to compensate for the economic disadvantages of VECSELs is to optimize each component in terms of quality and cost and to apply more efficient (batch) production processes. In this context, the paper presents recent process developments for the automated assembly of VECSELs using a new type of desktop assembly station with an ultra-precise micromanipulator. The core concept is to create a dedicated process development environment from which implemented processes can be transferred fluently to production equipment. To date, two types of processes have been put into operation on the desktop assembly station: (1) passive alignment of the pump optics using a camera-based alignment process, in which the pump spot geometry and position on the semiconductor chip are analyzed and evaluated; (2) active alignment of the end mirror based on output power measurements and optimization algorithms. In addition to the core concept and the corresponding hardware and software developments, detailed results of both processes are presented, explaining the measurement setups as well as alignment strategies and results.
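
    The active-alignment step lends itself to a simple black-box optimization loop: measured output power is the objective and the micromanipulator axes are the variables. The sketch below uses a derivative-free Nelder-Mead search over a stand-in power model; the station's actual measurement interface and algorithms are not described here, so everything in the example is an assumption.

    ```python
    # Toy active alignment: maximize output power over mirror tip/tilt.
    import numpy as np
    from scipy.optimize import minimize

    def measured_power(angles):
        """Stand-in for a power-meter reading at mirror (tip, tilt) in mrad."""
        tip, tilt = angles
        return 50.0 * np.exp(-((tip - 0.12) ** 2 + (tilt + 0.05) ** 2) / 0.02)

    # Nelder-Mead needs no gradients, which suits noisy hardware-in-the-loop searches
    res = minimize(lambda a: -measured_power(a), x0=[0.0, 0.0],
                   method='Nelder-Mead', options={'xatol': 1e-3, 'fatol': 1e-2})
    print("aligned (tip, tilt):", res.x, "power:", -res.fun, "mW")
    ```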

  18. Optimizing biomass feedstock logistics for forest residue processing and transportation on a tree-shaped road network

    Treesearch

    Hee Han; Woodam Chung; Lucas Wells; Nathaniel Anderson

    2018-01-01

    An important task in forest residue recovery operations is to select the most cost-efficient feedstock logistics system for a given distribution of residue piles, road access, and available machinery. Notable considerations include inaccessibility of treatment units to large chip vans and frequent, long-distance mobilization of forestry equipment required to process...

  19. High Productivity DRIE solutions for 3D-SiP and MEMS Volume Manufacturing

    NASA Astrophysics Data System (ADS)

    Puech, M.; Thevenoud, JM; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, JM

    2006-04-01

    Emerging 3D-SiP technologies and high-volume MEMS applications require high-productivity, mass-production DRIE systems. The Alcatel DRIE product range has recently been optimised to reach the highest process and hardware production performances. A study based on the sub-micron high-aspect-ratio structures encountered in the most stringent 3D-SiP applications has been carried out. The optimization of the Bosch process parameters has resulted in ultra-high silicon etch rates with unrivalled uniformity and repeatability, leading to excellent process performance. In parallel, recent hardware and proprietary design optimizations, including the vacuum pumping lines, process chamber, wafer chucks, pressure control system and gas delivery, are discussed. These improvements have been monitored in a mass-production environment for a mobile phone application. Field data analysis shows a significant reduction in cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high-volume MEMS applications. Typical etched patterns include tapered trenches for CMOS imagers, through-silicon via holes for die stacking, well-controlled profile angles for 3D high-precision inertial sensors, and large exposed-area features for inkjet printer heads and silicon microphones.

  20. SU-E-T-270: Optimized Shielding Calculations for Medical Linear Accelerators (LINACs).

    PubMed

    Muhammad, W; Lee, S; Hussain, A

    2012-06-01

    The purpose of radiation shielding is to reduce the effective equivalent dose from a medical linear accelerator (LINAC) at a point outside the room to a level determined by individual state/international regulations. The study was performed to design LINAC rooms for newly planned radiotherapy centers. Optimized shielding calculations were performed, based on NCRP Report No. 151, for LINACs with a maximum photon energy of 20 MV. Following the ALARA principle, the maximum permissible dose limits were kept at 0.04 mSv/week and 0.002 mSv/week for controlled and uncontrolled areas, respectively. The planned LINAC room was compared with an already constructed (non-optimized) LINAC room to evaluate the shielding costs and the other facilities directly related to the room design. In the evaluation process it was noted that the non-optimized room size (i.e., 610 × 610 cm², or 20 feet × 20 feet) is not suitable for total body irradiation (TBI), even though the machine installed inside had TBI capability and the license had been acquired. With this in mind, the optimized LINAC room size was set at 762 × 762 cm². Although the area of the optimized room was greater than that of the non-optimized room (762 × 762 cm² instead of 610 × 610 cm²), the shielding cost for the optimized LINAC room was reduced by 15%. When the optimized shielding calculations were re-performed for the non-optimized room (i.e., keeping the room size, occupancy factors, workload, etc. the same), it was found that the shielding cost could be reduced by up to 41%. In conclusion, a non-optimized LINAC room not only puts an extra financial burden on the hospital but can also cause serious problems in providing health care to patients. © 2012 American Association of Physicists in Medicine.
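
    For context, the NCRP Report No. 151 barrier calculation that such optimizations repeat many times has a compact closed form: the required transmission is B = P·d²/(W·U·T), converted to a thickness through tenth-value layers. A hedged sketch with illustrative inputs (the TVL values and workload below are assumptions, not the paper's numbers):

    ```python
    # NCRP 151-style primary-barrier sizing. All input values are illustrative.
    import math

    P = 0.04e-3     # shielding design goal, Sv/week (controlled area)
    d = 6.1         # m, target-to-point-of-interest distance (assumed)
    W = 450.0       # Gy/week workload at 1 m (assumed)
    U = 0.25        # use factor for this barrier (assumed)
    T = 1.0         # occupancy factor, fully occupied (assumed)

    B = P * d ** 2 / (W * U * T)       # required barrier transmission
    n = -math.log10(B)                 # number of tenth-value layers (TVLs)
    TVL1, TVLe = 0.57, 0.51            # m concrete, high-energy photons (assumed)
    thickness = TVL1 + (n - 1) * TVLe  # first TVL plus equilibrium TVLs
    print(f"B = {B:.2e}, n = {n:.2f} TVLs, concrete thickness ≈ {thickness:.2f} m")
    ```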

  1. Vitre-Graf Coating on Mullite. Low Cost Silicon Array Project: Large Area Silicon Sheet Task

    NASA Technical Reports Server (NTRS)

    Rossi, R. C.

    1979-01-01

    The processing parameters of the Vitre-Graf coating for optimal performance and economy when applied to mullite and graphite substrates are presented. A minor effort was also devoted to slip-cast fused silica substrates.

  2. Designing cost effective water demand management programs in Australia.

    PubMed

    White, S B; Fane, S A

    2002-01-01

    This paper describes recent experience with integrated resource planning (IRP) and the application of least cost planning (LCP) to the evaluation of demand management strategies in urban water. Two Australian case studies, Sydney and Northern New South Wales (NSW), are used as illustrations. LCP can determine the most cost-effective means of providing water services or, alternatively, the cheapest forms of water conservation. LCP contrasts with the traditional evaluation approach, which looks only at means of increasing supply. Detailed investigation of water usage, known as end-use analysis, is required for LCP. End-use analysis allows both rigorous demand forecasting and the development and evaluation of conservation strategies. Strategies include education campaigns, increasing water use efficiency and promoting wastewater reuse or rainwater tanks. The optimal mix of conservation strategies and conventional capacity expansion is identified based on levelised unit cost. IRP uses LCP in an iterative process: evaluating and assessing options, investing in selected options, measuring the results, and then re-evaluating options. Key to this process is the design of cost-effective demand management programs. IRP, however, includes a range of parameters beyond least economic cost in the planning process and program designs, including uncertainty, benefit partitioning and implementation considerations.
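
    The levelised unit cost used to rank options is simply the present value of a program's costs divided by the present value of the water it saves or supplies. A minimal sketch with invented program numbers:

    ```python
    # Levelised unit cost of a water-conservation program ($ per kilolitre saved).
    # The discount rate and program figures are invented for illustration.
    def levelised_unit_cost(annual_costs, annual_savings_kl, rate=0.06):
        pv_cost = sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs))
        pv_water = sum(s / (1 + rate) ** t for t, s in enumerate(annual_savings_kl))
        return pv_cost / pv_water

    # Example: a washer rebate program over a 10-year life
    costs   = [500_000] + [20_000] * 9   # $: up-front rebates, then administration
    savings = [120_000] * 10             # kL saved per year
    print(f"{levelised_unit_cost(costs, savings):.2f} $/kL")
    ```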

  3. Design search and optimization in aerospace engineering.

    PubMed

    Keane, A J; Scanlan, J P

    2007-10-15

    In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state of the art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with a coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (kriging models). This approach both mitigates the significant run times involved in CFD computation and provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single-objective optimizers commonly available to design teams.
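
    The probability-of-improvement criterion at the heart of this approach has a closed form under the kriging model's Gaussian predictive distribution: PI = Φ((f_best − μ)/σ). A single-objective sketch with invented surrogate predictions (the paper's formulation is multi-objective):

    ```python
    # Probability of improvement under a Gaussian surrogate prediction.
    from math import erf, sqrt

    def probability_of_improvement(mu, sigma, f_best):
        """P[f(x) < f_best] when the surrogate predicts N(mu, sigma^2) at x."""
        z = (f_best - mu) / sigma
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

    # Candidate wing design as predicted by the surrogate (illustrative numbers)
    print(f"PI = {probability_of_improvement(mu=102.0, sigma=4.0, f_best=100.0):.3f}")
    ```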

  4. The Effects of Operational Parameters on a Mono-wire Cutting System: Efficiency in Marble Processing

    NASA Astrophysics Data System (ADS)

    Yilmazkaya, Emre; Ozcelik, Yilmaz

    2016-02-01

    Mono-wire block cutting machines, which cut with a diamond wire, can be used for squaring natural stone blocks and for slab cutting. The efficient use of these machines reduces operating costs by ensuring less diamond wire wear and longer wire life at high speeds. Given their high investment costs, using these machines efficiently reduces production costs and increases plant efficiency. Therefore, there is a need to investigate the cutting performance parameters of mono-wire cutting machines in terms of rock properties and operating parameters. This study investigates the effects of the wire rotational speed (peripheral speed) and wire descending speed (cutting speed), the operating parameters of a mono-wire cutting machine, on unit wear and unit energy, the performance parameters in mono-wire cutting. Using the results obtained, cuttability charts for each natural stone were created on the basis of unit wear and unit energy values, cutting optimizations were performed, and the relationships between selected physical and mechanical properties of the rocks and the optimum cutting parameters obtained from the optimization were investigated.

  5. An effective and comprehensive model for optimal rehabilitation of separate sanitary sewer systems.

    PubMed

    Diogo, António Freire; Barros, Luís Tiago; Santos, Joana; Temido, Jorge Santos

    2018-01-15

    In the field of rehabilitation of separate sanitary sewer systems, a large number of technical, environmental, and economic aspects are often relevant in the decision-making process, which may be modelled as a multi-objective optimization problem. Examples are those related to the operation and assessment of networks, the optimization of structural, hydraulic, sanitary, and environmental performance, rehabilitation programmes, and execution works. In particular, the cost of the investment, operation and maintenance needed to reduce or eliminate Infiltration from the underground water table and Inflows of storm water surface runoff (I/I) using rehabilitation techniques or related methods can be significantly lower than the cost of transporting and treating these flows throughout the lifespan of the systems or the period studied. This paper presents a comprehensive I/I cost-benefit approach for rehabilitation that explicitly considers all elements of the systems and shows how the approach is incorporated as an objective function in a general evolutionary multi-objective optimization model. It takes into account network performance and wastewater treatment costs, average values of several input variables, and rates that can reflect the adoption of different predictable or limiting scenarios. The approach can be used as a practical and fast tool to support decision-making in sewer network rehabilitation in any phase of a project. The fundamental aspects, modelling, implementation details and preliminary results of a two-objective optimization rehabilitation model using a genetic algorithm, with a second objective function related to the structural condition of the network and the service failure risk, are presented. The basic approach is applied to three real-world case studies of sanitary sewerage systems in Coimbra, and the results show the simplicity, suitability, effectiveness, and usefulness of the approach implemented and of the objective function proposed. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Identifying optimal agricultural countermeasure strategies for a hypothetical contamination scenario using the strategy model.

    PubMed

    Cox, G; Beresford, N A; Alvarez-Farizo, B; Oughton, D; Kis, Z; Eged, K; Thørring, H; Hunt, J; Wright, S; Barnett, C L; Gil, J M; Howard, B J; Crout, N M J

    2005-01-01

    A spatially implemented model designed to assist the identification of optimal countermeasure strategies for radioactively contaminated regions is described. Collective and individual ingestion doses for people within the affected area are estimated together with collective exported ingestion dose. A range of countermeasures are incorporated within the model, and environmental restrictions have been included as appropriate. The model evaluates the effectiveness of a given combination of countermeasures through a cost function which balances the benefit obtained through the reduction in dose with the cost of implementation. The optimal countermeasure strategy is the combination of individual countermeasures (and when and where they are implemented) which gives the lowest value of the cost function. The model outputs should not be considered as definitive solutions, rather as interactive inputs to the decision making process. As a demonstration the model has been applied to a hypothetical scenario in Cumbria (UK). This scenario considered a published nuclear power plant accident scenario with a total deposition of 1.7×10^14, 1.2×10^13, 2.8×10^10 and 5.3×10^9 Bq for Cs-137, Sr-90, Pu-239/240 and Am-241, respectively. The model predicts that if no remediation measures were implemented the resulting collective dose would be approximately 36 000 person-Sv (predominantly from Cs-137) over a 10-year period post-deposition. The optimal countermeasure strategy is predicted to avert approximately 33 000 person-Sv at a cost of approximately 160 million pounds. The optimal strategy comprises a mixture of ploughing, AFCF (ammonium-ferric hexacyano-ferrate) administration, potassium fertiliser application, clean feeding of livestock and food restrictions. The model recommends specific areas within the contaminated area and time periods where these measures should be implemented.

  7. Process development for the mass production of Ehrlichia ruminantium.

    PubMed

    Marcelino, Isabel; Sousa, Marcos F Q; Veríssimo, Célia; Cunha, António E; Carrondo, Manuel J T; Alves, Paula M

    2006-03-06

    This work describes the optimization of a cost-effective process for the production of an inactivated bacterial vaccine against heartwater and the first attempt to produce the causative agent of this disease, the rickettsia Ehrlichia ruminantium (ER), using stirred tanks. In vitro, it is possible to produce ER using cultures of ruminant endothelial cells. Herein, mass production of these cells was optimized for stirring conditions. The effect of inoculum size, microcarrier type, concentration of serum at inoculation time and agitation rate upon maximum cell concentration were evaluated. Several strategies for the scale-up of cell inoculum were also tested. Afterwards, using the optimized parameters for cell growth, ER production in stirred tanks was validated for two ER strains (Gardel and Welgevonden). Critical parameters related with the infection strategy such as serum concentration at infection time, multiplicity and time of infection, and medium refeed strategy were analyzed. The results indicate that it is possible to produce ER in stirred tank bioreactors, under serum-free culture conditions, reaching a 6.5-fold increase in ER production yields. The suitability of this process was validated up to a 2-l scale and a preliminary cost estimation has shown that the stirred tanks are the least expensive culture method. Overall, these results are crucial to define a scaleable and fully controlled process for the production of a heartwater vaccine and open "new avenues" for the production of vaccines against other ehrlichial species, with emerging impact in human and animal health.

  8. Multiple-objective optimization in precision laser cutting of different thermoplastics

    NASA Astrophysics Data System (ADS)

    Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.

    2015-04-01

    Thermoplastics are increasingly used in the biomedical, automotive and electronics industries due to their excellent physical and chemical properties. Because the process is localized and non-contact, laser cutting can produce precise cuts with a small heat-affected zone (HAZ). Precision laser cutting of various materials is important in high-volume manufacturing processes to minimize operational cost, reduce errors and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The optimized set of processing parameters is determined based on the highest relational grade and was found at low laser power (200 W), high cutting speed (0.4 m/min) and low compressed air pressure (2.5 bar). This result matches the objective set in the present study. Analysis of variance (ANOVA) is then carried out to ascertain the relative influence of the process parameters on the cutting characteristics. It was found that the laser power has the dominant effect on HAZ for all thermoplastics.
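
    Grey relational analysis itself is a short computation: normalize each response, form grey relational coefficients against the ideal sequence, and average them into a grade per parameter setting. The sketch below walks through it on an invented smaller-the-better response matrix (not the paper's data):

    ```python
    # Grey relational analysis for ranking parameter settings. Data are invented.
    import numpy as np

    # Rows: parameter settings; columns: (kerf width, HAZ) -- smaller is better
    resp = np.array([[0.30, 0.12],
                     [0.22, 0.15],
                     [0.26, 0.09],
                     [0.35, 0.20]])

    # Smaller-the-better normalization to [0, 1]
    norm = (resp.max(axis=0) - resp) / (resp.max(axis=0) - resp.min(axis=0))

    zeta = 0.5                            # distinguishing coefficient (common choice)
    delta = 1.0 - norm                    # deviation from the ideal sequence (= 1)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    grade = coeff.mean(axis=1)            # grey relational grade per setting
    print("grades:", grade.round(3), "-> best setting:", int(grade.argmax()))
    ```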

  9. Application of genetic algorithm in integrated setup planning and operation sequencing

    NASA Astrophysics Data System (ADS)

    Kafashi, Sajad; Shakeri, Mohsen

    2011-01-01

    Process planning is an essential component for linking design and manufacturing. Setup planning and operation sequencing are the two main tasks in process planning, and many studies have solved these two problems separately. Considering that the two functions are complementary, it is necessary to integrate them more tightly so that the performance of a manufacturing system can be improved economically and competitively. This paper presents a generative system and a genetic algorithm (GA) approach to process planning for a given part. The proposed approach and optimization methodology analyze the TAD (tool approach direction), the tolerance relations between features and the feature precedence relations to generate all possible setups and operations using a workshop resource database. Based on these technological constraints, the GA, which adopts a feature-based representation, optimizes the setup plan and the sequence of operations using cost indices. A case study shows that the developed system can generate satisfactory results, optimizing setup planning and operation sequencing simultaneously under feasible conditions.
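
    A minimal version of such a GA can be sketched directly: chromosomes are operation sequences, feasibility is checked against precedence relations, and the fitness proxy counts setup (tool approach direction) changes. All operations, TADs, and precedences below are invented for illustration, not the paper's cost indices.

    ```python
    # Toy GA for operation sequencing under precedence constraints.
    import random

    random.seed(3)
    tad = {'op1': '+Z', 'op2': '+Z', 'op3': '-X', 'op4': '-X', 'op5': '+Z'}
    prec = [('op1', 'op3'), ('op2', 'op5')]      # (a, b): a must come before b

    def feasible(seq):
        return all(seq.index(a) < seq.index(b) for a, b in prec)

    def cost(seq):
        """Proxy cost index: number of TAD switches (setup changes) in the sequence."""
        return sum(tad[a] != tad[b] for a, b in zip(seq, seq[1:]))

    def random_feasible():
        while True:
            seq = random.sample(list(tad), len(tad))
            if feasible(seq):
                return seq

    pop = [random_feasible() for _ in range(30)]
    for gen in range(100):
        pop.sort(key=cost)
        pop = pop[:10]                            # truncation selection
        while len(pop) < 30:                      # mutation: swap two operations
            child = pop[random.randrange(10)][:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
            if feasible(child):
                pop.append(child)

    best = min(pop, key=cost)
    print("best sequence:", best, "setup changes:", cost(best))
    ```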

  10. Extending rule-based methods to model molecular geometry and 3D model resolution.

    PubMed

    Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia

    2016-08-01

    Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.

  11. The latest developments and outlook for hydrogen liquefaction technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohlig, K.; Decker, L.

    2014-01-29

    Liquefied hydrogen is presently used mainly for space applications and the semiconductor industry. While clean energy applications, e.g. in the automotive sector, currently contribute only a small share of this demand, their demand may see a significant boost in the next years, with the need for large-scale liquefaction plants far exceeding current plant sizes. Hydrogen liquefaction in small-scale plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as the refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large-scale hydrogen liquefaction plants meeting the needs of clean energy applications with optimized energy efficiency and hence minimized operating costs. Compared to other studies in this field, this paper focuses on the application of new technology and innovative concepts which are either readily available or will require short qualification procedures, and which will hence allow implementation in plants in the near future.

  12. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    NASA Astrophysics Data System (ADS)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color-rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomena such as luminescence promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented in which a phosphor formulation and excitation source are first optimized for white light. The phosphor formulation, the excitation source and other LED components are then optically and mechanically modeled and ray traced. Finally, the performance of the design is analyzed. The blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. The YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength-dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  13. Solar High Temperature Water-Splitting Cycle with Quantum Boost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Robin; Davenport, Roger; Talbot, Jan

    A sulfur-family chemical cycle with ammonia as the working fluid and reagent was developed as a cost-effective and efficient hydrogen production technology based on a solar thermochemical water-splitting cycle. The sulfur ammonia (SA) cycle is a renewable and sustainable process that is unique in that it is an all-fluid cycle (i.e., with no solids handling). It uses a moderate-temperature solar plant with the solar receiver operating at 800°C. All electricity needed is generated internally from recovered heat. The plant would operate continuously with low-cost storage, and it is a promising solar thermochemical hydrogen production cycle for reaching the DOE cost goals. Two approaches were considered for the hydrogen production step of the SA cycle: (1) photocatalytic, and (2) electrolytic oxidation of ammonium sulfite to ammonium sulfate in aqueous solutions. Also, two sub-cycles were evaluated for the oxygen evolution side of the SA cycle: (1) zinc sulfate/zinc oxide, and (2) potassium sulfate/potassium pyrosulfate. All the process steps for each version of the SA cycle were proven in the laboratory or have been fully demonstrated by others, but further optimization is still possible and needed. The solar configuration evolved to a 50 MW(thermal) central receiver system with a north heliostat field, a cavity receiver, and NaCl molten salt storage to allow continuous operation. The H2A economic model was used to optimize and trade off SA cycle configurations. Parametric studies of chemical plant performance have indicated process efficiencies of ~20%. Although the current process efficiency is technically acceptable, increased efficiency is needed if the DOE cost targets are to be reached. There are two interrelated areas with potential for significant efficiency improvements: electrolysis cell voltage and excessive water vaporization. Methods to significantly reduce water evaporation are proposed for future activities. Electrolysis membranes that permit higher temperatures and lower voltages are attainable. The oxygen half-cycle will need further development and improvement.

  14. Method of Optimizing the Construction of Machining, Assembly and Control Devices

    NASA Astrophysics Data System (ADS)

    Iordache, D. M.; Costea, A.; Niţu, E. L.; Rizea, A. D.; Babă, A.

    2017-10-01

    Industry dynamics, driven by economic and social requirements, must generate more interest in technological optimization, capable of ensuring the steady development of advanced technical means to equip machining processes. For these reasons, the development of tools, devices, work equipment and control, as well as the modernization of machine tools, is a proven way to modernize production systems, although it requires considerable time and effort. This type of approach also underlies our theoretical, experimental and industrial applications of recent years, presented in this paper, which have as their main objectives the elaboration and use of mathematical models, new calculation methods, optimization algorithms, new processing and control methods, as well as structures for the construction and configuration of technological equipment with a high level of performance and substantially reduced costs.

  15. Development of an industrializable fermentation process for propionic acid production.

    PubMed

    Stowers, Chris C; Cox, Brad M; Rodriguez, Brandon A

    2014-05-01

    Propionic acid (PA) is a short-chain fatty acid with wide industrial application including uses in pharmaceuticals, herbicides, cosmetics, and food preservatives. As a three-carbon building block, PA also has potential as a precursor for high-volume commodity chemicals such as propylene. Currently, most PA is manufactured through petrochemical routes, which can be tied to increasing prices and volatility due to difficulty in demand forecasting and feedstock availability. Herein described are research advancements to develop an industrially feasible, renewable route to PA. Seventeen Propionibacterium strains were screened using glucose and sucrose as the carbon source to identify the best platform strain. Propionibacterium acidipropionici ATCC 4875 was selected as the platform strain and subsequent fermentation optimization studies were performed to maximize productivity and yield. Fermentation productivity was improved three-fold to exceed 2 g/l/h by densifying the inoculum source. Byproduct levels, particularly lactic and succinic acid, were reduced by optimizing fermentor headspace pressure and shear. Following achievement of commercially viable productivities, the lab-grade medium components were replaced with industrial counterparts to further reduce fermentation costs. A pure enzymatically treated corn mash (ECM) medium improved the apparent PA yield to 0.6 g/g (PA produced/glucose consumed), but it came at the cost of reduced productivity. Supplementation of ECM with cyanocobalamin restored productivity to near lab-grade media levels. The optimized ECM recipe achieved a productivity of 0.5 g/l/h with an apparent PA yield of 0.60 g/g corresponding to a media cost <1 USD/kg of PA. These improvements significantly narrow the gap between the fermentation and incumbent petrochemical processes, which is estimated to have a manufacturing cost of 0.82 USD/kg in 2017.
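
    A quick back-of-the-envelope check of the reported media-cost figure; the glucose price below is an assumed market value, not a number from the paper:

```python
# Illustrative arithmetic only; the glucose price is an assumption.
yield_pa = 0.60            # g PA per g glucose consumed (reported)
glucose_price = 0.40       # USD per kg glucose -- hypothetical market price

glucose_per_kg_pa = 1.0 / yield_pa                 # kg glucose per kg PA
sugar_cost = glucose_per_kg_pa * glucose_price     # USD per kg PA, sugar only
print(f"{glucose_per_kg_pa:.2f} kg glucose/kg PA -> {sugar_cost:.2f} USD/kg PA")
# ~1.67 kg glucose and ~0.67 USD/kg PA for the carbon source alone, consistent
# with a total media cost below 1 USD/kg PA when other components are cheap.
```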

  16. Development of a Residual Materials Collection Structure According to the Constructal Theory

    NASA Astrophysics Data System (ADS)

    Al-Maalouf, George

    Currently, more than 80% of waste management costs are attributed to the waste collection phase. One current solution for reducing these costs is the implementation of waste transfer stations, where at least three collection vehicles transfer their loads into a larger hauling truck. This cost reduction is based on the principle of economies of scale applied to the transportation sector. This solution improves the efficiency of the system; nevertheless, it does not optimize it. Recent studies show that the compactor trucks used in the collection phase generate significant economic losses, mainly due to frequent stops and transportation to transfer stations that are often far from the collection area. This study suggests restructuring the waste collection process by dividing it into two phases: the collection phase, and the transportation-to-transfer-station phase. To achieve this, a deterministic theory, the Constructal Theory (CT), is used. The results show that above a certain density threshold, the application of the CT minimizes energy losses in the system. In fact, collection is optimal if it is done using a combination of low-capacity vehicles that collect door to door and transfer their loads into high-capacity trucks; these trucks then transport their load to the transfer station. To minimize labor costs, this study proposes the use of Cybernetic Transport Systems (CTS) as automated collection vehicles to collect small amounts of waste. Finally, the proposed optimization method is part of a decentralized approach to the collection and treatment of waste, which allows the implementation of multi-process waste treatment facilities on a territorial scale. Keywords: Waste collection, Constructal Theory, Cybernetic Transportation Systems.

  17. Reciprocating and Screw Compressor semi-empirical models for establishing minimum energy performance standards

    NASA Astrophysics Data System (ADS)

    Javed, Hassan; Armstrong, Peter

    2015-08-01

    The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize the energy consumption and life-cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant impact on chiller performance. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The simple theoretical physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real-gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model together with a bi-cubic used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and the bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is computed from the hourly loads (from a typical building in the climate of interest) and the specific power at the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, together with chiller component cost data, are used to find optimal chiller designs and perform a life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
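
    The identification step (fitting an unknown volume index and loss coefficients by minimizing a residual norm) can be sketched as below; the efficiency model form and the data are stand-ins, not the paper's UOC model:

```python
import numpy as np
from scipy.optimize import least_squares

# Stand-in efficiency model: a "volume index" vi plus quadratic loss terms.
rng = np.random.default_rng(0)
pr = rng.uniform(1.5, 6.0, 60)        # pressure-ratio samples
n = rng.uniform(0.4, 1.0, 60)         # normalized-speed samples

def eta_model(params, pr, n):
    vi, a, b, c = params              # vi plays the role of the volume index
    return a - b * (pr - vi) ** 2 - c * (n - 0.8) ** 2

# Synthetic "measurements" generated from known parameters plus noise.
eta_meas = eta_model((3.0, 0.80, 0.02, 0.30), pr, n) + rng.normal(0, 0.005, 60)

# Curve fit by minimizing the residual norm, as in the identification step.
fit = least_squares(lambda p: eta_model(p, pr, n) - eta_meas,
                    x0=(2.0, 0.7, 0.05, 0.1))
print("identified parameters:", np.round(fit.x, 3))
```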

  18. Mobile telephony through LEO satellites: To OBP or not

    NASA Technical Reports Server (NTRS)

    Monte, Paul A.; Louie, Ming; Wiedeman, R.

    1991-01-01

    GLOBALSTAR is a satellite-based mobile communications system that is interoperable with the current and future Public Land Mobile Network (PLMN) and Public Switched Telephone Network (PSTN). The selection of the transponder type, bent-pipe, or onboard processing (OBP), for GLOBALSTAR is based on many criteria, each of which is essential to the commercial and technological feasibility of GLOBALSTAR. The trade study that was done to determine the pros and cons of a bent-pipe transponder or an onboard processing transponder is described. The design of GLOBALSTAR's telecommunications system is a multi-variable cost optimization between the cost and complexity of individual satellites, the number of satellites required to provide coverage to the service areas, the cost of launching the satellites into their selected orbits, the ground segment cost, user equipment cost, satellite voice channel capacity, and other issues. Emphasis is on the cost and complexity of the individual satellites, specifically the transponder type and the impact of the transponder type on satellite and ground segment cost, satellite power and weight, and satellite voice channel capacity.

  19. Mobile telephony through LEO satellites: To OBP or not

    NASA Astrophysics Data System (ADS)

    Monte, Paul A.; Louie, Ming; Wiedeman, R.

    1991-11-01

    GLOBALSTAR is a satellite-based mobile communications system that is interoperable with the current and future Public Land Mobile Network (PLMN) and Public Switched Telephone Network (PSTN). The selection of the transponder type, bent-pipe, or onboard processing (OBP), for GLOBALSTAR is based on many criteria, each of which is essential to the commercial and technological feasibility of GLOBALSTAR. The trade study that was done to determine the pros and cons of a bent-pipe transponder or an onboard processing transponder is described. The design of GLOBALSTAR's telecommunications system is a multi-variable cost optimization between the cost and complexity of individual satellites, the number of satellites required to provide coverage to the service areas, the cost of launching the satellites into their selected orbits, the ground segment cost, user equipment cost, satellite voice channel capacity, and other issues. Emphasis is on the cost and complexity of the individual satellites, specifically the transponder type and the impact of the transponder type on satellite and ground segment cost, satellite power and weight, and satellite voice channel capacity.

  20. Data on cost-optimal Nearly Zero Energy Buildings (NZEBs) across Europe.

    PubMed

    D'Agostino, Delia; Parker, Danny

    2018-04-01

    This data article accompanies the research paper "A model for the cost-optimal design of Nearly Zero Energy Buildings (NZEBs) in representative climates across Europe" [1]. The reported data deal with the design optimization of a residential building prototype located in representative European locations. The study focuses on cost-optimal choices and efficiency measures for new buildings depending on the climate. The data linked within this article relate to the modelled building energy consumption, renewable production, potential energy savings, and costs. The data allow visualization of energy consumption before and after the optimization, the selected efficiency measures, costs and renewable production. The reduction of electricity and natural gas consumption towards the NZEB target can be visualized together with incremental and cumulative costs for each location. Further data are available on building geometry, costs, CO2 emissions, envelope, materials, lighting, appliances and systems.

  1. Modeling of solar polygeneration plant

    NASA Astrophysics Data System (ADS)

    Leiva, Roberto; Escobar, Rodrigo; Cardemil, José

    2017-06-01

    In this work, an exergoeconomic analysis of the joint production of electricity, fresh water, cooling and process heat is carried out for a simulated concentrated solar power (CSP) plant based on parabolic trough collectors (PTC) with thermal energy storage (TES) and a backup energy system (BS), coupled with a multi-effect distillation (MED) module, an absorption refrigeration module, and a process heat module. The polygeneration plant is simulated in Crucero, northern Chile, with a yearly total DNI of 3,389 kWh/m2/year. The methodology includes designing and modeling the polygeneration plant, applying exergoeconomic evaluations, and calculating levelized costs. The solar polygeneration plant is simulated hourly, over a typical meteorological year, for different solar multiples and hours of storage. This study reveals that the total exergy cost rate of products (the sum of the exergy cost rates of electricity, water, cooling and process heat) is an alternative criterion for optimizing a solar polygeneration plant.

  2. Mode selective generation of guided waves by systematic optimization of the interfacial shear stress profile

    NASA Astrophysics Data System (ADS)

    Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice

    2015-01-01

    Piezoelectric transducers are commonly used in structural health monitoring systems to generate and measure ultrasonic guided waves (GWs) by applying interfacial shear and normal stresses to the host structure. In most cases, advanced signal processing techniques are required to perform damage detection, since a minimum of two dispersive modes propagate in the host structure. In this paper, a systematic approach for mode selection is proposed by optimizing the interfacial shear stress profile applied to the host structure, representing the first step of a global optimization of selective mode actuator design. This approach has the potential of reducing the complexity of signal processing tools, as the number of propagating modes could be reduced. Using the superposition principle, an analytical method is first developed for GW excitation by a finite number of uniform segments, each contributing a given elementary shear stress profile. Based on this, cost functions are defined in order to minimize the undesired modes and amplify the selected mode, and the optimization problem is solved with a parallel genetic algorithm optimization framework. Advantages of this method over more conventional transducer tuning approaches are that (1) the shear stress can be explicitly optimized to both excite one mode and suppress other undesired modes, (2) the size of the excitation area is not constrained and mode-selective excitation is still possible even if the excitation width is smaller than all excited wavelengths, and (3) the selectivity is increased and the bandwidth extended. The complexity of the optimal shear stress profile obtained is shown considering two cost functions with various optimal excitation widths and numbers of segments. Results illustrate that the desired mode (A0 or S0) can be excited dominantly over other modes up to a wave power ratio of 10^10 using an optimal shear stress profile.
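
    A minimal sketch of the formulation: per-segment shear amplitudes are chosen so the excitation couples into one wavenumber and not another. The wavenumbers, segment geometry and coupling integral are simplified stand-ins for the paper's guided-wave model, and SciPy's differential evolution substitutes for the parallel genetic algorithm:

```python
import numpy as np
from scipy.optimize import differential_evolution

k_want, k_reject = 2 * np.pi / 8.0, 2 * np.pi / 5.0   # rad/mm, assumed modes
edges = np.linspace(0.0, 20.0, 11)                     # 10 uniform segments (mm)

def coupling(s, k):
    # integral of s_j * sin(k x) over each uniform segment, summed
    x1, x2 = edges[:-1], edges[1:]
    return np.sum(s * (np.cos(k * x1) - np.cos(k * x2)) / k)

def cost(s):
    # reward power into the desired mode, penalize the undesired one
    return -coupling(s, k_want) ** 2 + 10.0 * coupling(s, k_reject) ** 2

result = differential_evolution(cost, bounds=[(-1.0, 1.0)] * 10, seed=1)
s = result.x
print("desired-mode power :", coupling(s, k_want) ** 2)
print("rejected-mode power:", coupling(s, k_reject) ** 2)
print("segment amplitudes :", np.round(s, 2))
```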

  3. Prediction of Microstructure in High-Strength Ductile Forging Parts

    NASA Astrophysics Data System (ADS)

    Urban, M.; Keul, C.; Back, A.; Bleck, W.; Hirt, G.

    2010-06-01

    Governmental, environmental and economic demands call for lighter, stiffer and at the same time cheaper products in the vehicle industry. Safety-relevant parts in particular have to be stiff and at the same time ductile. The strategy of this project was to improve the mechanical properties of forging steel alloys by employing a high-strength and ductile bainitic microstructure in the parts while maintaining cost-effective process chains to reach these goals for highly stressed forged parts. Therefore, a new steel alloy combined with an optimized process chain has been developed. To optimize the process chain with a minimum of expensive experiments, a numerical approach was developed to predict the microstructure of the steel alloy after the process chain, based on FEM simulations of the forging and cooling combined with deformation-time-temperature-transformation diagrams.

  4. Topography-based Flood Planning and Optimization Capability Development Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R.; Tasseff, Byron A.; Bent, Russell W.

    2014-02-26

    Globally, water-related disasters are among the most frequent and costly natural hazards. Flooding inflicts catastrophic damage on critical infrastructure and population, resulting in substantial economic and social costs. NISAC is developing LeveeSim, a suite of nonlinear and network optimization models, to predict optimal barrier placement to protect critical regions and infrastructure during flood events. LeveeSim currently includes a high-performance flood model to simulate overland flow, as well as a network optimization model to predict optimal barrier placement during a flood event. The LeveeSim suite models the effects of flooding in predefined regions. By manipulating a domain’s underlying topography, developers altered flood propagation to reduce detrimental effects in areas of interest. This numerical altering of a domain’s topography is analogous to building levees, placing sandbags, etc. To induce optimal changes in topography, NISAC used a novel application of an optimization algorithm to minimize flooding effects in regions of interest. To develop LeveeSim, NISAC constructed and coupled hydrodynamic and optimization algorithms. NISAC first adapted its existing flood modeling software to use massively parallel graphics processing units (GPUs), which allowed for the simulation of larger domains and longer timescales. NISAC then implemented a network optimization model to predict optimal barrier placement based on output from flood simulations. As proof of concept, NISAC developed five simple test scenarios, and optimized topographic solutions were compared with intuitive solutions. Finally, as an early validation example, barrier placement was optimized to protect an arbitrary region in a simulation of the historic Taum Sauk dam breach.
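
    A toy version of the core idea, with a made-up grid, water level and protected cell (LeveeSim itself couples GPU flood simulation with network optimization; this sketch only shows how raising topography changes what a flood fill can reach):

```python
import numpy as np
from collections import deque

elev = np.zeros((8, 8))
elev[3, :7] = 2.0                      # a ridge with one gap at column 7
water_level, source, protected = 1.0, (0, 0), (6, 6)

def flooded(e):
    """Cells reachable from the source through terrain below the water level."""
    seen, q = {source}, deque([source])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < 8 and 0 <= nc < 8 and (nr, nc) not in seen \
                    and e[nr, nc] < water_level:
                seen.add((nr, nc))
                q.append((nr, nc))
    return seen

def protects(cell):
    e = elev.copy()
    e[cell] = 3.0                      # raising topography = placing a barrier
    return protected not in flooded(e)

candidates = [(r, c) for r in range(8) for c in range(8)
              if (r, c) not in (source, protected)]
print("single-cell barriers that keep the protected cell dry:",
      [cell for cell in candidates if protects(cell)])
```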

  5. Development of optimized, graded-permeability axial groove heat pipes

    NASA Technical Reports Server (NTRS)

    Kapolnek, Michael R.; Holmes, H. Rolland

    1988-01-01

    Heat pipe performance can usually be improved by uniformly varying or grading wick permeability from end to end. A unique and cost effective method for grading the permeability of an axial groove heat pipe is described - selective chemical etching of the pipe casing. This method was developed and demonstrated on a proof-of-concept test article. The process improved the test article's performance by 50 percent. Further improvement is possible through the use of optimally etched grooves.

  6. Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.

    PubMed

    Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir

    2018-04-01

    In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.
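
    A minimal sketch of the cost-function step only: the mapping from therapy parameters to the population PDF below is a stand-in lognormal, whereas in the paper it comes from the Fokker-Planck observer:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

x = np.linspace(0.01, 10.0, 400)       # tumour-cell population (arbitrary units)
dx = x[1] - x[0]
desired = lognorm.pdf(x, s=0.3, scale=1.0)   # predefined desired PDF (assumed)

def predicted_pdf(theta):
    # Hypothetical map from therapy parameters to the population PDF; in the
    # paper this prediction is produced by a Fokker-Planck-based observer.
    s, scale = abs(theta[0]) + 0.05, abs(theta[1]) + 0.05
    return lognorm.pdf(x, s=s, scale=scale)

def cost(theta):
    # L2 distance between the desired PDF and the predicted PDF
    return np.sum((predicted_pdf(theta) - desired) ** 2) * dx

res = minimize(cost, x0=[1.0, 4.0], method="Nelder-Mead")
print("adjusted therapy parameters:", np.round(res.x, 3))
print("final cost:", round(res.fun, 6))
```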

  7. Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems

    NASA Astrophysics Data System (ADS)

    Kreuder, John J.

    Thermoelectrics are solid-state devices that can convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, industries need to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, focusing only on module design rather than an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB®. A modular, shell-based architecture is developed to carry out concept generation, system simulation and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's Method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints. During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented in the system engineering model is square and that all equations are independent. In addition, the third-party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton cycle. A basic package of components and interconnections is verified and provided as well.

  8. Continuous roll-to-roll growth of graphene films by chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Hesjedal, Thorsten

    2011-03-01

    Few-layer graphene is obtained in atmospheric chemical vapor deposition on polycrystalline copper in a roll-to-roll process. Raman and x-ray photoelectron spectroscopy were employed to confirm the few-layer nature of the graphene film, to map the inhomogeneities, and to study and optimize the growth process. This continuous growth process can be easily scaled up and enables the low-cost fabrication of graphene films for industrial applications.

  9. A Systematic Approach to Optimize Organizations Operating in Uncertain Environments: Design Methodology and Applications

    DTIC Science & Technology

    2002-09-01

    …a sub-goal can lead to achieving different goals (e.g., automation of on-line order processing may lead to both reducing the storage cost and reducing…). The remaining indexed fragments list example actions: …equipment; introduce new technology; find cheaper supplier; sign a contract; introduce cheaper materials; set up and automate on-line order processing; integrate… order processing with inventory and shipping; set up company’s website; freight consolidation; just-in-time versus pre-planned balance.

  10. Risk of infection due to medical interventions via central venous catheters or implantable venous access port systems at the middle port of a three-way cock: luer lock cap vs. luer access split septum system (Q-Syte).

    PubMed

    Pohl, Fabian; Hartmann, Werner; Holzmann, Thomas; Gensicke, Sandra; Kölbl, Oliver; Hautmann, Matthias G

    2014-01-25

    Many cancer patients receive a central venous catheter or port system prior to therapy to assure correct drug administration. Even with appropriate hygienic maintenance, interventions carry the risk of contaminating the middle port (C-port) of a three-way cock (TWC), a risk that increases with the number of medical interventions. Because of the complexity of the cleaning procedure with disconnection and reconnection of the standard luer lock cap (referred to as "intervention"), we compared luer lock caps with a "closed access system" consisting of a luer access split septum system with regard to process optimization (work simplification, process time), efficiency (costs) and hygiene (patient safety). To assess process optimization, the workflow of an intervention according to the usual practice and risks was depicted in a process diagram. To determine the actual process costs, we analyzed the use of material and time parameters per intervention and used the process parameters to program the process into a simulation run (n = 1000) to determine the process costs as well as their differences (ACTUAL vs. NOMINAL) within the framework of a discrete event simulation. Additionally, cultures were carried out at the TWC C-ports to evaluate possible contamination. With the closed access system, the mean working time of 5.5 minutes could be reduced to 2.97 minutes. The average process costs (labour and material costs per use) were 3.92 € for luer lock caps and 2.55 € for the closed access system. The hypothesis test (2-sample t-test, CI 0.95, p-value < 0.05) confirmed the significance of this result. In the 50 reviewed samples (TWCs), the contamination rate for the luer lock cap was 8% (4 out of 50 samples were positive); the contamination rate of the 50 samples with the closed access system was 0%. Possible hygienic risks (related to material, surroundings and staff handling) could be reduced by 65.38%. In the present research, the closed access system with a divided split septum was superior to conventional luer lock caps. The advantage of the closed access system lies in the simplified handling for staff, which results in a reduced risk of patient infection due to improved clinical hygiene.
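
    The cost comparison can be re-created in spirit as below; the labour rate, material costs and time spreads are assumptions chosen to reproduce the reported mean costs, not values taken from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1000                               # simulation runs, as in the study
labour_rate = 0.50                     # EUR per minute, assumed

# Assumed time distributions (minutes) around the reported mean durations.
t_luer = rng.normal(5.50, 0.8, n)
t_split = rng.normal(2.97, 0.5, n)
material_luer, material_split = 1.17, 1.07   # EUR per use, assumed

cost_luer = labour_rate * t_luer + material_luer
cost_split = labour_rate * t_split + material_split

t_stat, p = stats.ttest_ind(cost_luer, cost_split)
print(f"mean costs: {cost_luer.mean():.2f} vs {cost_split.mean():.2f} EUR")
print(f"two-sample t-test: t = {t_stat:.1f}, p = {p:.2e}")
```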

  11. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, ED; Fridge, Ernie; Hamaker, Joe

    1990-01-01

    Space Transportation Avionics hardware and software cost has traditionally been estimated in Phases A and B using cost techniques which predict cost as a function of various cost-predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of cost estimation and benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analysis process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost-descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer-aided design and manufacturing, self-checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed, as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
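
    A minimal sketch of the kind of weight-based cost estimating relationship the passage describes, cost = a * weight^b fitted in log space; the data points are invented:

```python
import numpy as np

weight = np.array([50.0, 120.0, 300.0, 700.0, 1500.0])   # kg, invented
cost = np.array([2.1, 4.0, 7.9, 15.0, 26.5])             # $M, invented

# Fit log(cost) = b * log(weight) + log(a) with a linear least-squares fit.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost ~= {a:.3f} * weight^{b:.3f}")
print(f"predicted cost at 1000 kg: {a * 1000.0 ** b:.1f} $M")
```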

  12. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  13. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  14. Biomechanical simulation of vocal fold dynamics in adults based on laryngeal high-speed videoendoscopy

    PubMed Central

    Gómez, Pablo; Patel, Rita R.; Alexiou, Christoph; Bohr, Christopher; Schützenberger, Anne

    2017-01-01

    Motivation Human voice is generated in the larynx by the two oscillating vocal folds. Owing to the limited space and accessibility of the larynx, detailed endoscopic investigation of the actual phonatory process is challenging. Hence, the biomechanics of the human phonatory process are still not fully understood. Therefore, we adapt a mathematical model of the vocal folds to recorded vocal fold oscillations to quantify gender- and age-related differences expressed by the computed biomechanical model parameters. Methods The vocal fold dynamics are visualized by laryngeal high-speed videoendoscopy (4000 fps). A total of 33 healthy young subjects (16 females, 17 males) and 11 elderly subjects (5 females, 6 males) were recorded. A numerical two-mass model is adapted to the recorded vocal fold oscillations by varying model masses, stiffness and subglottal pressure. For adapting the model towards the recorded vocal fold dynamics, three different optimization algorithms (Nelder–Mead, Particle Swarm Optimization and Simulated Bee Colony) in combination with three cost functions were considered for applicability. Gender differences and age-related kinematic differences reflected by the model parameters were analyzed. Results and conclusion The biomechanical model in combination with numerical optimization techniques allowed phonatory behavior to be simulated and the laryngeal parameters involved to be quantified. All three optimization algorithms showed promising results; however, only one cost function seems to be suitable for this optimization task. The obtained model parameters reflect the phonatory biomechanics of men and women well and show quantitative age- and gender-specific differences. The model parameters for younger females and males showed lower subglottal pressures, lower stiffness and higher masses than the corresponding elderly groups. Females exhibited higher subglottal pressures, smaller oscillation masses and larger stiffness than the corresponding similarly aged male groups. Optimizing numerical models towards vocal fold oscillations is useful for identifying the underlying laryngeal components controlling the phonatory process. PMID:29121085
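
    A miniature analogue of the model-adaptation step: the real work fits a two-mass vocal fold model to high-speed recordings, while here a single damped oscillator stands in, fitted with Nelder-Mead (one of the three algorithms the paper compares) by minimizing a sum-of-squares cost:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 0.02, 80)         # 20 ms sampled at 4000 fps
rng = np.random.default_rng(3)

# Synthetic "recorded" displacement: a 150 Hz damped oscillation plus noise.
omega0, zeta0 = 2 * np.pi * 150.0, 0.05
recorded = np.exp(-zeta0 * omega0 * t) * np.cos(omega0 * t)
recorded += rng.normal(0.0, 0.02, t.size)

def model(params):
    omega, zeta = params               # stiffness- and damping-like parameters
    return np.exp(-zeta * omega * t) * np.cos(omega * t)

cost = lambda p: np.sum((model(p) - recorded) ** 2)
res = minimize(cost, x0=[2 * np.pi * 120.0, 0.10], method="Nelder-Mead")
print(f"fitted frequency: {res.x[0] / (2 * np.pi):.1f} Hz, zeta: {res.x[1]:.3f}")
```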

  15. Cost considerations for long-term ecological monitoring

    USGS Publications Warehouse

    Caughlan, L.; Oakley, K.L.

    2001-01-01

    For an ecological monitoring program to be successful over the long-term, the perceived benefits of the information must justify the cost. Financial limitations will always restrict the scope of a monitoring program, hence the program’s focus must be carefully prioritized. Clearly identifying the costs and benefits of a program will assist in this prioritization process, but this is easier said than done. Frequently, the true costs of monitoring are not recognized and are, therefore, underestimated. Benefits are rarely evaluated, because they are difficult to quantify. The intent of this review is to assist the designers and managers of long-term ecological monitoring programs by providing a general framework for building and operating a cost-effective program. Previous considerations of monitoring costs have focused on sampling design optimization. We present cost considerations of monitoring in a broader context. We explore monitoring costs, including both budgetary costs, what dollars are spent on, and economic costs, which include opportunity costs. Often, the largest portion of a monitoring program budget is spent on data collection, and other, critical aspects of the program, such as scientific oversight, training, data management, quality assurance, and reporting, are neglected. Recognizing and budgeting for all program costs is therefore a key factor in a program’s longevity. The close relationship between statistical issues and cost is discussed, highlighting the importance of sampling design, replication and power, and comparing the costs of alternative designs through pilot studies and simulation modeling. A monitoring program development process that includes explicit checkpoints for considering costs is presented. The first checkpoint occurs during the setting of objectives and during sampling design optimization. The last checkpoint occurs once the basic shape of the program is known, and the costs and benefits, or alternatively the cost-effectiveness, of each program element can be evaluated. Moving into the implementation phase without careful evaluation of costs and benefits is risky because if costs are later found to exceed benefits, the program will fail. The costs of development, which can be quite high, will have been largely wasted. Realistic expectations of costs and benefits will help ensure that monitoring programs survive the early, turbulent stages of development and the challenges posed by fluctuating budgets during implementation.

  16. Build infrastructure in publishing scientific journals to benefit medical scientists

    PubMed Central

    Dai, Ni; Xu, Dingyao; Zhong, Xiyao; Li, Li; Ling, Qibo

    2014-01-01

    There is an urgent need for medical journals to optimize their publishing processes and strategies to satisfy the huge need of medical scientists to publish their articles, and thereby obtain better prestige and impact in the scientific and research community. These strategies include optimizing the peer-review process, actively utilizing open-access publishing models, finding ways to save costs and generate revenue, dealing smartly with research fraud or misconduct, maintaining a sound relationship with pharmaceutical companies, and managing to provide relevant and useful information for clinical practitioners and researchers. Scientists, publishers, societies and organizations need to work together to publish internationally renowned medical journals. PMID:24653634

  17. Service Bundle Recommendation for Person-Centered Care Planning in Cities.

    PubMed

    Kotoulas, Spyros; Daly, Elizabeth; Tommasi, Pierpaolo; Kishimoto, Akihiro; Lopez, Vanessa; Stephenson, Martin; Botea, Adi; Sbodio, Marco; Marinescu, Radu; Rooney, Ronan

    2016-01-01

    Providing appropriate support for the most vulnerable individuals carries enormous societal significance and economic burden. Yet finding the right balance between costs, estimated effectiveness and the experience of the care recipient is a daunting task that requires considering a vast amount of information. We present a system that helps care teams choose the optimal combination of providers for a set of services. We draw from techniques in Open Data processing, semantic processing, faceted exploration, visual analytics, transportation analytics and multi-objective optimization. We present an implementation of the system using data from New York City and illustrate the feasibility of these technologies for guiding care workers in care planning.

  18. Build infrastructure in publishing scientific journals to benefit medical scientists.

    PubMed

    Dai, Ni; Xu, Dingyao; Zhong, Xiyao; Li, Li; Ling, Qibo; Bu, Zhaode

    2014-02-01

    There is an urgent need for medical journals to optimize their publishing processes and strategies to satisfy the huge need of medical scientists to publish their articles, and thereby obtain better prestige and impact in the scientific and research community. These strategies include optimizing the peer-review process, actively utilizing open-access publishing models, finding ways to save costs and generate revenue, dealing smartly with research fraud or misconduct, maintaining a sound relationship with pharmaceutical companies, and managing to provide relevant and useful information for clinical practitioners and researchers. Scientists, publishers, societies and organizations need to work together to publish internationally renowned medical journals.

  19. Design Optimization of Gas Generator Hybrid Propulsion Boosters

    NASA Technical Reports Server (NTRS)

    Weldon, Vincent; Phillips, Dwight; Fink, Larry

    1990-01-01

    A methodology used in support of a study for NASA/MSFC to optimize the design of a gas generator hybrid propulsion booster for uprating the National Space Transportation System (NSTS) is presented. The objective was to compare alternative configurations for this booster approach, optimizing each candidate concept on different bases, in order to develop data for a trade table on which a final decision was based. The methodology is capable of processing a large number of independent and dependent variables, adjusting the overall subsystem characteristics to arrive at a best-compromise integrated design meeting various specific optimization criteria subject to selected constraints. For each system considered, a detailed weight statement was generated, along with preliminary cost and reliability estimates.

  20. Impact of Airspace Charges on Transatlantic Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Ng, Hok K.; Linke, Florian; Chen, Neil Y.

    2015-01-01

    Aircraft flying over the airspace of different countries are subject to over-flight charges. These charges vary from country to country. Airspace charges, while necessary to support communication, navigation and surveillance services, may lead to aircraft flying routes longer than wind-optimal routes and produce additional carbon dioxide and other gaseous emissions. This paper develops an optimal route between city pairs by modifying the cost function to include an airspace cost whenever an aircraft flies through a controlled airspace without landing in or departing from that airspace. It is assumed that the aircraft flies the trajectory at a constant cruise altitude and constant speed. The computationally efficient optimal trajectory is derived by solving a non-linear optimal control problem. The operational strategies investigated in this study for minimizing aircraft fuel burn and emissions include flying fuel-optimal routes and flying cost-optimal routes that may completely or partially reduce airspace charges en route. The results in this paper use traffic data for transatlantic flights during July 2012. The mean daily savings in over-flight charges, fuel cost and total operating cost during the period are 17.6 percent, 1.6 percent, and 2.4 percent, respectively, along the cost-optimal trajectories. The transatlantic flights can potentially save $600,000 in fuel cost plus $360,000 in over-flight charges daily by flying the cost-optimal trajectories. In addition, aircraft emissions can potentially be reduced by 2,070 metric tons each day. The airport pairs and airspace regions with the highest potential impacts due to airspace charges are identified for possible reduction of fuel burn and aircraft emissions for the transatlantic flights. The results in the paper show that the effect of variation in fuel price on the optimal routes is to reduce the difference between wind-optimal and cost-optimal routes as the fuel price increases. The additional fuel consumption is quantified using the 30 percent variation in fuel prices between March 2014 and March 2015.
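
    A toy illustration of the underlying trade-off, with all numbers invented: a detour around a charging airspace pays off whenever the extra fuel cost is smaller than the avoided over-flight charges:

```python
fuel_price = 0.80                      # USD per kg of fuel, assumed

routes = {
    # name: (fuel burn in kg, over-flight charges in USD) -- both invented
    "wind-optimal": (52_000, 4_800),
    "cost-optimal": (52_600, 2_900),
}
for name, (fuel_kg, charges) in routes.items():
    total = fuel_kg * fuel_price + charges
    print(f"{name:>12}: fuel {fuel_kg * fuel_price:9,.0f} USD"
          f" + charges {charges:5,} USD = {total:9,.0f} USD")
# Here the detour burns 600 kg more fuel (480 USD) but avoids 1,900 USD in
# charges, so the cost-optimal route wins despite being longer.
```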

  1. WASTE-TO-RESOURCE: NOVEL MEMBRANE SYSTEMS FOR SAFE AND SUSTAINABLE BRINE MANAGEMENT

    EPA Science Inventory

    Decentralized waste-to-reuse systems will be optimized to maximize resource and energy recovery and minimize chemicals and energy use. This research will enhance fundamental knowledge on simultaneous heat and mass transport through membranes, lower process costs, and furthe...

  2. Numerical simulation of multi-rifled tube drawing - finding proper feedstock dimensions and tool geometry

    NASA Astrophysics Data System (ADS)

    Bella, P.; Buček, P.; Ridzoň, M.; Mojžiš, M.; Parilák, Ľ.

    2017-02-01

    Production of multi-rifled seamless steel tubes is quite a new technology at Železiarne Podbrezová; therefore, many technological questions emerge (process technology, input feedstock dimensions, material flow during drawing, etc.). Pilot experiments to fine-tune the process cost a lot of time and energy, so numerical simulation is an alternative route to optimal production parameters, reducing the number of experiments needed and lowering the overall cost of development. However, for the numerical results to be considered relevant, it is necessary to verify them against actual plant trials. Finding the optimal input feedstock dimensions for drawing a multi-rifled tube with dimensions Ø28.6 mm × 6.3 mm is the main topic of this paper. As a secondary task, the effective position of the plug-die couple was determined via numerical simulation. Comparing the calculated results with actual numbers from plant trials, good agreement was observed.

  3. Assessment of the Charging Policy in Energy Efficiency of the Enterprise

    NASA Astrophysics Data System (ADS)

    Shutov, E. A.; Turukina, T. E.; Anisimov, T. S.

    2017-04-01

    The forecasting problem for energy facilities with a power demand exceeding 670 kW is currently one of the main challenges. Under the rules of the retail electricity market, such customers also pay for deviations of actual energy consumption from the planned value. In line with the hierarchical structure of the electricity market, a guaranteeing supplier must respect the interests of distribution and generation companies, which require load leveling. For an industrial enterprise, this is achievable only within the technological process, through the implementation of energy-efficient processing chains with an adaptive function and a forecasting tool. In this setting, the primary objective of forecasting is to reduce energy costs by taking into account the 24-hour energy price profile when forming the pumping unit work schedule. A virtual model of a pumping unit with a variable-frequency drive is considered. The forecasting tool and the optimizer are integrated into a typical control circuit. An economic assessment of the optimization method is provided.
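
    A minimal sketch of the scheduling idea with invented prices and pump data: run the pumping unit in the cheapest hours of the 24-hour price profile that still cover the required daily run-time:

```python
import numpy as np

rng = np.random.default_rng(7)
# Invented 24-hour price profile (e.g. cost units per MWh).
price = 30 + 20 * np.sin(np.linspace(0.0, 2 * np.pi, 24)) + rng.normal(0, 2, 24)

pump_kw = 400.0                        # pump electrical power, assumed
hours_needed = 10                      # run-hours to meet daily demand, assumed

cheapest = np.argsort(price)[:hours_needed]        # pick the cheapest hours
schedule = np.zeros(24, dtype=bool)
schedule[cheapest] = True

flat_cost = pump_kw / 1000 * hours_needed * price.mean()   # mean-price baseline
opt_cost = pump_kw / 1000 * price[schedule].sum()
print(f"mean-price schedule: {flat_cost:.1f}, optimized: {opt_cost:.1f} (cost units)")
```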

  4. Optimization of the silicon subcell for III-V on silicon multijunction solar cells: Key differences with conventional silicon technology

    NASA Astrophysics Data System (ADS)

    García-Tabarés, Elisa; Martín, Diego; García, Iván; Lelièvre, Jean François; Rey-Stolle, Ignacio

    2012-10-01

    Dual-junction solar cells formed by a GaAsP or GaInP top cell and a silicon (Si) bottom cell are attractive candidates for materializing the long-sought integration of III-V materials on Si for photovoltaic (PV) applications. Such integration would offer a cost breakthrough for PV technology, unifying the low cost of Si and the efficiency potential of III-V multijunction solar cells. The optimization of Si solar cell properties in flat-plate PV technology is well known; nevertheless, it has been shown that Si substrates behave differently when processed in an MOVPE reactor. In this study, we analyze several factors influencing the bottom subcell performance, namely: (1) the emitter formation as a result of phosphorus diffusion; (2) the passivation quality provided by the GaP nucleation layer; and (3) the impact of the process on the bottom subcell PV properties.

  5. Solution for a bipartite Euclidean traveling-salesman problem in one dimension

    NASA Astrophysics Data System (ADS)

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
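
    The bound stated in the abstract is easy to check numerically on small instances; for convex costs on the line the optimal assignment matches sorted points to sorted points, and the optimal alternating cycle can be brute-forced for N = 5 points per color:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
N = 5
red = np.sort(rng.uniform(0.0, 1.0, N))
blue = np.sort(rng.uniform(0.0, 1.0, N))
c = (red[:, None] - blue[None, :]) ** 2      # squared-distance cost matrix

# Optimal assignment: for convex costs on the line, sorted-to-sorted matching.
assignment = float(np.sum((red - blue) ** 2))

def cycle_cost(red_order, blue_order):
    # Alternating cycle r0 -> b0 -> r1 -> b1 -> ... -> r0 (2N edges).
    return sum(c[red_order[i], blue_order[i]]
               + c[red_order[(i + 1) % N], blue_order[i]] for i in range(N))

best_cycle = min(cycle_cost((0,) + rp, bp)
                 for rp in permutations(range(1, N))
                 for bp in permutations(range(N)))
print(f"assignment: {assignment:.4f}, cycle: {best_cycle:.4f}, "
      f"ratio: {best_cycle / assignment:.2f} (expected >= 2)")
```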

  6. Solution for a bipartite Euclidean traveling-salesman problem in one dimension.

    PubMed

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.

  7. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object, focusing on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along the optimal path, the path to the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here it is chosen so that the air vehicle reaches the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along the optimal path. It is also shown that the cost functional used is convex; convexity guarantees the existence of an optimal control. The paper also presents simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal path is the Pontryagin Minimum Principle.
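
    For reference, the generic form of the Pontryagin Minimum Principle the abstract invokes (a standard textbook statement; the paper's specific cost functional and vehicle dynamics differ):

```latex
% Minimize J over controls u, subject to the state dynamics:
\[
  J(u) = \int_{0}^{T} L\bigl(x(t), u(t)\bigr)\,dt,
  \qquad \dot{x} = f(x, u).
\]
% Necessary conditions via the Hamiltonian and costate lambda:
\[
  H(x, u, \lambda) = L(x, u) + \lambda^{\top} f(x, u),
  \qquad \dot{\lambda} = -\frac{\partial H}{\partial x},
  \qquad u^{*}(t) = \arg\min_{u} H\bigl(x^{*}(t), u, \lambda(t)\bigr).
\]
% Convexity of the cost functional (shown in the paper) supports the
% existence of the optimal control.
```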

  8. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  9. One-dimensional Euclidean matching problem: exact solutions, correlation functions, and universality.

    PubMed

    Caracciolo, Sergio; Sicuro, Gabriele

    2014-10-01

    We discuss the equivalence relation between the Euclidean bipartite matching problem on the line and on the circumference and the Brownian bridge process on the same domains. The equivalence allows us to compute the correlation function and the optimal cost of the original combinatorial problem in the thermodynamic limit; moreover, we also solve the minimax problem on the line and on the circumference. The properties of the average cost and correlation functions are discussed.
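
    The tractability of the one-dimensional problem rests on a classical fact that is easy to verify numerically: for a convex cost such as the squared distance, the optimal bipartite matching on the line pairs the two point sets in sorted order. The check below is ours, not the paper's.

        # Verify: sorted-order matching is optimal on the line for cost (x-y)^2.
        import itertools
        import random

        random.seed(2)
        n = 6
        red = sorted(random.random() for _ in range(n))
        blue = sorted(random.random() for _ in range(n))

        sorted_cost = sum((r - b) ** 2 for r, b in zip(red, blue))
        brute_cost = min(
            sum((red[i] - blue[p[i]]) ** 2 for i in range(n))
            for p in itertools.permutations(range(n))
        )
        print(abs(sorted_cost - brute_cost) < 1e-12)  # True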

  10. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. We discuss a solution to a design problem with a single load (L) and two stationary support points (S1 and S2), consisting of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution to the problem is then obtained by solving each of the sub-problems in the set in parallel and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert of the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.

  11. Inspection planning development: An evolutionary approach using reliability engineering as a tool

    NASA Technical Reports Server (NTRS)

    Graf, David A.; Huang, Zhaofeng

    1994-01-01

    This paper proposes an evolutionary approach to inspection planning which introduces various reliability engineering tools into the process and assesses system trade-offs among reliability, engineering requirements, manufacturing capability, and inspection cost to establish an optimal inspection plan. The examples presented in the paper illustrate some advantages and benefits of the new approach. Through the analysis, reliability and engineering impacts due to manufacturing process capability and inspection uncertainty are clearly understood; the most cost-effective and efficient inspection plan can be established with the associated risks well controlled; some inspection reductions and relaxations are well justified; and design feedback and changes may be initiated from the analysis conclusions to further enhance reliability and reduce cost. The approach is particularly promising as global competition and customer expectations for quality improvement rapidly increase.

  12. Health services research in urology.

    PubMed

    Yu, Hua-Yin; Ulmer, William; Kowalczyk, Keith J; Hu, Jim C

    2011-06-01

    Health services research (HSR) is increasingly important given the focus on patient-centered, cost-effective, high-quality health care. We examine how HSR affects contemporary evidence-based urologic practice and its role in shaping future urologic research and care. PubMed, urologic texts, and lay literature were reviewed for terms pertaining to HSR/outcomes research and urologic disease processes. HSR is a broad discipline that focuses on access, cost, and outcomes of health care. It has been applied to a myriad of urologic conditions to identify deficiencies in access, to evaluate the cost-effectiveness of therapies, and to evaluate structural, process, and outcome quality measures. HSR utilizes an evidence-based approach to identify the most effective ways to organize/manage, finance, and deliver high-quality urologic care, and to tailor care optimally to individuals.

  13. Guideline adherence is worth the effort: a cost-effectiveness analysis in intrauterine insemination care.

    PubMed

    Haagen, E C; Nelen, W L D M; Adang, E M; Grol, R P T M; Hermens, R P M G; Kremer, J A M

    2013-02-01

    Is optimal adherence to guideline recommendations in intrauterine insemination (IUI) care cost-effective from a societal perspective when compared with suboptimal adherence to guideline recommendations? Optimal guideline adherence in IUI care has substantial economic benefits when compared with suboptimal guideline adherence. Fertility guidelines are tools to help health-care professionals and patients make better decisions about clinically effective, safe and cost-effective care. Up to now, there has been limited published evidence about the association between guideline adherence and cost-effectiveness in fertility care. In a retrospective cohort study involving medical record analysis and a patient survey (n = 415), interviews with staff members (n = 13) and a review of hospitals' financial department reports and literature, data were obtained about patient characteristics, process aspects and clinical outcomes of IUI care and resources consumed. In the cost-effectiveness analyses, restricted to four relevant guideline recommendations, the ongoing pregnancy rate per couple (effectiveness), the average medical and non-medical costs of IUI care, possible additional IVF treatment, pregnancy, delivery and the period from birth up to 6 weeks after birth for both mother and offspring per couple (costs) and the incremental net monetary benefits were calculated to investigate whether optimal guideline adherence is cost-effective from a societal perspective when compared with suboptimal guideline adherence. Seven hundred and sixty-five of 1100 randomly selected infertile couples from the databases of the fertility laboratories of 10 Dutch hospitals, including 1 large university hospital providing tertiary care and 9 public hospitals providing secondary care, were willing to participate, but 350 couples were excluded because of ovulatory disorders or the use of donated spermatozoa (n = 184), still ongoing IUI treatment (n = 143) or no access to their medical records (n = 23). As a result, 415 infertile couples who started a total of 1803 IUI cycles were eligible for the cost-effectiveness analyses. Optimal adherence to the guideline recommendations about sperm quality, the total number of IUI cycles and the dose of human chorionic gonadotrophin was cost-effective, with an incremental net monetary benefit between € 645 and over € 7500 per couple, depending on the recommendation and assuming a willingness to pay € 20 000 for an ongoing pregnancy. Because not all recommendations applied to all 415 included couples, smaller groups were left for some of the cost-effectiveness analyses, and one integrated analysis with all recommendations within one model was impossible. Optimal guideline adherence in IUI care has substantial economic benefits when compared with suboptimal guideline adherence. For Europe, where over 144 000 IUI cycles are initiated each year to treat ≈ 32 000 infertile couples, this could mean a possible cost saving of at least 20 million euro yearly. Therefore, it is valuable to make an effort to improve guideline development and implementation.
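
    The decision rule behind these numbers is the incremental net monetary benefit, INMB = λ·ΔE − ΔC, with λ the willingness to pay per ongoing pregnancy. A minimal worked example with invented inputs rather than study data:

        # Hedged illustration of the incremental net monetary benefit.
        willingness_to_pay = 20_000.0   # euro per ongoing pregnancy (as above)
        delta_effect = 0.08             # hypothetical gain in pregnancy rate per couple
        delta_cost = 955.0              # hypothetical extra cost per couple (euro)

        inmb = willingness_to_pay * delta_effect - delta_cost
        print(f"INMB = {inmb:.0f} euro per couple")  # positive => cost-effective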

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitney, S.E.

    This presentation will examine process systems engineering R&D needs for application to advanced fossil energy (FE) systems and highlight ongoing research activities at the National Energy Technology Laboratory (NETL) under the auspices of a recently launched Collaboratory for Process & Dynamic Systems Research. The three current technology focus areas include: 1) High-fidelity systems with NETL's award-winning Advanced Process Engineering Co-Simulator (APECS) technology for integrating process simulation with computational fluid dynamics (CFD) and virtual engineering concepts, 2) Dynamic systems with R&D on plant-wide IGCC dynamic simulation, control, and real-time training applications, and 3) Systems optimization including large-scale process optimization, stochastic simulation for risk/uncertainty analysis, and cost estimation. Continued R&D aimed at these and other key process systems engineering models, methods, and tools will accelerate the development of advanced gasification-based FE systems and produce increasingly valuable outcomes for DOE and the Nation.

  15. Layout design-based research on optimization and assessment method for shipbuilding workshop

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Meng, Mei; Liu, Shuang

    2013-06-01

    This study examines a three-dimensional visualization approach, with emphasis on an improved genetic algorithm for the layout design of a standard, discrete shipbuilding workshop. Using a steel processing workshop as an example, the principle of minimum logistics cost is applied to obtain an ideal equipment layout and a mathematical model whose objective is to minimize the total necessary travel distance between machines. An improved control operator raises the iterative efficiency of the genetic algorithm and yields the relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the layout results is subsequently established using the analytic hierarchy process (AHP). The optimized discrete production workshop has practical significance as a reference for the optimization and layout of digitalized production workshops.
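
    A genetic algorithm for this kind of layout problem can be sketched compactly. The version below is our simplification (a single-row layout with plain order crossover and swap mutation, not the paper's improved control operator), and the flow matrix is invented:

        # Compact GA: place machines in slots so that total material flow
        # times travel distance is minimized. flow[i][j] is a hypothetical
        # handling frequency between machines i and j.
        import random

        flow = [[0, 8, 2, 5],
                [8, 0, 3, 1],
                [2, 3, 0, 6],
                [5, 1, 6, 0]]
        n = len(flow)

        def logistics_cost(perm):
            # Distance between slots is their index difference (unit spacing).
            pos = {m: s for s, m in enumerate(perm)}
            return sum(flow[i][j] * abs(pos[i] - pos[j])
                       for i in range(n) for j in range(i + 1, n))

        def order_crossover(p1, p2):
            a, b = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[a:b] = p1[a:b]
            rest = [m for m in p2 if m not in child]
            for k in range(n):
                if child[k] is None:
                    child[k] = rest.pop(0)
            return child

        random.seed(3)
        pop = [random.sample(range(n), n) for _ in range(20)]
        for _ in range(100):
            pop.sort(key=logistics_cost)
            parents = pop[:10]                      # truncation selection
            children = []
            for _ in range(10):
                c = order_crossover(*random.sample(parents, 2))
                if random.random() < 0.2:           # swap mutation
                    i, j = random.sample(range(n), 2)
                    c[i], c[j] = c[j], c[i]
                children.append(c)
            pop = parents + children
        best = min(pop, key=logistics_cost)
        print(best, logistics_cost(best))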

  16. Union Carbide's PECOP cops $500,000 fuel cut

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, E.

    1979-10-29

    Union Carbide's Plant Energy Cost Optimization Program (PECOP) is saving $500,000 a year at a Taft, Louisiana chemical complex. Day-to-day decisions affecting fuel costs and plant operations are based on a system of computerized data-gathering and processing. Although Carbide's system is not unique, it is more extensive and more comprehensive than the systems used by other chemical companies. The plant has decreased its energy consumption 12% below the 1972 level while increasing production by 30%. The system was initiated in response to the shift from raw materials to energy as the major production cost.

  17. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    NASA Astrophysics Data System (ADS)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend of automation in the modern manufacturing industry, human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in selecting the optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of an appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician/cutting tool expert based on his knowledge or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using Mathworks Matlab Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of an appropriate cutting tool and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.
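
    The selection step can be illustrated, in a much-reduced form, as multi-criteria scoring over a candidate tool set. This sketch is ours (a weighted sum rather than the paper's ANN/fuzzy/GA machinery), and all tool data are invented:

        # Score candidate tools by a weighted sum of normalized criteria.
        tools = {                 # (material removal rate, tool life h, cost $)
            "insert_A": (120.0, 45.0, 18.0),
            "insert_B": (150.0, 30.0, 22.0),
            "insert_C": (100.0, 60.0, 15.0),
        }
        weights = (0.6, 0.25, 0.15)  # favor MRR, then life, then (low) cost

        def score(vals):
            mrr, life, cost = vals
            mrrs, lives, costs = zip(*tools.values())
            # Normalize each criterion to [0, 1] over the candidate set.
            norm = lambda v, lo, hi: (v - lo) / (hi - lo) if hi > lo else 1.0
            return (weights[0] * norm(mrr, min(mrrs), max(mrrs))
                    + weights[1] * norm(life, min(lives), max(lives))
                    + weights[2] * (1 - norm(cost, min(costs), max(costs))))

        best = max(tools, key=lambda t: score(tools[t]))
        print(best, round(score(tools[best]), 3))  # insert_B scores highest here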

  18. The Potential of Computer Controlled Optimizing Equipment in the Wooden Furniture Industry

    Treesearch

    R. Edward Thomas; Urs Buehlmann; Urs Buehlmann

    2003-01-01

    The goal of the wooden furniture industry is to convert lumber into parts using the most efficient and cost-effective processing methods. The key steps in processing lumber are removing the regions that contain unacceptable defects or character marks and cutting the remaining areas to the widths and lengths of needed parts. Such equipment has been used in furniture...

  19. Stochastic Adaptive Estimation and Control.

    DTIC Science & Technology

    1994-10-26

    Marcus, "Language Stability and Stabilizability of Discrete Event Dynamical Systems," SIAM Journal on Control and Optimization, 31, September 1993...in the hierarchical control of flexible manufacturing systems; in this problem, the model involves a hybrid process in continuous time whose state is...of the average cost control problem for discrete-time Markov processes. Our exposition covers from finite to Borel state and action spaces and

  20. Fabrication process scale-up and optimization for a boron-aluminum composite radiator

    NASA Technical Reports Server (NTRS)

    Okelly, K. P.

    1973-01-01

    Design approaches to a practical utilization of a boron-aluminum radiator for the space shuttle orbiter are presented. The program includes studies of laboratory composite material processes to determine the feasibility of a structural and functional composite radiator panel, and to estimate the cost of its fabrication. The objective is the incorporation of a boron-aluminum modular radiator on the space shuttle.

  1. Microfluidics: a transformational tool for nanomedicine development and production.

    PubMed

    Garg, Shyam; Heuck, Gesine; Ip, Shell; Ramsay, Euan

    2016-11-01

    Microfluidic devices are microscale fluidic circuits used to manipulate liquids at the nanoliter scale. The ability to control the mixing of fluids and the continuous nature of the process make them apt for solvent/antisolvent precipitation of drug-delivery nanoparticles. This review describes the use of numerous microfluidic designs for the formulation and production of lipid nanoparticles, liposomes and polymer nanoparticles to encapsulate and deliver small molecule or genetic payloads. The advantages of microfluidics are illustrated through examples from the literature comparing conventional processes such as beaker and T-tube mixing to microfluidic approaches. Particular emphasis is placed on examples of microfluidic nanoparticle formulations that have been tested in vitro and in vivo. Fine control of process parameters afforded by microfluidics allows unprecedented optimization of nanoparticle quality and encapsulation efficiency. Automation improves the reproducibility and optimization of formulations. Furthermore, the continuous nature of the microfluidic process is inherently scalable, allowing optimization at low volumes, which is advantageous with scarce or costly materials, as well as scale-up through process parallelization. Given these advantages, microfluidics is poised to become the new paradigm for nanomedicine formulation and production.

  2. Minimization of energy and surface roughness of the products machined by milling

    NASA Astrophysics Data System (ADS)

    Belloufi, A.; Abdelkrim, M.; Bouakba, M.; Rezgui, I.

    2017-08-01

    Metal cutting represents a large portion of the manufacturing industries, which makes it one of the largest consumers of energy. Energy consumption is an indirect source of carbon footprint, since CO2 emissions come from the production of energy; high energy consumption therefore leads to high cost and a large amount of CO2 emissions. To date, much research has been done on metal cutting, but the environmental problems of the process are rarely discussed. The right selection of cutting parameters is an effective way to reduce energy consumption because of the direct relationship between energy consumption and cutting parameters in machining processes. One objective of this research is therefore to propose an optimization strategy suitable for machining processes (milling) that achieves the optimum cutting conditions based on the criterion of the energy consumed during milling. In this paper the problem of the energy consumed in milling is solved by a chosen optimization method. The optimization is carried out according to the different requirements of roughing and finishing, under various technological constraints.
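
    A minimal sketch of such a cutting-parameter optimization, using a deliberately simple hypothetical energy and roughness model rather than the paper's formulation:

        # Minimize energy over cutting speed v (m/min) and feed f (mm/rev),
        # subject to a surface-roughness cap. Models below are toy assumptions.
        import numpy as np
        from scipy.optimize import minimize

        def energy(x):
            v, f = x
            t_mach = 1.0e3 / (v * f)       # machining time ~ 1/(v*f)
            power = 0.8 + 2.0e-3 * v       # specific power grows with speed
            return power * t_mach          # kJ, purely illustrative

        def roughness(x):
            v, f = x
            return 500.0 * f ** 2 / np.sqrt(v)   # toy Ra model, <= 1.6 um

        res = minimize(energy, x0=np.array([100.0, 0.2]),
                       bounds=[(50, 300), (0.05, 0.5)],
                       constraints=[{"type": "ineq",
                                     "fun": lambda x: 1.6 - roughness(x)}],
                       method="SLSQP")
        print(res.x, res.fun)   # roughness constraint binds at the optimum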

  3. USHPRR FUEL FABRICATION PILLAR: FABRICATION STATUS, PROCESS OPTIMIZATIONS, AND FUTURE PLANS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, Jared M.; Joshi, Vineet V.; Lavender, Curt A.

    The Fuel Fabrication (FF) Pillar, a project within the U.S. High Performance Research Reactor Conversion program of the National Nuclear Security Administration’s Office of Material Management and Minimization, is tasked with the scale-up and commercialization of high-density monolithic U-Mo fuel for the conversion of appropriate research reactors to use of low-enriched fuel. The FF Pillar has made significant steps to demonstrate and optimize the baseline co-rolling process using commercial-scale equipment at both the Y-12 National Security Complex (Y-12) and BWX Technologies (BWXT). These demonstrations include the fabrication of the next irradiation experiment, Mini-Plate 1 (MP-1), and casting optimizations at Y-12. The FF Pillar uses a detailed process flow diagram to identify potential gaps in processing knowledge or demonstration, which helps direct the strategic research agenda of the FF Pillar. This paper describes the significant progress made toward understanding the fuel characteristics, and models developed to make informed decisions, increase process yield, and decrease lifecycle waste and costs.

  4. Cost analysis of an electricity supply chain using modification of price based dynamic economic dispatch in wheeling transaction scheme

    NASA Astrophysics Data System (ADS)

    Wahyuda; Santosa, Budi; Rusdiansyah, Ahmad

    2018-04-01

    Deregulation of the electricity market requires coordination between parties to synchronize optimization on the production side (power stations) and the transport side (transmission). The electricity supply chain presented in this article is designed to facilitate that coordination. Generally, the production side is optimized with a price based dynamic economic dispatch (PBDED) model, while the transmission side is optimized with a multi-echelon distribution model, and the two optimizations are done separately. This article proposes a joint model of PBDED and multi-echelon distribution for the combined optimization of production and transmission. The combined optimization is important because changes in electricity demand on the customer side cause changes on the production side that in turn alter the transmission path. Transmission gives rise to two cost components: the cost of losses, and the cost of using the transmission network (the wheeling transaction). Costs due to losses are calculated from ohmic losses, while the cost of using transmission lines is calculated with the MW-mile method. As a result, this method is able to provide a best-allocation analysis for electricity transactions, as well as analyses of emission levels in power generation and of costs. For the calculation of transmission costs, the reverse MW-mile method produces a lower cost than the absolute MW-mile method.
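
    The MW-mile idea reduces to a short calculation. Conventions differ between variants; the sketch below uses one common textbook form (absolute: charge on flow magnitudes; reverse: credit counter-flows), with invented line data, and is not necessarily the article's exact formulation:

        # Illustrative wheeling-charge calculation. Each line has a length
        # (miles), a unit cost ($/MW-mile) and the flow change caused by the
        # transaction; counter-flows are negative.
        lines = [  # (length_miles, cost_per_mw_mile, flow_change_mw)
            (50.0, 1.2,  30.0),
            (80.0, 1.0, -10.0),   # counter-flow on this line
            (40.0, 1.5,  15.0),
        ]

        absolute = sum(L * c * abs(dP) for L, c, dP in lines)
        # "Reverse" MW-mile credits counter-flows instead of charging for them.
        reverse = sum(L * c * dP for L, c, dP in lines)

        print(f"absolute MW-mile charge: {absolute:.0f} $")
        print(f"reverse  MW-mile charge: {reverse:.0f} $")  # cheaper, as in the study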

  5. Stochastic Optimization in The Power Management of Bottled Water Production Planning

    NASA Astrophysics Data System (ADS)

    Antoro, Budi; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    This paper reviews a model developed to minimize production costs in bottled water production planning through stochastic optimization. Planning is how management achieves the goals that have been set, and each management level in the organization needs planning activities. The model built is a two-stage stochastic model that aims to minimize the cost of bottled water production while accounting for disturbances, such as power-supply interruptions, that may occur during the production process. The model was developed to minimize production cost, assuming that the availability of packaging raw materials is sufficient for each kind of bottle. The minimum cost for each kind of bottled water production is expressed as the expectation over scenarios, each with its probability. The uncertainty represents the production quantities and the timing of power-supply interruptions; this ensures that the number of interruptions that occur does not exceed the limit of the contract agreement the company has made with its power suppliers.
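
    The two-stage structure can be conveyed with a tiny recourse model. This is our sketch with invented scenario data, not the paper's model: the production plan is fixed before the interruption scenario is known, and shortfalls incur a penalty.

        # Bare-bones two-stage recourse: pick planned hours, then take the
        # expectation of cost over interruption scenarios.
        scenarios = [  # (probability, hours of interruption)
            (0.6, 0.0),
            (0.3, 2.0),
            (0.1, 5.0),
        ]
        rate = 1000.0        # bottles/hour, shift length 8 h
        unit_cost = 0.05     # $ per bottle produced
        penalty = 0.20       # $ per bottle short of demand
        demand = 7000.0

        def expected_cost(planned_hours):
            cost = 0.0
            for prob, outage in scenarios:
                produced = rate * max(planned_hours - outage, 0.0)
                short = max(demand - produced, 0.0)
                cost += prob * (unit_cost * produced + penalty * short)
            return cost

        best = min((expected_cost(h), h) for h in [6.0, 6.5, 7.0, 7.5, 8.0])
        print(f"plan {best[1]} h, expected cost {best[0]:.0f} $")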

  6. Optimizing product life cycle processes in design phase

    NASA Astrophysics Data System (ADS)

    Faneye, Ola. B.; Anderl, Reiner

    2002-02-01

    Life cycle concepts do not only serve as a basis for helping product developers understand the dependencies between products and their life cycles, they also help in identifying potential opportunities for improvement in products. Common traditional concepts focus mainly on energy and material flow across life phases, necessitating the availability of metrics derived from a reference product. Knowledge of life cycle processes gained from an existing product is directly reused in its redesign. Nevertheless, depending on sales volume, the environmental impact before product optimization can be substantial. With modern information technologies today, computer-aided life cycle methodologies can be applied well before product use. On the basis of a virtual prototype, life cycle processes are analyzed and optimized using simulation techniques. This preventive approach does not only help in minimizing (or even eliminating) environmental burdens caused by the product; costs incurred due to changes in the real product can also be avoided. The paper highlights the relationship between product and life cycle and presents a computer-based methodology for optimizing the product life cycle during design, as presented by SFB 392: Design for Environment - Methods and Tools at the Technical University of Darmstadt.

  7. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    To address the high computational cost of the traditional Kalman filter in SINS/GPS integrated navigation, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Plenty of invalid operations can thus be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a numerical approach the method needs no precision-loss transformation/approximation of system modules, and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
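
    The core trick of skipping operations that structure makes redundant can be shown in a few lines. This demonstration is ours, not the paper's filter: when the transition matrix is block sparse, the product F·P only needs its nonzero blocks.

        # Block-sparse matrix product: skip the zero blocks of F entirely.
        import numpy as np

        k = 3                                  # block size
        rng = np.random.default_rng(0)
        P = rng.standard_normal((3 * k, 3 * k))

        F = np.zeros((3 * k, 3 * k))
        F[0:k, 0:k] = np.eye(k)                # only three nonzero blocks
        F[k:2*k, 0:k] = rng.standard_normal((k, k))
        F[2*k:, 2*k:] = np.eye(k)
        nonzero_blocks = [(0, 0), (1, 0), (2, 2)]

        def blocked_mul(F, P, blocks, k):
            out = np.zeros_like(P)
            for bi, bj in blocks:   # out[bi, :] += F[bi, bj] @ P[bj, :]
                out[bi*k:(bi+1)*k, :] += (
                    F[bi*k:(bi+1)*k, bj*k:(bj+1)*k] @ P[bj*k:(bj+1)*k, :])
            return out

        print(np.allclose(blocked_mul(F, P, nonzero_blocks, k), F @ P))  # True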

  8. Design Optimization of Microalloyed Steels Using Thermodynamics Principles and Neural-Network-Based Modeling

    NASA Astrophysics Data System (ADS)

    Mohanty, Itishree; Chintha, Appa Rao; Kundu, Saurabh

    2018-06-01

    The optimization of process parameters and composition is essential to achieve the desired properties with minimal additions of alloying elements in microalloyed steels. In some cases, it may be possible to substitute such steels for those which are more richly alloyed. However, process control involves a larger number of parameters, making the relationship between structure and properties difficult to assess. In this work, neural network models have been developed to estimate the mechanical properties of steels containing Nb + V or Nb + Ti. The outcomes have been validated by thermodynamic calculations and plant data. It has been shown that subtle thermodynamic trends can be captured by the neural network model. Some experimental rolling data have also been used to support the model, which in addition has been applied to calculate the costs of optimizing microalloyed steel. The generated Pareto fronts identify many combinations of strength and elongation, making it possible to select composition and process parameters for a range of applications. The ANN model and the optimization model are being used for prediction of properties in a running plant and for development of new alloys, respectively.
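
    Extracting the non-dominated designs from such results is straightforward. A small helper of our own, with invented (strength, elongation) pairs and both objectives to be maximized:

        # Pareto-front extraction for two maximized objectives.
        designs = [  # (strength MPa, elongation %)
            (620, 18), (650, 15), (600, 22), (640, 17), (610, 16), (660, 12),
        ]

        def pareto_front(points):
            front = []
            for p in points:
                dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                                for q in points)
                if not dominated:
                    front.append(p)
            return sorted(front)

        print(pareto_front(designs))
        # [(600, 22), (620, 18), (640, 17), (650, 15), (660, 12)]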

  9. Electrochemical oxidation of ampicillin antibiotic at boron-doped diamond electrodes and process optimization using response surface methodology.

    PubMed

    Körbahti, Bahadır K; Taşyürek, Selin

    2015-03-01

    Electrochemical oxidation and process optimization of ampicillin antibiotic at boron-doped diamond electrodes (BDD) were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance well according to normal probability, studentized residuals, and outlier t residual plots. Residual plots followed a normal distribution, and outlier t values indicated that the approximations of the fitted models to the quadratic response surfaces were very good. Optimum operating conditions were determined at 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm(2) current density, and 36 °C reaction temperature. Under response surface optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1 %, 92.5 %, and 71.7 kWh/kg CODr, respectively.

  10. Time-driven activity-based costing to identify opportunities for cost reduction in pediatric appendectomy.

    PubMed

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2016-12-01

    As reimbursement programs shift to value-based payment models emphasizing quality and efficient healthcare delivery, there exists a need to better understand process management to unearth true costs of patient care. We sought to identify cost-reduction opportunities in simple appendicitis management by applying a time-driven activity-based costing (TDABC) methodology to this high-volume surgical condition. Process maps were created using medical record time stamps. Labor capacity cost rates were calculated using national median physician salaries, weighted nurse-patient ratios, and hospital cost data. Consumable costs for supplies, pharmacy, laboratory, and food were derived from the hospital general ledger. Time-driven activity-based costing resulted in precise per-minute calculation of personnel costs. Highest costs were in the operating room ($747.07), hospital floor ($388.20), and emergency department ($296.21). Major contributors to length of stay were emergency department evaluation (270min), operating room availability (395min), and post-operative monitoring (1128min). The TDABC model led to $1712.16 in personnel costs and $1041.23 in consumable costs for a total appendicitis cost of $2753.39. Inefficiencies in healthcare delivery can be identified through TDABC. Triage-based standing delegation orders, advanced practice providers, and same day discharge protocols are proposed cost-reducing interventions to optimize value-based care for simple appendicitis.
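
    The TDABC arithmetic itself is a capacity cost rate per minute times the minutes each resource is actually consumed. A toy version with invented numbers, not the study's figures:

        # Time-driven activity-based costing in miniature.
        resources = {  # resource: (cost per available minute $, minutes used)
            "surgeon":   (4.50, 60),
            "or_room":   (7.00, 45),
            "ed_nurse":  (0.90, 120),
            "floor_bed": (0.35, 1128),
        }

        personnel_cost = sum(rate * minutes for rate, minutes in resources.values())
        consumables = 1041.23   # a general-ledger style lookup, hypothetical here
        print(f"episode cost: {personnel_cost + consumables:.2f} $")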

  11. Optimizations of geothermal cycle shell and tube exchangers of various configurations with variable fluid properties and site specific fouling. [SIZEHX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, W.L.; Pines, H.S.; Silvester, L.F.

    1978-03-01

    A new heat exchanger program, SIZEHX, is described. This program allows single step multiparameter cost optimizations on single phase or supercritical exchanger arrays with variable properties and arbitrary fouling for a multitude of matrix configurations and fluids. SIZEHX uses a simplified form of Tinker's method for characterization of shell side performance; the Starling modified BWR equation for thermodynamic properties of hydrocarbons; and transport properties developed by NBS. Results of four parameter cost optimizations on exchangers for specific geothermal applications are included. The relative mix of capital cost, pumping cost, and brine cost ($/Btu) is determined for geothermal exchangers illustrating the invariant nature of the optimal cost distribution for fixed unit costs.

  12. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  13. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real-world situations, since they can dramatically reduce the computational cost of using IHMs in an iterative model evaluation process. In addition, our studies generated insights into the human-nature water conflicts in the specific study area and suggested potential solutions to address them.

  14. Interactive Genetic Algorithm - An Adaptive and Interactive Decision Support Framework for Design of Optimal Groundwater Monitoring Plans

    NASA Astrophysics Data System (ADS)

    Babbar-Sebens, M.; Minsker, B. S.

    2006-12-01

    In the water resources management field, decision making encompasses many kinds of engineering, social, and economic constraints and objectives. Representing all of these problem-dependent criteria through models (analytical or numerical) and various formulations (e.g., objectives, constraints, etc.) within an optimization-simulation system can be a very non-trivial issue. Most models and formulations utilized for discerning desirable traits in a solution can only approximate the decision maker's (DM) true preference criteria, and they often fail to consider important qualitative and incomputable phenomena related to the management problem. In our research, we have proposed novel decision support frameworks that allow DMs to actively participate in the optimization process. The DMs explicitly indicate their true preferences based on their subjective criteria and the results of various simulation models and formulations. The feedback from the DMs is then used to guide the search process towards solutions that are "all-rounders" from the perspective of the DM. The two main research questions explored in this work are: a) Does interaction between the optimization algorithm and a DM assist the system in searching for groundwater monitoring designs that are robust from the DM's perspective?, and b) How can an interactive search process be made more effective when human factors, such as human fatigue and cognitive learning processes, affect the performance of the algorithm? The application of these frameworks on a real-world groundwater long-term monitoring (LTM) case study in Michigan highlighted the following salient advantages: a) in contrast to the non-interactive optimization methodology, the proposed interactive frameworks were able to identify low cost monitoring designs whose interpolation maps respected the expected spatial distribution of the contaminants, b) for many same-cost designs, the interactive methodologies were able to propose multiple alternatives that met the DM's preference criteria, therefore allowing the expert to select among several strong candidate designs depending on her/his LTM budget, c) two of the methodologies - Case-Based Micro Interactive Genetic Algorithm (CBMIGA) and Interactive Genetic Algorithm with Mixed Initiative Interaction (IGAMII) - were also able to assist in controlling human fatigue and adapt to the DM's learning process.

  15. FMS: The New Wave of Manufacturing Technology.

    ERIC Educational Resources Information Center

    Industrial Education, 1986

    1986-01-01

    Flexible manufacturing systems (FMS) are described as a marriage of all of the latest technologies--robotics, numerical control, CAD/CAM (computer-assisted design/computer-assisted manufacturing), etc.--into a cost-efficient, optimized production process yielding the greatest flexibility in making various parts. A typical curriculum to teach FMS…

  16. Engineering Change Management Method Framework in Mechanical Engineering

    NASA Astrophysics Data System (ADS)

    Stekolschik, Alexander

    2016-11-01

    Engineering changes affect different process chains inside and outside the company, and account for most error costs and schedule shifts. In fact, 30 to 50 per cent of development costs result from technical changes. Controlling engineering change processes can help us to avoid errors and risks, and contributes to cost optimization and a shorter time to market. This paper presents a method framework for controlling engineering changes at mechanical engineering companies. The developed classification of engineering changes, and the corresponding process requirements, builds the basis for the method framework. The method framework comprises two main areas: special data objects managed in different engineering IT tools, and a process framework. Objects from both areas are building blocks that can be selected into the overall business process based on the engineering process type and the change classification. The process framework contains steps for the creation of change objects (both for the overall change and for parts), change implementation, and release. Companies can select single process building blocks from the framework, depending on the product development process and the change impact. The developed change framework has been implemented at a division (10,000 employees) of a big German mechanical engineering company.

  17. Development and optimization of a new culture media using extruded bean as nitrogen source.

    PubMed

    Batista, Karla A; Fernandes, Kátia F

    2015-01-01

    The composition of a culture medium is one of the most important parameters to be analyzed in biotechnological processes with industrial purposes, because around 30-40% of the production costs were estimated to be accounted for by the cost of the growth medium [1]. Since medium optimization using a one-factor-at-a-time approach is time-consuming, expensive, and often leads to misinterpretation of results, statistical experimental design has been applied to medium optimization for growth and metabolite production [2-5]. In this scenario, the use of mixture design to develop a culture medium containing a cheaper nitrogen source seems appropriate and simple. The focus of this work is therefore to present a detailed description of the steps involved in the development of an optimized culture medium containing extruded bean as nitrogen source. •In a previous work we tested the development of new culture media based on the composition of YPD medium, aiming to reduce bioprocess costs as well as to improve biomass production and heterologous expression. •The developed medium was tested for growth of Saccharomyces cerevisiae and Pichia pastoris (GS 115). •The use of culture media containing extruded bean as sole nitrogen source showed better biomass production and protein expression than those observed in the standard YPD medium.

  18. Flight plan optimization

    NASA Astrophysics Data System (ADS)

    Dharmaseelan, Anoop; Adistambha, Keyne D.

    2015-05-01

    Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes. Routes can be optimized by searching for the best connections based on a cost function defined by the airline. The most common algorithm used for route search is Dijkstra's. Dijkstra's algorithm produces a static result, and the time taken for the search is relatively long. This paper experiments with a new algorithm for route search that combines the principles of simulated annealing and genetic algorithms. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from the genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
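
    For reference, the Dijkstra baseline the paper compares against fits in a few lines. The airway graph and edge costs below are invented:

        # Dijkstra's algorithm on a small connection graph, with edge weights
        # standing in for the airline's cost function.
        import heapq

        graph = {
            "SYD": [("MEL", 60), ("BNE", 75)],
            "MEL": [("PER", 210), ("ADL", 55)],
            "BNE": [("ADL", 140)],
            "ADL": [("PER", 170)],
            "PER": [],
        }

        def dijkstra(src, dst):
            dist, heap = {src: 0}, [(0, src)]
            while heap:
                d, node = heapq.heappop(heap)
                if node == dst:
                    return d
                if d > dist.get(node, float("inf")):
                    continue   # stale heap entry
                for nxt, w in graph[node]:
                    if d + w < dist.get(nxt, float("inf")):
                        dist[nxt] = d + w
                        heapq.heappush(heap, (d + w, nxt))
            return float("inf")

        print(dijkstra("SYD", "PER"))  # cheapest connection cost: 270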

  19. Innovative Method of Analysis of Actual Cost of Work in Progress

    NASA Astrophysics Data System (ADS)

    Fil, O.; Terentev, V.

    2017-11-01

    The article focuses on the basic theory and practical aspects of improving strategic management in terms of enhancing the quality of a technological process; these aspects have been proven experimentally by their introduction into company operations. The authors have worked out proposals aimed at selecting an optimal supplier for building companies, as well as an algorithm for the analysis and optimization of a construction company, based on scientific and practical research and the experimental data obtained.

  20. Optimal Sizing of Energy Storage for Community Microgrids Considering Building Thermal Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Li, Zhi; Starke, Michael R.

    This paper proposes an optimization model for the optimal sizing of energy storage in community microgrids considering the building thermal dynamics and customer comfort preference. The proposed model minimizes the annualized cost of the community microgrid, including energy storage investment, purchased energy cost, demand charge, energy storage degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation. The decision variables are the power and energy capacity of invested energy storage. In particular, we assume the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently by the microgrid central controller while maintaining the indoor temperature in the comfort range set by customers. For this purpose, the detailed thermal dynamic characteristics of buildings have been integrated into the optimization model. Numerical simulation shows significant cost reduction by the proposed model. The impacts of various costs on the optimal solution are investigated by sensitivity analysis.
