Sample records for identify optimal operating

  1. Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method

    NASA Astrophysics Data System (ADS)

    Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin

    2017-12-01

    Edge detection serves to identify the boundaries of an object against a background with which it overlaps. One of the classic methods for edge detection is the Robinson operator, which produces thin, faint, grey edge lines. To overcome these deficiencies, an improved edge detection method is proposed that combines a graph-based approach with the Ant Colony Optimization algorithm. The improvements that can be made are thickening the edges and reconnecting edges that have been cut off. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and assess the extent to which Ant Colony Optimization can improve unoptimized edge detection results and the accuracy of Robinson edge detection. The parameters used to measure edge detection performance are the morphology of the resulting edge lines, MSE and PSNR. The results show that the combined Robinson and Ant Colony Optimization method produces images with sharper and thicker edges. Ant Colony Optimization can therefore be used to optimize the Robinson operator, improving the image produced by Robinson detection by 16.77% on average compared with the classic Robinson result.
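
    For reference, a minimal sketch of the classic Robinson compass-operator stage that the paper starts from is given below (a generic NumPy/SciPy implementation; the ACO optimization stage and the paper's parameters are not reproduced, and the function name is hypothetical).

    ```python
    # Illustrative sketch only: a generic Robinson compass-operator edge detector,
    # not the authors' ACO-optimized pipeline.
    import numpy as np
    from scipy.ndimage import convolve

    def robinson_edges(image: np.ndarray) -> np.ndarray:
        """Return the per-pixel maximum absolute response over the 8 compass masks."""
        north = np.array([[ 1,  2,  1],
                          [ 0,  0,  0],
                          [-1, -2, -1]], dtype=float)
        northwest = np.array([[ 2,  1,  0],
                              [ 1,  0, -1],
                              [ 0, -1, -2]], dtype=float)
        # The 8 Robinson masks are rotations of the north and northwest masks.
        masks = [np.rot90(north, k) for k in range(4)] + \
                [np.rot90(northwest, k) for k in range(4)]
        responses = [np.abs(convolve(image.astype(float), m)) for m in masks]
        return np.max(responses, axis=0)

    # Usage: edges = robinson_edges(gray_image); thresholding then yields a binary edge map.
    ```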

  2. Optimization of wastewater treatment plant operation for greenhouse gas mitigation.

    PubMed

    Kim, Dongwook; Bowen, James D; Ozelkan, Ertunga C

    2015-11-01

    This study deals with the determination of optimal operation of a wastewater treatment system for minimizing greenhouse gas emissions, operating costs, and pollution loads in the effluent. To do this, an integrated performance index that includes three objectives was established to assess system performance. The ASMN_G model was used to perform system optimization aimed at determining a set of operational parameters that can satisfy three different objectives. The complex nonlinear optimization problem was simulated using the Nelder-Mead Simplex optimization algorithm. A sensitivity analysis was performed to identify influential operational parameters on system performance. The results obtained from the optimization simulations for six scenarios demonstrated that there are apparent trade-offs among the three conflicting objectives. The best optimized system simultaneously reduced greenhouse gas emissions by 31%, reduced operating cost by 11%, and improved effluent quality by 2% compared to the base case operation. Copyright © 2015 Elsevier Ltd. All rights reserved.
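
    As a hedged illustration of the derivative-free search described above, the sketch below minimizes a toy weighted performance index with the Nelder-Mead simplex method via SciPy; the objective, weights, and parameter names are invented stand-ins, not the ASMN_G model or the study's operational parameters.

    ```python
    # Minimal sketch: Nelder-Mead simplex minimization of a weighted three-term
    # performance index. All terms and parameters are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    def performance_index(x, w=(0.4, 0.4, 0.2)):
        """Toy stand-in: x = (aeration_setpoint, recycle_ratio). Each term mimics
        one objective (GHG emissions, operating cost, effluent quality) on a common scale."""
        aeration, recycle = x
        ghg = (aeration - 1.5) ** 2 + 0.1 * recycle
        cost = 0.5 * aeration ** 2 + (recycle - 0.8) ** 2
        effluent = (2.0 - aeration) ** 2 + (1.0 - recycle) ** 2
        return w[0] * ghg + w[1] * cost + w[2] * effluent

    result = minimize(performance_index, x0=[1.0, 0.5], method="Nelder-Mead")
    print("optimal parameters:", result.x, "index value:", result.fun)
    ```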

  3. The optimization problems of CP operation

    NASA Astrophysics Data System (ADS)

    Kler, A. M.; Stepanova, E. L.; Maximov, A. S.

    2017-11-01

    The problem of enhancing the energy and economic efficiency of CP is indeed urgent, and one of the main methods for addressing it is optimization of CP operation. To solve the optimization problems of CP operation, Energy Systems Institute, SB of RAS, has developed software that performs optimization calculations of CP operation. The software is based on techniques and software tools for mathematical modeling and optimization of heat and power installations. Detailed mathematical models of new equipment have been developed in this work; they describe with sufficient accuracy the processes that occur in the installations. The developed models include steam turbine models (based on the checking calculation) that account for all steam turbine compartments and the regeneration system, and that also allow calculations with regenerative heaters disconnected. The software for mathematical modeling of equipment and optimization of CP operation implements the technique for optimizing CP operating conditions as software tools and integrates them in a common user interface. The optimization of CP operation often requires determining the minimum and maximum possible total useful electricity capacity of the plant at the set heat loads of consumers, i.e. the interval over which the CP capacity may vary. The software has been applied to optimize the operating conditions of the Novo-Irkutskaya CP of JSC “Irkutskenergo”. The efficiency of operating condition optimization and the possibility of determining the CP energy characteristics needed for optimization of power system operation are shown.

  4. Operations Optimization of Nuclear Hybrid Energy Systems

    DOE PAGES

    Chen, Jun; Garcia, Humberto E.; Kim, Jong Suk; ...

    2016-08-01

    We propose nuclear hybrid energy systems (NHES) as an effective element for incorporating high penetrations of clean energy. This paper focuses on the operations optimization of two specific NHES configurations to address the variability arising from various markets and renewable generation. Both analytical and numerical approaches are used to obtain the optimization solutions. Key economic figures of merit are evaluated under optimized and constant operations to demonstrate the benefit of the optimization, which also suggests the economic viability of the considered NHES under the proposed operations optimizer. Furthermore, a sensitivity analysis on commodity prices is conducted for a better understanding of the considered NHES.

  5. Operations Optimization of Nuclear Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Garcia, Humberto E.; Kim, Jong Suk

    We propose nuclear hybrid energy systems (NHES) as an effective element for incorporating high penetrations of clean energy. This paper focuses on the operations optimization of two specific NHES configurations to address the variability arising from various markets and renewable generation. Both analytical and numerical approaches are used to obtain the optimization solutions. Key economic figures of merit are evaluated under optimized and constant operations to demonstrate the benefit of the optimization, which also suggests the economic viability of the considered NHES under the proposed operations optimizer. Furthermore, a sensitivity analysis on commodity prices is conducted for a better understanding of the considered NHES.

  6. Space Weather Impacts on Spacecraft Operations: Identifying and Establishing High-Priority Operational Services

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Reid, S.; Tranquille, C.; Evans, H.

    2013-12-01

    Space Weather is a multi-disciplinary and cross-domain system defined as 'The physical and phenomenological state of natural space environments. The associated discipline aims, through observation, monitoring, analysis and modelling, at understanding and predicting the state of the Sun, the interplanetary and planetary environments, and the solar and non-solar driven perturbations that affect them, and also at forecasting and nowcasting the potential impacts on biological and technological systems'. National and Agency-level efforts to provide services addressing these myriad problems, such as ESA's SSA programme, are therefore typically complex and ambitious undertakings to introduce a comprehensive suite of services aimed at a large number and broad range of end users. We focus on some of the particular threats and risks that Space Weather events pose to the Spacecraft Operations community, and the resulting implications in terms of User Requirements. We describe some of the highest-priority service elements identified as being needed by the Operations community, and outline some service components that are presently available or under development. The particular threats and risks often vary according to orbit, so the particular User Needs for Operators at LEO, MEO and GEO are elaborated. The inter-relationship between these needed service elements and existing service components within the broader Space Weather domain is explored. Some high-priority service elements and their potential correlation with Space Weather drivers include: solar array degradation and energetic proton storms; single event upsets at GEO and solar proton events and galactic cosmic rays; surface charging and deep dielectric charging at MEO and radiation belt dynamics; and SEUs at LEO and the South Atlantic Anomaly and its variability. We examine the current capability to provide operational services addressing such threats and identify some advances that the Operations community can expect to benefit

  7. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

    Traditional power plants only utilize about 30 percent of the primary energy that they consume; the rest is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emissions achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice, so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of a practical CHP operational algorithm that minimizes operational costs: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy-to-implement, effective, and reliable hourly building load prediction algorithm.

  8. Surgery scheduling optimization considering real life constraints and comprehensive operation cost of operating room.

    PubMed

    Xiang, Wei; Li, Chong

    2015-01-01

    The Operating Room (OR) is the core sector of hospital expenditure, and its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both under-utilization and over-utilization. A nested Ant Colony Optimization (nested-ACO) incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. Ten days of manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. The comparison shows the advantage of the nested-ACO in several measurements: OR-related time, nurse-related time, variation in resources' working time, and the end time. The nested-ACO, which considers real-life operation constraints such as the difference between the first and following cases, surgery priorities, and fixed nurses in the pre/post-operative stages, is proposed to solve the surgery scheduling optimization problem. The results clearly show the benefit of the nested-ACO in enhancing OR management efficiency and minimizing the comprehensive overall operation cost.

  9. The Improvement of Particle Swarm Optimization: a Case Study of Optimal Operation in Goupitan Reservoir

    NASA Astrophysics Data System (ADS)

    Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui

    2018-02-01

    Due to its weakness in maintaining diversity and reaching the global optimum, standard particle swarm optimization has not performed well in reservoir optimal operation. To address this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to the optimal operation of the Goupitan reservoir shows that the improved method has better accuracy and higher reliability with a small investment.
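
    The general hybrid idea described above can be sketched as follows: run a standard particle swarm optimization and then polish the best particle with the downhill simplex (Nelder-Mead) method. This is a generic sketch on a benchmark function with assumed PSO parameters, not the authors' exact scheme or the Goupitan reservoir model.

    ```python
    # Generic PSO followed by a Nelder-Mead "polish" step; parameters are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    def pso(objective, dim, n_particles=20, iters=100, lb=-5.0, ub=5.0,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(lb, ub, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lb, ub)
            f = np.array([objective(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest

    rosenbrock = lambda z: float((1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2)
    coarse = pso(rosenbrock, dim=2)                               # global search
    polished = minimize(rosenbrock, coarse, method="Nelder-Mead").x  # local refinement
    print("PSO estimate:", coarse, "after simplex polish:", polished)
    ```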

  10. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
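
    A toy illustration of the Fisher-information idea follows: for a model that is linear in its parameters with Gaussian measurement noise, the Fisher information matrix is S^T S / sigma^2, where S is the sensitivity matrix, and D-optimal input shaping maximizes its determinant. The model and the two candidate inputs below are invented for illustration and are not a lithium-ion battery model.

    ```python
    # Compare two candidate input trajectories by the determinant of the Fisher
    # information matrix for a hypothetical linear-in-parameters model
    # y_k = theta1*u_k + theta2*cumsum(u)_k + noise.
    import numpy as np

    def fisher_determinant(u, sigma=0.01):
        S = np.column_stack([u, np.cumsum(u)])   # sensitivities dy/dtheta1, dy/dtheta2
        fim = S.T @ S / sigma**2
        return float(np.linalg.det(fim))

    t = np.linspace(0, 10, 200)
    pulse = np.where((t > 2) & (t < 4), 1.0, 0.0)                       # single pulse input
    rich = np.sin(2 * np.pi * 0.3 * t) + np.sin(2 * np.pi * 1.1 * t)    # multi-frequency input

    print("det(FIM), pulse input:          ", fisher_determinant(pulse))
    print("det(FIM), multi-frequency input:", fisher_determinant(rich))
    # D-optimal input shaping chooses/optimizes u to maximize det(FIM).
    ```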

  11. Optimal quantum operations at zero energy cost

    NASA Astrophysics Data System (ADS)

    Chiribella, Giulio; Yang, Yuxiang

    2017-08-01

    Quantum technologies are developing powerful tools to generate and manipulate coherent superpositions of different energy levels. Envisaging a new generation of energy-efficient quantum devices, here we explore how coherence can be manipulated without exchanging energy with the surrounding environment. We start from the task of converting a coherent superposition of energy eigenstates into another. We identify the optimal energy-preserving operations, both in the deterministic and in the probabilistic scenario. We then design a recursive protocol, wherein a branching sequence of energy-preserving filters increases the probability of success while reaching maximum fidelity at each iteration. Building on the recursive protocol, we construct efficient approximations of the optimal fidelity-probability trade-off, by taking coherent superpositions of the different branches generated by probabilistic filtering. The benefits of this construction are illustrated in applications to quantum metrology, quantum cloning, coherent state amplification, and ancilla-driven computation. Finally, we extend our results to transitions where the input state is generally mixed and we apply our findings to the task of purifying quantum coherence.

  12. Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization

    NASA Astrophysics Data System (ADS)

    Civit Sabate, Carles

    In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The facility in which the optimization is based is the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks provide the ability of decoupling the demand of chilled water from its generation, over the course of a day, from the refrigeration and air-conditioning plants. They can be used to perform demand-side management, and optimization techniques can help to approach their optimal use. The proposed optimization approach provides a fast and reliable methodology of finding the optimal use of the TES tank to reduce energy costs and provides a tool for future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied using simulation with historical data.
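
    A hedged sketch of the kind of linear program involved is shown below: scheduling hourly chiller output against electricity prices with storage-balance constraints, solved with scipy.optimize.linprog. All data, bounds, and constraint choices are synthetic and do not reproduce the UC Irvine CCHP plant model.

    ```python
    # Toy LP: minimize electricity cost of chiller operation while the TES tank
    # absorbs the difference between production and chilled-water demand.
    import numpy as np
    from scipy.optimize import linprog

    hours = 24
    price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, hours))  # $/kWh, synthetic
    demand = np.full(hours, 500.0)                                   # kWh-equivalent load per hour
    capacity, chiller_max = 2000.0, 800.0                            # tank and chiller limits

    # Decision variables: chiller output c_t each hour (tank assumed empty at the start).
    # Constraints: cumulative production >= cumulative demand (tank never goes negative),
    # and cumulative production - cumulative demand <= capacity (tank never overfills).
    lower_tri = np.tril(np.ones((hours, hours)))
    A_ub = np.vstack([-lower_tri, lower_tri])
    b_ub = np.concatenate([-np.cumsum(demand), capacity + np.cumsum(demand)])

    res = linprog(c=price, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, chiller_max)] * hours, method="highs")
    print("optimal operating cost:", res.fun)
    ```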

  13. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).

    PubMed

    Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T

    2016-03-08

    publicly available HM450 data sets. Despite consisting of half as many CpGs compared to existing libraries for whole blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.

  14. Operations Optimization of Hybrid Energy Systems under Variable Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Garcia, Humberto E.

    Hybrid energy systems (HES) have been proposed to be an important element to enable increasing penetration of clean energy. This paper investigates the operations flexibility of HES, and develops a methodology for operations optimization to maximize its economic value based on predicted renewable generation and market information. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value, and is illustrated by numerical results.

  15. Economic optimization of operations for hybrid energy systems under variable markets

    DOE PAGES

    Chen, Jen; Garcia, Humberto E.

    2016-05-21

    We propose hybrid energy systems (HES) as an important element to enable increasing penetration of clean energy. This paper investigates the operations flexibility of HES and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. To compensate for prediction error, a control strategy is designed that operates a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of the operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.

  16. Economic optimization of operations for hybrid energy systems under variable markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jen; Garcia, Humberto E.

    We propose hybrid energy systems (HES) as an important element to enable increasing penetration of clean energy. This paper investigates the operations flexibility of HES and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. To compensate for prediction error, a control strategy is designed that operates a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of the operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.

  17. Optimizing Reservoir Operation to Adapt to the Climate Change

    NASA Astrophysics Data System (ADS)

    Madadgar, S.; Jung, I.; Moradkhani, H.

    2010-12-01

    Climate change and upcoming variation in flood timing necessitate adapting the current rule curves developed for the operation of water reservoirs so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing some possible climate conditions in the 21st century. The objective is to minimize the failure of operation to meet either designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model including the standard operation policy and a global optimization method tunes the current rule curve for 8 GCMs and 2 greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project streamflow for the period 2000-2100 using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all the scenarios and the expected value of the ensemble, is conducted, and performance is assessed using statistical indices including reliability, resilience, vulnerability and sustainability.

  18. Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis

    NASA Astrophysics Data System (ADS)

    Dion, J.-L.; Tawfiq, I.; Chevallier, G.

    2012-01-01

    This work is a contribution to the field of Operational Modal Analysis (OMA), which identifies the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components that are amplitude- and frequency-modulated over a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data that are interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components and structural modes. The proposed detection method is based on the so-called Optimized Spectral Kurtosis and is compared with other definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.

  19. Optimized operation of dielectric laser accelerators: Multibunch

    NASA Astrophysics Data System (ADS)

    Hanuka, Adi; Schächter, Levi

    2018-06-01

    We present a self-consistent analysis to determine the optimal charge, gradient, and efficiency for laser-driven accelerators operating with a train of microbunches. Specifically, we account for the beam-loading reduction on the material occurring at the dielectric-vacuum interface. In the case of a train of microbunches, such a beam-loading effect could be detrimental due to energy spread; however, this may be compensated by a tapered laser pulse. We ultimately propose an optimization procedure with an analytical solution for a group velocity equal to half the speed of light. This optimization results in a maximum efficiency 20% lower than in the single-bunch case, and a total accelerated charge of 10^6 electrons in the train. The approach holds promise for improving the operation of dielectric laser accelerators and may have an impact on emerging laser accelerators driven by high-power optical lasers.

  20. Design and development of bio-inspired framework for reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Asvini, M. Sakthi; Amudha, T.

    2017-12-01

    Frameworks for optimal reservoir operation play an important role in the management of water resources and delivery of economic benefits. Effective utilization and conservation of water from reservoirs helps to manage water deficit periods. The main challenge in reservoir optimization is to design operating rules that can be used to inform real-time decisions on reservoir release. We develop a bio-inspired framework for the optimization of reservoir release to satisfy the diverse needs of various stakeholders. In this work, single-objective optimization and multiobjective optimization problems are formulated using an algorithm known as "strawberry optimization" and tested with actual reservoir data. Results indicate that well planned reservoir operations lead to efficient deployment of the reservoir water with the help of optimal release patterns.

  1. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
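
    As context for the hybrid steepest descent method (HSDM) referenced above, a schematic textbook form of its iteration is sketched below; this is the generic update, not the particular nonexpansive operator proposed in the paper, and the symbols T, Psi, and lambda_n are generic placeholders.

    ```latex
    % Generic HSDM iteration (schematic): T is a nonexpansive operator whose fixed-point
    % set Fix(T) equals the solution set of the first-stage problem, \Psi is the smooth
    % second-stage objective, and (\lambda_n) is a slowly diminishing step-size sequence.
    x_{n+1} \;=\; T(x_n) \;-\; \lambda_{n+1}\,\nabla\Psi\bigl(T(x_n)\bigr),
    \qquad n = 0, 1, 2, \dots
    ```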

  2. Optimal Design and Operation of Permanent Irrigation Systems

    NASA Astrophysics Data System (ADS)

    Oron, Gideon; Walker, Wynn R.

    1981-01-01

    Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are selected at the minimum cost point. The optimization procedure involves a nonlinear, mixed integer approach capable of achieving a variety of optimal solutions leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of the pressure head, consumptive use rates, a smaller flow rate in the pipe system, and outlet (sprinkler or emitter) discharge.

  3. Optimal Reservoir Operation using Stochastic Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Sahu, R.; McLaughlin, D.

    2016-12-01

    Hydropower operations are typically designed to fulfill contracts negotiated with consumers who need reliable energy supplies, despite uncertainties in reservoir inflows. In addition to providing reliable power the reservoir operator needs to take into account environmental factors such as downstream flooding or compliance with minimum flow requirements. From a dynamical systems perspective, the reservoir operating strategy must cope with conflicting objectives in the presence of random disturbances. In order to achieve optimal performance, the reservoir system needs to continually adapt to disturbances in real time. Model Predictive Control (MPC) is a real-time control technique that adapts by deriving the reservoir release at each decision time from the current state of the system. Here an ensemble-based version of MPC (SMPC) is applied to a generic reservoir to determine both the optimal power contract, considering future inflow uncertainty, and a real-time operating strategy that attempts to satisfy the contract. Contract selection and real-time operation are coupled in an optimization framework that also defines a Pareto trade off between the revenue generated from energy production and the environmental damage resulting from uncontrolled reservoir spills. Further insight is provided by a sensitivity analysis of key parameters specified in the SMPC technique. The results demonstrate that SMPC is suitable for multi-objective planning and associated real-time operation of a wide range of hydropower reservoir systems.
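
    A hedged sketch of the ensemble-based receding-horizon idea (SMPC) for a single reservoir follows: optimize a release sequence against an inflow ensemble, apply only the first release, then re-optimize at the next step. The dynamics, bounds, penalties, and inflow ensemble below are invented for illustration and are not the paper's model.

    ```python
    # Toy SMPC step: minimize the ensemble-average of contract shortfall plus spill penalty.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    horizon, n_ens = 12, 20
    s_min, s_max, r_max = 100.0, 1000.0, 80.0    # storage bounds and maximum release
    r_contract = 50.0                             # release standing in for the energy contract

    def expected_cost(releases, storage0, inflow_ensemble):
        """Average over the inflow ensemble of contract shortfall plus uncontrolled spill."""
        releases = np.clip(releases, 0.0, r_max)
        cost = 0.0
        for inflows in inflow_ensemble:
            s = storage0
            for r, q in zip(releases, inflows):
                s = s + q - r
                spill = max(s - s_max, 0.0)
                s = min(max(s, s_min), s_max)
                cost += max(r_contract - r, 0.0) ** 2 + 10.0 * spill
        return cost / len(inflow_ensemble)

    storage = 600.0
    inflow_ens = rng.gamma(shape=2.0, scale=25.0, size=(n_ens, horizon))
    res = minimize(expected_cost, x0=np.full(horizon, r_contract),
                   args=(storage, inflow_ens), method="Nelder-Mead")
    release_now = float(np.clip(res.x[0], 0.0, r_max))  # apply only the first decision
    print("release for the current step:", release_now)
    ```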

  4. Optimizing Biorefinery Design and Operations via Linear Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools

  5. Parametric Optimization of Some Critical Operating System Functions--An Alternative Approach to the Study of Operating Systems Design

    ERIC Educational Resources Information Center

    Sobh, Tarek M.; Tibrewal, Abhilasha

    2006-01-01

    Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…

  6. Fuzzy multiobjective models for optimal operation of a hydropower system

    NASA Astrophysics Data System (ADS)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluating reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for developing a new fuzzy multiobjective hydropower system optimization model through the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and the combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.

  7. A New Tool for Environmental and Economic Optimization of Hydropower Operations

    NASA Astrophysics Data System (ADS)

    Saha, S.; Hayse, J. W.

    2012-12-01

    facility during the demonstration. We compared the optimized virtual operation identified by the toolset to actual operations at the facility for the same time period to evaluate implications of the optimized operational regime on power/revenue generation and environmental performance. Argonne National Laboratory's work was part of a larger "Water-Use-Optimization" project supported by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy, Water Power Program, under Announcement DE-FOA-0000070. The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory ("Argonne"). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government.

  8. 48 CFR 17.604 - Identifying management and operating contracts.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    48 Federal Acquisition Regulations System (2011-10-01): Federal Acquisition Regulation; Contracting Methods and Contract Types; Special Contracting Methods; Management and Operating Contracts; 17.604 Identifying management and operating contracts. A management and operating contract is...

  9. 48 CFR 17.604 - Identifying management and operating contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 Federal Acquisition Regulations System (2010-10-01): Federal Acquisition Regulation; Contracting Methods and Contract Types; Special Contracting Methods; Management and Operating Contracts; 17.604 Identifying management and operating contracts. A management and operating contract is...

  10. Application study of evolutionary operation methods in optimization of process parameters for mosquito coils industry

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Tambunanand, M. M.; Syahputri, K.

    2018-02-01

    Evolutionary Operation (EVOP) is a method designed to be used during routine plant operation to enable high productivity. Quality is one of the critical factors for a company to win the competition. For this reason, product quality research was carried out by gathering the company's production data and making direct observations on the factory floor, especially in the drying department, to identify the problem of high water content in the mosquito coils. PT. X, which produces mosquito coils, attempted to reduce product defects caused by inaccurate operating conditions. One of the parameters of good mosquito coil quality is water content: if the moisture content is too high, the product molds and breaks easily, whereas if it is too low, the product breaks easily and burns for fewer hours. Three factors affect the optimal water content: stirring time, drying temperature and drying time. To obtain the required conditions, the Evolutionary Operation (EVOP) method is used. EVOP is an efficient technique for optimizing two or three experimental variables using two-level factorial designs with a center point. The optimal operating conditions found in the experiment are a stirring time of 20 minutes, a drying temperature of 65°C, and a drying time of 130 minutes. Based on the EVOP analysis, the optimum water content is 6.90%, which approaches the plant's target of 7%.
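
    A minimal sketch of the two-level factorial effect estimation at the core of EVOP is given below; the coded design follows the standard 2^3 layout, but the response values are hypothetical and are not the plant's measured moisture data.

    ```python
    # Estimate main effects from a 2^3 factorial cycle (EVOP-style), using made-up responses.
    import itertools
    import numpy as np

    # Coded levels (-1/+1) for stirring time, drying temperature, drying time.
    design = np.array(list(itertools.product([-1, 1], repeat=3)))
    response = np.array([7.9, 7.2, 7.6, 6.9, 7.4, 6.8, 7.1, 6.5])  # % moisture (hypothetical)

    # Main effect = mean response at +1 minus mean response at -1 = sum(x_i * y_i) / (n/2).
    main_effects = design.T @ response / (len(response) / 2)
    for name, eff in zip(["stirring time", "drying temperature", "drying time"], main_effects):
        print(f"main effect of {name}: {eff:+.3f} % moisture per level change")

    # EVOP repeats small cycles like this around the current operating point and shifts
    # the operating point toward the levels that move the response toward its target.
    ```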

  11. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, mainly because of a lack of knowledge of optimization techniques. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best parameters for their turning operations. This system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied; this modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to find the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while being fast.

  12. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

    Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions and single-reservoir and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem; the global solution equals 1.213 for this same problem. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
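
    A compact, generic Gravitational Search Algorithm sketch for minimizing a benchmark (sphere) function is shown below; the parameter choices (G0, alpha, population size) are illustrative defaults, and the simplified force rule omits refinements such as the shrinking Kbest set, so this is not the exact configuration used in the cited study.

    ```python
    # Generic GSA: agents attract each other with forces proportional to fitness-based masses.
    import numpy as np

    def gsa(objective, dim=4, n_agents=30, iters=200, lb=-10.0, ub=10.0,
            G0=100.0, alpha=20.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, size=(n_agents, dim))
        V = np.zeros_like(X)
        best_x, best_f = None, np.inf
        for t in range(iters):
            fit = np.array([objective(x) for x in X])
            if fit.min() < best_f:
                best_f, best_x = fit.min(), X[fit.argmin()].copy()
            # Masses: normalize fitness so the best agent gets the largest mass.
            worst, best = fit.max(), fit.min()
            m = (fit - worst) / (best - worst + 1e-12)
            M = m / (m.sum() + 1e-12)
            G = G0 * np.exp(-alpha * t / iters)   # gravitational "constant" decays over time
            acc = np.zeros_like(X)
            for i in range(n_agents):             # randomly weighted resultant acceleration
                diff = X - X[i]
                dist = np.linalg.norm(diff, axis=1) + 1e-12
                acc[i] = (rng.random((n_agents, 1)) * G * M[:, None] * diff / dist[:, None]).sum(axis=0)
            V = rng.random(X.shape) * V + acc
            X = np.clip(X + V, lb, ub)
        return best_x, best_f

    sphere = lambda x: float(np.sum(x ** 2))
    x_opt, f_opt = gsa(sphere)
    print("best solution:", x_opt, "objective:", f_opt)
    ```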

  13. AIC identifies optimal representation of longitudinal dietary variables.

    PubMed

    VanBuren, John; Cavanaugh, Joseph; Marshall, Teresa; Warren, John; Levy, Steven M

    2017-09-01

    The Akaike Information Criterion (AIC) is a well-known tool for variable selection in multivariable modeling as well as a tool to help identify the optimal representation of explanatory variables. However, it has been discussed infrequently in the dental literature. The purpose of this paper is to demonstrate the use of AIC in determining the optimal representation of dietary variables in a longitudinal dental study. The Iowa Fluoride Study enrolled children at birth and dental examinations were conducted at ages 5, 9, 13, and 17. Decayed or filled surfaces (DFS) trend clusters were created based on age 13 DFS counts and age 13-17 DFS increments. Dietary intake data (water, milk, 100 percent-juice, and sugar sweetened beverages) were collected semiannually using a food frequency questionnaire. Multinomial logistic regression models were fit to predict DFS cluster membership (n=344). Multiple approaches could be used to represent the dietary data including averaging across all collected surveys or over different shorter time periods to capture age-specific trends or using the individual time points of dietary data. AIC helped identify the optimal representation. Averaging data for all four dietary variables for the whole period from age 9.0 to 17.0 provided a better representation in the multivariable full model (AIC=745.0) compared to other methods assessed in full models (AICs=750.6 for age 9 and 9-13 increment dietary measurements and AIC=762.3 for age 9, 13, and 17 individual measurements). The results illustrate that AIC can help researchers identify the optimal way to summarize information for inclusion in a statistical model. The method presented here can be used by researchers performing statistical modeling in dental research. This method provides an alternative approach for assessing the propriety of variable representation to significance-based procedures, which could potentially lead to improved research in the dental community. © 2017 American
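
    The AIC comparison described above can be sketched generically as follows, using simulated data and ordinary least squares in place of the study's multinomial logistic regression; the Gaussian least-squares form n*ln(RSS/n) + 2k is used, which differs from the full log-likelihood form only by an additive constant. All data and variable names are hypothetical.

    ```python
    # Compare two representations of a dietary exposure (whole-period average vs.
    # separate time points) by AIC, on simulated data.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 344
    x9, x13, x17 = rng.normal(size=(3, n))                   # intake at ages 9, 13, 17
    x_avg = (x9 + x13 + x17) / 3.0                            # averaged representation
    y = 1.0 + 0.8 * x_avg + rng.normal(scale=1.0, size=n)     # outcome driven by the average

    def aic_ols(X, y):
        X = np.column_stack([np.ones(len(y)), X])             # add intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        k = X.shape[1] + 1                                     # coefficients plus error variance
        return len(y) * np.log(rss / len(y)) + 2 * k

    print("AIC, averaged representation:  ", aic_ols(x_avg, y))
    print("AIC, individual time points:   ", aic_ols(np.column_stack([x9, x13, x17]), y))
    # The representation with the smaller AIC is preferred.
    ```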

  14. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
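
    A generic sketch of a genetic algorithm over integer resource levels is shown below; a cheap synthetic cost (resource cost plus a shortfall penalty) stands in for the discrete event simulation objective, and all parameters are assumptions rather than the paper's model.

    ```python
    # Integer-coded GA with tournament selection, uniform crossover, and integer mutation.
    import numpy as np

    rng = np.random.default_rng(2)
    n_resources, pop_size, gens = 5, 40, 60
    low, high = 1, 20
    unit_cost = np.array([3.0, 5.0, 2.0, 4.0, 1.0])
    required = np.array([8, 4, 10, 6, 12])     # capacity below this incurs delay penalties

    def cost(levels):
        shortfall = np.maximum(required - levels, 0)
        return float(unit_cost @ levels + 50.0 * shortfall.sum())

    pop = rng.integers(low, high + 1, size=(pop_size, n_resources))
    for _ in range(gens):
        fitness = np.array([cost(ind) for ind in pop])
        # Tournament selection: keep the cheaper of two random individuals.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Uniform crossover between each parent and a reversed-order mate, then mutation.
        mask = rng.random((pop_size, n_resources)) < 0.5
        children = np.where(mask, parents, parents[::-1])
        mutate = rng.random(children.shape) < 0.1
        children = np.clip(children + mutate * rng.integers(-2, 3, children.shape), low, high)
        pop = children

    best = pop[np.argmin([cost(ind) for ind in pop])]
    print("best resource levels:", best, "cost:", cost(best))
    ```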

  15. Optimization of the bank's operating portfolio

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.; Medvedev, M. A.

    2016-06-01

    The theory of efficient portfolios developed by Markowitz is used to optimize the structure of the types of financial operations of a bank (bank portfolio) in order to increase the profit and reduce the risk. The focus of this paper is to check the stability of the model to errors in the original data.
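
    A hedged illustration of the Markowitz mean-variance allocation mentioned above follows, with three hypothetical bank operation types solved via SLSQP; the expected returns, covariances, and return target are made up and do not come from the paper.

    ```python
    # Minimum-variance allocation subject to a target expected return and full investment.
    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.04, 0.07, 0.12])            # expected returns of three operation types
    cov = np.array([[0.001, 0.0002, 0.0004],
                    [0.0002, 0.004, 0.001],
                    [0.0004, 0.001, 0.010]])     # covariance of returns (risk)
    target = 0.08                                 # required expected portfolio return

    def variance(w):
        return float(w @ cov @ w)

    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
                   {"type": "eq", "fun": lambda w: w @ mu - target}]
    res = minimize(variance, x0=np.full(3, 1 / 3), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 3, constraints=constraints)
    print("weights:", np.round(res.x, 3), "portfolio variance:", res.fun)
    ```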

  16. Optimal reservoir operation policies using novel nested algorithms

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels, and 3) hydropower production, which is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which, in combination with the release discretization required to meet the demands of downstream users, leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic knapsack method for nonlinear problems. The novel idea is to include the nested

  17. A Particle Swarm Optimization Algorithm for Optimal Operating Parameters of VMI Systems in a Two-Echelon Supply Chain

    NASA Astrophysics Data System (ADS)

    Sue-Ann, Goh; Ponnambalam, S. G.

    This paper focuses on the operational issues of a two-echelon single-vendor multiple-buyer supply chain (TSVMBSC) under the vendor managed inventory (VMI) mode of operation. To determine the optimal sales quantity for each buyer in the TSVMBSC, a mathematical model is formulated. From the optimal sales quantity, the optimal sales price can be obtained, which determines the optimal channel profit and the contract price between the vendor and each buyer. All of these parameters depend on the understanding of the revenue sharing between the vendor and the buyers. A Particle Swarm Optimization (PSO) algorithm is proposed for this problem. Solutions obtained from PSO are compared with the best known results reported in the literature.

  18. Identifying the optimal segmentors for mass classification in mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Tomuro, Noriko; Furst, Jacob; Raicu, Daniela S.

    2015-03-01

    In this paper, we present the results of our investigation on identifying the optimal segmentor(s) from an ensemble of weak segmentors, used in a Computer-Aided Diagnosis (CADx) system which classifies suspicious masses in mammograms as benign or malignant. This is an extension of our previous work, where we applied image enhancement techniques with various parameter settings to each suspicious mass (region of interest, ROI) to obtain several enhanced images, then applied segmentation to each image to obtain several contours of a given mass. Each segmentation in this ensemble is essentially a "weak segmentor" because no single segmentation can produce the optimal result for all images. After shape features are computed from the segmented contours, the final classification model is built using logistic regression. The work in this paper focuses on identifying the optimal segmentor(s) from an ensemble mix of weak segmentors. For our purpose, optimal segmentors are those in the ensemble mix which contribute the most to the overall classification rather than the ones that produce high-precision segmentations. To measure the segmentors' contributions, we examined the weights on the features in the derived logistic regression model and computed the average feature weight for each segmentor. The results showed that, while in general the segmentors with higher segmentation success rates had higher feature weights, some segmentors with lower segmentation rates had high classification feature weights as well.

  19. Modeling of District Heating Networks for the Purpose of Operational Optimization with Thermal Energy Storage

    NASA Astrophysics Data System (ADS)

    Leśko, Michał; Bujalski, Wojciech

    2017-12-01

    The aim of this document is to present the topic of modeling district heating systems in order to enable optimization of their operation, with special focus on thermal energy storage in the pipelines. Two mathematical models for simulation of transient behavior of district heating networks have been described, and their results have been compared in a case study. The operational optimization in a DH system, especially if this system is supplied from a combined heat and power plant, is a difficult and complicated task. Finding a global financial optimum requires considering long periods of time and including thermal energy storage possibilities into consideration. One of the most interesting options for thermal energy storage is utilization of thermal inertia of the network itself. This approach requires no additional investment, while providing significant possibilities for heat load shifting. It is not feasible to use full topological models of the networks, comprising thousands of substations and network sections, for the purpose of operational optimization with thermal energy storage, because such models require long calculation times. In order to optimize planned thermal energy storage actions, it is necessary to model the transient behavior of the network in a very simple way - allowing for fast and reliable calculations. Two approaches to building such models have been presented. Both have been tested by comparing the results of simulation of the behavior of the same network. The characteristic features, advantages and disadvantages of both kinds of models have been identified. The results can prove useful for district heating system operators in the near future.

  20. Seasonal-Scale Optimization of Conventional Hydropower Operations in the Upper Colorado System

    NASA Astrophysics Data System (ADS)

    Bier, A.; Villa, D.; Sun, A.; Lowry, T. S.; Barco, J.

    2011-12-01

    Sandia National Laboratories is developing the Hydropower Seasonal Concurrent Optimization for Power and the Environment (Hydro-SCOPE) tool to examine basin-wide conventional hydropower operations at seasonal time scales. This tool is part of an integrated, multi-laboratory project designed to explore different aspects of optimizing conventional hydropower operations. The Hydro-SCOPE tool couples a one-dimensional reservoir model with a river routing model to simulate hydrology and water quality. An optimization engine wraps around this model framework to solve for long-term operational strategies that best meet the specific objectives of the hydrologic system while honoring operational and environmental constraints. The optimization routines are provided by Sandia's open source DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) software. Hydro-SCOPE allows for multi-objective optimization, which can be used to gain insight into the trade-offs that must be made between objectives. The Hydro-SCOPE tool is being applied to the Upper Colorado Basin hydrologic system. This system contains six reservoirs, each with its own set of objectives (such as maximizing revenue, optimizing environmental indicators, meeting water use needs, or other objectives) and constraints. This leads to a large optimization problem with strong connectedness between objectives. The systems-level approach used by the Hydro-SCOPE tool allows simultaneous analysis of these objectives, as well as understanding of potential trade-offs related to different objectives and operating strategies. The seasonal-scale tool will be tightly integrated with the other components of this project, which examine day-ahead and real-time planning, environmental performance, hydrologic forecasting, and plant efficiency.

  1. Optimal Trajectories for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Chen, Robert T. N.

    1996-01-01

    This paper presents a summary of a series of recent analytical studies conducted to investigate One-Engine-Inoperative (OEI) optimal control strategies and the associated optimal trajectories for a twin-engine helicopter in Category-A terminal-area operations. These studies also examine the associated heliport size requirements and the maximum gross weight capability of the helicopter. Using an eight-state, two-control augmented point-mass model representative of the study helicopter, Continued TakeOff (CTO), Rejected TakeOff (RTO), Balked Landing (BL), and Continued Landing (CL) are investigated for both Vertical-TakeOff-and-Landing (VTOL) and Short-TakeOff-and-Landing (STOL) terminal-area operations. The formulation of the nonlinear optimal control problems with considerations for realistic constraints, solution methods for the two-point boundary-value problem, a new real-time generation method for the optimal OEI trajectories, and the main results of this series of trajectory optimization studies are presented. In particular, a new balanced-weight concept for determining the takeoff decision point for VTOL Category-A operations is proposed, extending the balanced-field-length concept used for STOL operations.

  2. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At transmission level, we investigate the problem where an investor-owned, independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters, hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider

  3. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    NASA Astrophysics Data System (ADS)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing has the potential to improve operations efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data driven analysis is

  4. The optimization of nuclear power plants operation modes in emergency situations

    NASA Astrophysics Data System (ADS)

    Zagrebayev, A. M.; Trifonenkov, A. V.; Ramazanov, R. N.

    2018-01-01

    An emergency situation resulting in the need for a temporary reactor trip may occur at a nuclear power plant during normal operation. The paper deals with some operational aspects of nuclear power plants in emergency situations and during the threatened period. Xenon poisoning places limitations on the variety of statements of the problem of calculating the characteristics of a set of optimal reactor power-off controls. The article shows the possibility and feasibility of new sets of optimization tasks for the operation of nuclear power plants under conditions of xenon poisoning in emergency circumstances.

  5. Synergy optimization and operation management on syndicate complementary knowledge cooperation

    NASA Astrophysics Data System (ADS)

    Tu, Kai-Jan

    2014-10-01

    The number of multi-enterprise knowledge cooperations has grown steadily as a result of global innovation competition. I have conducted research based on optimization and operation studies in this article, and reached the conclusion that synergy management is an effective means to break through various management barriers and resolve the chaotic systems of cooperation. Enterprises must communicate their system vision and access complementary knowledge. These are crucial considerations for enterprises to exert their optimization and operation knowledge cooperation synergy to meet global marketing challenges.

  6. A CPS Based Optimal Operational Control System for Fused Magnesium Furnace

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Tian-you; Wu, Zhi-wei; Wang, Hong

    Fused magnesia smelting in a fused magnesium furnace (FMF) is an energy-intensive process with high temperature and comprehensive complexities. Its operational index, namely energy consumption per ton (ECPT), is defined as the electrical energy consumed per ton of acceptable-quality product and is difficult to measure online. Moreover, the dynamics of ECPT cannot be precisely modelled mathematically. The model parameters of the three-phase currents of the electrodes, such as the molten pool level, its variation rate and the resistance, are uncertain and nonlinear functions of the changes in both the smelting process and the raw material composition. In this paper, an integrated optimal operational control algorithm is proposed that is composed of a current set-point control, a current switching control and a self-optimized tuning mechanism. The tight conjoining of and coordination between the computational resources, including the integrated optimal operational control, embedded software, industrial cloud and wireless communication, and the physical resources of the FMF constitutes a cyber-physical system (CPS) based embedded optimal operational control system. Successful application of this system has been made for a production line with ten fused magnesium furnaces in a factory in China, leading to a significantly reduced ECPT.

  7. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
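    As an illustration of the screening step described above, the sketch below computes Sobol' total-order indices with the SALib library on a stand-in objective; the toy model, the variable names and the 0.05 screening threshold are assumptions, not taken from the study. Variables with small total-order indices would be fixed, and the reduced problem searched first to pre-condition the full search.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in "reservoir operation" objective: a nonlinear function of 8 decision
# variables in which only the first three matter much (purely illustrative).
def objective(x):
    return (np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 0.5 * x[:, 2] * x[:, 0]
            + 0.01 * x[:, 3:].sum(axis=1))

problem = {
    "num_vars": 8,
    "names": [f"x{i}" for i in range(8)],
    "bounds": [[0.0, 1.0]] * 8,
}

X = saltelli.sample(problem, 1024)       # Saltelli sampling for Sobol' indices
Y = objective(X)
Si = sobol.analyze(problem, Y)

# Keep only decision variables whose total-order index exceeds a screening threshold;
# the reduced problem is then searched first and used to pre-condition the full search.
keep = [name for name, st in zip(problem["names"], Si["ST"]) if st > 0.05]
print("sensitive variables:", keep)
```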

  8. Nickel-Cadmium Battery Operation Management Optimization Using Robust Design

    NASA Technical Reports Server (NTRS)

    Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador

    1996-01-01

    In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries was very important in order to maintain nominal space flight battery performance. The optimization of existing flight battery operational performance was viewed as a new application of Taguchi Methods.

  9. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

    The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraint effects, designers adopt deployable configurations on the spacecraft that result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, there is a lack of integration of the design optimization systems into operational optimization and the utility maximization of spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools like OpenGL. OpenGL's graphic acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS) and the design result shows that the amount of photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes the design and also operation of the EPS, Attitude Determination and Control System (ADCS), and communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue

  10. Target-based optimization of advanced gravitational-wave detector network operations

    NASA Astrophysics Data System (ADS)

    Szölgyén, Á.; Dálya, G.; Gondán, L.; Raffai, P.

    2017-04-01

    We introduce two novel time-dependent figures of merit for both online and offline optimizations of advanced gravitational-wave (GW) detector network operations with respect to (i) detecting continuous signals from known source locations and (ii) detecting GWs of neutron star binary coalescences from known local galaxies, which thereby have the highest potential for electromagnetic counterpart detection. For each of these scientific goals, we characterize an N-detector network, and all its (N - 1)-detector subnetworks, to identify subnetworks and individual detectors (key contributors) that contribute the most to achieving the scientific goal. Our results show that aLIGO-Hanford is expected to be the key contributor in 2017 to the goal of detecting GWs from the Crab pulsar within the network of LIGO and Virgo detectors. For the same time period and for the same network, both LIGO detectors are key contributors to the goal of detecting GWs from the Vela pulsar, as well as to detecting signals from 10 high-interest pulsars. Key contributors to detecting continuous GWs from the Galactic Center can only be identified for finite time intervals within each sidereal day with either the 3-detector network of the LIGO and Virgo detectors in 2017, or the 4-detector network of the LIGO, Virgo, and KAGRA detectors in 2019-2020. Characterization of the LIGO-Virgo detectors with respect to goal (ii) identified the two LIGO detectors as key contributors. Additionally, for all analyses, we identify time periods within a day when lock losses or scheduled service operations could result in the least amount of signal-to-noise or transient detection probability loss for a detector network.

  11. System and method of cylinder deactivation for optimal engine torque-speed map operation

    DOEpatents

    Sujan, Vivek A; Frazier, Timothy R; Follen, Kenneth; Moon, Suk-Min

    2014-11-11

    This disclosure provides a system and method for determining cylinder deactivation in a vehicle engine to optimize fuel consumption while providing the desired or demanded power. In one aspect, data indicative of terrain variation are utilized in determining a vehicle target operating state. An optimal active cylinder distribution and corresponding fueling are determined from a recommendation, made by a supervisory agent monitoring the operating state of the vehicle, of a subset of the total number of cylinders, together with a determination of which number of cylinders provides the optimal fuel consumption. Once the optimal cylinder number is determined, a transmission gear shift recommendation is provided in view of the determined active cylinder distribution and target operating state.
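    A minimal sketch of the cylinder-count selection logic described in this record might look as follows; the per-cylinder power limit and the BSFC-versus-load curve are invented placeholders, not values from the patent:

```python
def choose_active_cylinders(power_demand_kw, total_cyl=6, max_power_per_cyl=25.0):
    """Pick the number of active cylinders that can meet the demanded power at the
    lowest fuel rate.  The BSFC curve below is a made-up stand-in: per-cylinder
    efficiency improves as each active cylinder carries a higher load fraction."""
    best = None
    for n in range(1, total_cyl + 1):
        if n * max_power_per_cyl < power_demand_kw:
            continue                                       # cannot meet demand
        load_fraction = power_demand_kw / (n * max_power_per_cyl)
        bsfc = 320.0 - 120.0 * load_fraction               # g/kWh, illustrative
        fuel_rate = bsfc * power_demand_kw / 1000.0        # kg/h
        if best is None or fuel_rate < best[1]:
            best = (n, fuel_rate)
    return best

# For a 60 kW demand the sketch deactivates half of the cylinders.
print(choose_active_cylinders(60.0))
```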

  12. An ant colony optimization based algorithm for identifying gene regulatory elements.

    PubMed

    Liu, Wei; Chen, Hanwu; Chen, Ling

    2013-08-01

    It is one of the most important tasks in bioinformatics to identify the regulatory elements in gene sequences. Most of the existing algorithms for identifying regulatory elements are inclined to converge to a local optimum, and have high time complexity. Ant Colony Optimization (ACO) is a meta-heuristic method based on swarm intelligence and is derived from a model inspired by the collective foraging behavior of real ants. Taking advantage of ACO traits such as self-organization and robustness, this paper designs and implements an ACO-based algorithm named ACRI (ant-colony-regulatory-identification) for identifying all possible binding sites of transcription factors in the upstream regions of co-expressed genes. To accelerate the ants' searching process, a strategy of local optimization is presented to adjust the ants' start positions on the searched sequences. By exploiting the powerful optimization ability of ACO, the algorithm ACRI can not only improve the precision of the results, but also achieve very high speed. Experimental results on real-world datasets show that ACRI can outperform other traditional algorithms in terms of speed and solution quality. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  14. Near-Optimal Operation of Dual-Fuel Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Chou, H. C.; Bowles, J. V.

    1996-01-01

    A near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, the sensitivity to perturbations in volume needs to be taken into consideration as well as weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.

  15. Petroleum refinery operational planning using robust optimization

    NASA Astrophysics Data System (ADS)

    Leiras, A.; Hamacher, S.; Elkamel, A.

    2010-12-01

    In this article, the robust optimization methodology is applied to deal with uncertainties in the prices of saleable products, operating costs, product demand, and product yield in the context of refinery operational planning. A numerical study demonstrates the effectiveness of the proposed robust approach. The benefits of incorporating uncertainty in the different model parameters were evaluated in terms of the cost of ignoring uncertainty in the problem. The calculations suggest that this benefit is equivalent to 7.47% of the deterministic solution value, which indicates that the robust model may offer advantages to those involved with refinery operational planning. In addition, the probability bounds of constraint violation are calculated to help the decision-maker adopt a more appropriate parameter to control robustness and judge the tradeoff between conservatism and total profit.

  16. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.

  17. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.

  18. Optimal Trajectories and Control Strategies for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Zhao, Yi-Yuan; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    Engine failure represents a major safety concern to helicopter operations, especially in the critical flight phases of takeoff and landing from/to small, confined areas. As a result, the JAA and FAA both certificate a transport helicopter as either Category-A or Category-B according to the ability to continue its operations following engine failures. A Category-B helicopter must be able to land safely in the event of one or all engine failures. There is no requirement, however, for continued flight capability. In contrast, Category-A certification, which applies to multi-engine transport helicopters with independent engine systems, requires that they continue the flight with one engine inoperative (OEI). These stringent requirements, while permitting its operations from rooftops and oil rigs and flight to areas where no emergency landing sites are available, restrict the payload of a Category-A transport helicopter to a value safe for continued flight as well as for landing with one engine inoperative. The current certification process involves extensive flight tests, which are potentially dangerous, costly, and time consuming. These tests require the pilot to simulate engine failures at increasingly critical conditions. Flight manuals based on these tests tend to provide very conservative recommendations with regard to maximum takeoff weight or required runway length. There are very few theoretical studies on this subject to identify the fundamental parameters and tradeoff factors involved. Furthermore, a capability for real-time generation of OEI optimal trajectories is very desirable for providing timely cockpit display guidance to assist the pilot in reducing his workload and to increase safety in a consistent and reliable manner. A joint research program involving NASA Ames Research Center, the FAA, and the University of Minnesota is being conducted to determine OEI optimal control strategies and the associated optimal trajectories for continued takeoff (CTO

  19. Economic Optimization Analysis of Chengdu Electric Community Bus Operation

    NASA Astrophysics Data System (ADS)

    Yidong, Wang; Yun, Cai; Zhengping, Tan; Xiong, Wan

    2018-03-01

    In recent years, the government has strongly supported and promoted electric vehicles and has given priority to demonstration and popularization in the field of public transport. The economy of public transport operations has drawn increasing attention. In this paper, the Chengdu wireless-charging pure electric community bus is used as the research object. The battery, air conditioning, driver behavior and other factors influencing economy are analyzed, and the operation plan is optimized through case data analysis; reasonable battery matching and modes of operation help operators effectively reduce operating costs and improve economic efficiency.

  20. Application of dragonfly algorithm for optimal performance analysis of process parameters in turn-mill operations- A case study

    NASA Astrophysics Data System (ADS)

    Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT

    2018-02-01

    Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto optimal solutions. This work focuses on optimal multi-response evaluation of process parameters in generating responses like surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) while performing tangential and orthogonal turn-mill processes on an A-axis Computer Numerical Control vertical milling center. Tool speed, feed rate and depth of cut are considered as process parameters; brass material is machined under dry conditions with high-speed steel end milling cutters using a Taguchi design of experiments (DOE). A meta-heuristic, the dragonfly algorithm, is used to optimize the multiple objectives ‘Ra’, ‘H’ and ‘Vib’ and identify the optimal multi-response process parameter combination. Later, the results obtained from the multi-objective dragonfly algorithm (MODA) are compared with another multi-response optimization technique, viz. grey relational analysis (GRA).

  1. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process of the micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model, and use the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectory of the inverters.

  2. Online Optimization Method for Operation of Generators in a Micro Grid

    NASA Astrophysics Data System (ADS)

    Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi

    Recently, many studies and developments concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads and a storage battery, is expected to be one of the new operation systems for distributed generation. However, since precipitous load fluctuations occur in a micro grid because of its smaller capacity compared with a conventional power system, high-accuracy load forecasting and a control scheme to balance supply and demand are needed. Namely, it is necessary to improve the precision of operation in the micro grid by observing load fluctuations and correcting the start-stop schedule and output of generators online. However, it is not easy to determine the operation schedule of each generator in a short time, because the problem of determining the start-up, shut-down and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after picking up all unit commitment patterns of each generator that satisfy the minimum up-time and minimum down-time constraints by using the enumeration method, the optimal schedule and output of the generators are determined under the other operational constraints by using PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.

  3. Study on Operation Optimization of Pumping Station's 24 Hours Operation under Influences of Tides and Peak-Valley Electricity Prices

    NASA Astrophysics Data System (ADS)

    Yi, Gong; Jilin, Cheng; Lihua, Zhang; Rentian, Zhang

    2010-06-01

    According to different processes of tides and peak-valley electricity prices, this paper determines the optimal start-up time in a pumping station's 24-hour operation for the rating state and the adjusting blade angle state respectively, based on the optimization objective function and optimization model for a single pump unit's 24-hour operation, taking JiangDu No.4 Pumping Station as an example. The paper also proposes the following regularities between the optimal start-up time of the pumping station and the processes of tides and peak-valley electricity prices on each day within a month: (1) In both the rating and adjusting blade angle states, the optimal start-up time in the station's 24-hour operation depends on the tide generation of the same day and varies with the tidal process. There are mainly two kinds of optimal start-up time: the time of tide generation and 12 hours after it. (2) In the rating state, the optimal start-up time on each day in a month exhibits a rule of symmetry from the 29th to the 28th of the next month in the lunar calendar. The time of tide generation usually falls in the period of either the peak or the valley electricity price. A higher electricity price corresponds to a higher minimum unit cost of water pumping, which means that the minimum unit cost of water pumping depends on the peak-valley electricity price at the time of tide generation on the same day. (3) In the adjusting blade angle state, the minimum unit cost of water pumping in the station's 24-hour operation depends on the process of peak-valley electricity prices, and 4.85%-5.37% of the minimum unit cost of water pumping is saved compared with the rating state.
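    The start-up-time selection can be illustrated by a small scan of all 24 candidate start hours against a peak-valley tariff; the tariff profile, pump power and pumping duration below are made-up numbers, not data from JiangDu No.4 Pumping Station:

```python
import numpy as np

# Illustrative time-of-use electricity price [currency/kWh] for each hour of the day
price = np.array([0.3] * 7 + [1.0] * 4 + [0.6] * 6 + [1.0] * 4 + [0.3] * 3)

def pumping_cost(start_hour, hours_needed=10, power_kw=800.0):
    """Cost of running the pump for `hours_needed` consecutive hours from `start_hour`."""
    hours = (start_hour + np.arange(hours_needed)) % 24
    return power_kw * price[hours].sum()

# Scan every possible start hour and keep the cheapest one
costs = {h: pumping_cost(h) for h in range(24)}
best = min(costs, key=costs.get)
print(f"cheapest start hour: {best}, cost: {costs[best]:.0f}")
```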

  4. Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.

    PubMed

    Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas

    2011-06-24

    This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides the capability to improve transient performance and to fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Minimizing the health and climate impacts of emissions from heavy-duty public transportation bus fleets through operational optimization.

    PubMed

    Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J

    2013-04-16

    In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of the emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and the exposure potential of bus routes can be exploited through optimization (e.g., how vehicles are assigned to routes) to minimize these impacts as well as operating costs. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from worst to best case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone, risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging.
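    One simple way to pose the vehicle-to-route assignment that this study optimizes is as a linear assignment problem; the sketch below minimizes an emissions-times-exposure proxy with the Hungarian algorithm, using invented emission rates, exposure weights and route lengths rather than the study's data:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Assumed per-vehicle emission rates [g/km], per-route exposure weights
# (population intake potential) and route lengths [km]; all values illustrative.
emission_rate = np.array([1.2, 0.8, 0.3, 0.9])       # one entry per bus
exposure_weight = np.array([3.0, 1.0, 2.0, 0.5])     # one entry per route
route_length = np.array([120.0, 200.0, 90.0, 150.0])

# Health-impact proxy if bus i is assigned to route j
impact = np.outer(emission_rate, exposure_weight * route_length)

rows, cols = linear_sum_assignment(impact)           # minimise the total impact
for bus, route in zip(rows, cols):
    print(f"bus {bus} -> route {route}")
print("total impact proxy:", impact[rows, cols].sum())
```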

  6. Methods and devices for optimizing the operation of a semiconductor optical modulator

    DOEpatents

    Zortman, William A.

    2015-07-14

    A semiconductor-based optical modulator includes a control loop to control and optimize the modulator's operation for relatively high data rates (above 1 GHz) and/or relatively high voltage levels. Both the amplitude of the modulator's driving voltage and the bias of the driving voltage may be adjusted using the control loop. Such adjustments help to optimize the operation of the modulator by reducing the number of errors present in a modulated data stream.

  7. Nonlinear Burn Control and Operating Point Optimization in ITER

    NASA Astrophysics Data System (ADS)

    Boyer, Mark; Schuster, Eugenio

    2013-10-01

    Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).

  8. An Optimization Study of Hot Stamping Operation

    NASA Astrophysics Data System (ADS)

    Ghoo, Bonyoung; Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu; Averill, Ron

    2010-06-01

    In the present study, 3-dimensional finite element analyses of hot-stamping processes for an Audi B-pillar product are conducted using JSTAMP/NV and HEEDS. Special attention is paid to the optimization of simulation technology coupled with thermal-mechanical formulations. Numerical simulation based on FEM technology and optimization design using the hybrid adaptive SHERPA algorithm are applied to the hot stamping operation to improve productivity. The robustness of the SHERPA algorithm is demonstrated by the results of the benchmark example. The SHERPA algorithm is shown to be far superior to the GA (Genetic Algorithm) in terms of efficiency, with a calculation time about 7 times shorter than that of the GA. The SHERPA algorithm shows high performance in a large-scale problem having a complicated design space and long calculation time.

  9. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    NASA Astrophysics Data System (ADS)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents an optimal production and distribution management approach for structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem where the objective is to minimize the overall cost of the integrated DHS while satisfying the operation constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
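    A toy version of the MILP ingredients mentioned above (piecewise linear production cost, binary on/off states and a stairwise start-up cost) can be written with PuLP as follows; the single plant, segment data and demand profile are illustrative assumptions, not the integrated multi-branch model of the paper:

```python
import pulp

# Toy single-plant dispatch: piecewise linear production cost, a binary on/off
# state and a start-up cost, scheduled against an hourly heat demand.
hours = range(6)
demand = [40, 55, 90, 120, 80, 50]            # MW heat demand
seg_cap = [50, 50, 50]                        # capacity of each cost segment [MW]
seg_cost = [20, 28, 40]                       # marginal cost of each segment [EUR/MWh]
startup_cost = 500.0

prob = pulp.LpProblem("district_heating_dispatch", pulp.LpMinimize)
q = pulp.LpVariable.dicts("q", (hours, range(3)), lowBound=0)      # segment outputs
on = pulp.LpVariable.dicts("on", hours, cat="Binary")
start = pulp.LpVariable.dicts("start", hours, cat="Binary")

prob += (pulp.lpSum(seg_cost[s] * q[t][s] for t in hours for s in range(3))
         + pulp.lpSum(startup_cost * start[t] for t in hours))

for t in hours:
    prob += pulp.lpSum(q[t][s] for s in range(3)) >= demand[t]     # meet heat demand
    for s in range(3):
        prob += q[t][s] <= seg_cap[s] * on[t]                      # produce only when on
    prev_on = on[t - 1] if t > 0 else 0                            # assume plant starts off
    prob += start[t] >= on[t] - prev_on                            # stairwise start-up logic

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([round(sum(q[t][s].value() for s in range(3))) for t in hours])
```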

  10. Assessing the Value of Information for Identifying Optimal Floodplain Management Portfolios

    NASA Astrophysics Data System (ADS)

    Read, L.; Bates, M.; Hui, R.; Lund, J. R.

    2014-12-01

    Floodplain management is a complex portfolio problem that can be analyzed from an integrated perspective incorporating traditional structural and nonstructural options. One method to identify effective strategies for preparing for, responding to, and recovering from floods is to optimize for a portfolio of temporary (emergency) and permanent floodplain management options. A risk-based optimization approach to this problem assigns probabilities to specific flood events and calculates the associated expected damages. This approach is currently limited by: (1) the assumption of perfect flood forecast information, i.e. implementing temporary management activities according to the actual flood event may differ from optimizing based on forecasted information, and (2) the inability to assess system resilience across a range of possible future events (risk-centric approach). Resilience is defined here as the ability of a system to absorb and recover from a severe disturbance or extreme event. In our analysis, resilience is a system property that requires integration of physical, social, and information domains. This work employs a 3-stage linear program to identify the optimal mix of floodplain management options using conditional probabilities to represent perfect and imperfect flood stages (forecast vs. actual events). We assess the value of information in terms of minimizing damage costs for two theoretical cases - urban and rural systems. We use portfolio analysis to explore how the set of optimal management options differs depending on whether the goal is for the system to be risk-averse to a specified event or resilient over a range of events.

  11. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and the most sensitive parameters were those related to biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operation condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operation condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10.

  12. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP

    PubMed Central

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as minimum spanning tree, traveling salesman problem, and quadratic assignment problem. Basic ACO has the drawbacks of trapping into local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator have jumping ability and global convergence, and local search has the ability to speed up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, a mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator are used to increase the diversity of the ant population from time to time, and local search is used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality. PMID:27999590

  13. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.

    PubMed

    Mohsen, Abdulqader M

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as minimum spanning tree, traveling salesman problem, and quadratic assignment problem. Basic ACO has the drawbacks of trapping into local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator have jumping ability and global convergence, and local search has the ability to speed up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, a mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator are used to increase the diversity of the ant population from time to time, and local search is used to exploit the current search area efficiently. The comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.
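    To make the hybrid concrete, the sketch below is a compact ACO for a random TSP instance with an annealing-style acceptance of 2-opt mutations; it follows the general recipe of the abstract but is not the authors' implementation, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
cities = rng.random((n, 2))
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1) + np.eye(n)

def tour_length(tour):
    return dist[tour, np.roll(tour, -1)].sum()

def construct_tour(pheromone, alpha=1.0, beta=3.0):
    """One ant builds a tour probabilistically from pheromone and 1/distance."""
    tour = [int(rng.integers(n))]
    unvisited = set(range(n)) - {tour[0]}
    while unvisited:
        i = tour[-1]
        cand = np.array(sorted(unvisited))
        w = pheromone[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
        tour.append(int(rng.choice(cand, p=w / w.sum())))
        unvisited.remove(tour[-1])
    return np.array(tour)

def mutate(tour):
    """2-opt style segment reversal used to keep the ant population diverse."""
    a, b = sorted(rng.integers(0, n, size=2))
    new = tour.copy()
    new[a:b + 1] = new[a:b + 1][::-1]
    return new

pheromone = np.ones((n, n))
temperature, cooling = 1.0, 0.95
best_tour = construct_tour(pheromone)
best_len = tour_length(best_tour)
for _ in range(200):
    tours = [construct_tour(pheromone) for _ in range(10)]
    # simulated-annealing flavoured acceptance of mutated tours
    for k, t in enumerate(tours):
        m = mutate(t)
        delta = tour_length(m) - tour_length(t)
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            tours[k] = m
    temperature *= cooling
    # evaporation, then deposit on the edges of the best tour found so far
    pheromone *= 0.9
    it_best = min(tours, key=tour_length)
    if tour_length(it_best) < best_len:
        best_tour, best_len = it_best, tour_length(it_best)
    for i, j in zip(best_tour, np.roll(best_tour, -1)):
        pheromone[i, j] += 1.0 / best_len
        pheromone[j, i] += 1.0 / best_len
print(round(best_len, 3))
```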

  14. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.

  15. Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Welch, Greg

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. This algorithm also can be extended to other Kalman filters for measurement subspace selection.
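    The selection idea can be illustrated with a generalized symmetric eigenproblem that ranks measurement directions by signal-to-noise; the sketch below uses scipy.linalg.eigh on stand-in covariances, and the specific criterion (H P Hᵀ versus R) is an assumption for illustration rather than the paper's exact formulation:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_state, n_meas, k = 10, 40, 5                 # keep only k measurement directions

H = rng.standard_normal((n_meas, n_state))     # measurement Jacobian (stand-in)
P = np.diag(rng.uniform(0.1, 2.0, n_state))    # forecast state covariance (stand-in)
R = np.diag(rng.uniform(0.5, 1.5, n_meas))     # measurement noise covariance

# Generalized eigenproblem: directions that maximise signal (H P H^T) relative
# to noise (R).  eigh returns eigenvalues in ascending order, so take the last k.
signal = H @ P @ H.T
w, V = eigh(signal, R)
T = V[:, -k:]                                  # projection onto the informative subspace

# An ensemble Kalman filter could then be run with the k projected pseudo-measurements
# T.T @ y and noise covariance T.T @ R @ T instead of all n_meas raw values.
print("retained fraction of generalized variance:",
      round(w[-k:].sum() / w.sum(), 3))
```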

  16. Optimizing and controlling earthmoving operations using spatial technologies

    NASA Astrophysics Data System (ADS)

    Alshibani, Adel

    This thesis presents a model designed for optimizing, tracking, and controlling earthmoving operations. The proposed model utilizes Genetic Algorithm (GA), Linear Programming (LP), and spatial technologies including Global Positioning Systems (GPS) and Geographic Information Systems (GIS) to support the management functions of the developed model. The model assists engineers and contractors in selecting near-optimum crew formations in the planning phase and during construction, using GA and LP supported by the Pathfinder Algorithm developed in a GIS environment. GA is used in conjunction with a set of rules developed to accelerate the optimization process and to avoid generating and evaluating hypothetical and unrealistic crew formations. LP is used to determine quantities of earth to be moved from different borrow pits and to be placed at different landfill sites to meet project constraints and to minimize the cost of these earthmoving operations. On the one hand, GPS is used for onsite data collection and for tracking construction equipment in near real-time. On the other hand, GIS is employed to automate data acquisition and to analyze the collected spatial data. The model is also capable of reconfiguring crew formations dynamically during the construction phase while site operations are in progress. The optimization of the crew formation considers: (1) construction time, (2) construction direct cost, or (3) construction total cost. The model is also capable of generating crew formations to meet, as close as possible, specified time and/or cost constraints. In addition, the model supports tracking and reporting of project progress utilizing the earned-value concept and the project ratio method with modifications that allow for more accurate forecasting of project time and cost at set future dates and at completion. The model is capable of generating graphical and tabular reports. The developed model has been implemented in prototype software, using Object

  17. [Workflow management in the operating room. Analysis of potentials for optimizing efficiency at a university hospital].

    PubMed

    Welker, A; Wolcke, B; Schleppers, A; Schmeck, S B; Focke, U; Gervais, H W; Schmeck, J

    2010-10-01

    The introduction of the diagnosis-related groups reimbursement system has increased cost pressures. Due to the interaction of many different professional groups, analysis and optimization of internal coordination and scheduling in the operating room (OR) are mandatory. The aim of this study was to analyze the processes at a university hospital in order to optimize strategies by identifying potential weak points. Over a period of 6 weeks before and 4 weeks after the intervention, process time intervals in the OR of a tertiary care hospital (university hospital) were documented on a structured data collection sheet. The main reason for the lack of labor efficiency was underuse of OR capacity. Multifactorial reasons, particularly in the management of perioperative interfaces, led to vacant ORs. A significant deficit was found in the use of OR capacity at the end of the daily OR schedule. After harmonization of the working hours of the different staff groups and implementation of several other changes, an increase in efficiency could be verified. These results indicate that optimization of perioperative processes contributes considerably to the success of OR organization. Additionally, the implementation of standard operating procedures and a generally accepted OR statute are mandatory. In this way an efficient OR management can contribute to the economic success of a hospital.

  18. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.

  19. Energy and operation management of a microgrid using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Radosavljević, Jordan; Jevtić, Miroljub; Klimenta, Dardan

    2016-05-01

    This article presents an efficient algorithm based on particle swarm optimization (PSO) for energy and operation management (EOM) of a microgrid including different distributed generation units and energy storage devices. The proposed approach employs PSO to minimize the total energy and operating cost of the microgrid via optimal adjustment of the control variables of the EOM, while satisfying various operating constraints. Owing to the stochastic nature of energy produced from renewable sources, i.e. wind turbines and photovoltaic systems, as well as load uncertainties and market prices, a probabilistic approach in the EOM is introduced. The proposed method is examined and tested on a typical grid-connected microgrid including fuel cell, gas-fired microturbine, wind turbine, photovoltaic and energy storage devices. The obtained results prove the efficiency of the proposed approach to solve the EOM of the microgrids.
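    A bare-bones global-best PSO applied to a single-hour dispatch problem gives a feel for the approach; the unit set, cost coefficients, bounds and penalty weight below are invented, and the real EOM of the article adds storage, probabilistic renewables and many more constraints:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative single-hour dispatch: decision variables are the set-points of a
# fuel cell, a microturbine and the grid exchange; bounds and prices are made up.
lower = np.array([0.0, 0.0, -30.0])        # kW (negative grid value = export)
upper = np.array([30.0, 30.0, 30.0])
cost_coeff = np.array([0.12, 0.18, 0.25])  # cost per kWh of each source
demand = 55.0                              # residual load after renewables [kW]

def total_cost(x):
    balance_violation = abs(x.sum() - demand)
    return cost_coeff @ np.abs(x) + 1e3 * balance_violation   # penalty for imbalance

# Plain global-best PSO
n_particles, n_iter = 30, 200
pos = rng.uniform(lower, upper, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([total_cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)
    val = np.array([total_cost(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("dispatch [kW]:", gbest.round(2), "cost:", round(total_cost(gbest), 2))
```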

  20. Optimal line drop compensation parameters under multi-operating conditions

    NASA Astrophysics Data System (ADS)

    Wan, Yuan; Li, Hang; Wang, Kai; He, Zhe

    2017-01-01

    Line Drop Compensation (LDC) is a main function of Reactive Current Compensation (RCC), which is developed to improve voltage stability. While LDC benefits voltage, it may deteriorate the small-disturbance rotor angle stability of the power system. In the present paper, an intelligent algorithm that combines a Genetic Algorithm (GA) and a Backpropagation Neural Network (BPNN) is proposed to optimize the parameters of LDC. The objective function proposed in the present paper takes into consideration the voltage deviation and the minimal damping ratio of power system oscillations under multiple operating conditions. A simulation based on the middle area of the Jiangxi province power system is used to demonstrate the intelligent algorithm. The optimization result shows that the coordinately optimized parameters can meet the multi-operating-condition requirements and improve voltage stability as much as possible while guaranteeing a sufficient damping ratio.

  1. Optimization of the Brillouin operator on the KNL architecture

    NASA Astrophysics Data System (ADS)

    Dürr, Stephan

    2018-03-01

    Experiences with optimizing the matrix-times-vector application of the Brillouin operator on the Intel KNL processor are reported. Without adjustments to the memory layout, performance figures of 360 Gflop/s in single and 270 Gflop/s in double precision are observed. This is with Nc = 3 colors, Nv = 12 right-hand-sides, Nthr = 256 threads, on lattices of size 32³ × 64, using exclusively OMP pragmas. Interestingly, the same routine performs quite well on Intel Core i7 architectures, too. Some observations on the much harder Wilson fermion matrix-times-vector optimization problem are added.

  2. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhao, Changhong; Zamzam, Admed S.

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and the coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
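    The distributed solver can be illustrated with a two-agent consensus ADMM on one coupling variable (the pump's electric power); the quadratic stand-in costs and closed-form updates below are assumptions for illustration, not the paper's OWPF formulation:

```python
# Two agents share one coupling variable: the electric power drawn by a water pump.
# The power operator prefers a value near p_grid_pref (grid conditions), the water
# operator near p_water_pref (tank/demand needs).  Quadratic costs are stand-ins.
a_power, p_grid_pref = 2.0, 40.0      # power-side cost:  a_power * (x - p_grid_pref)**2
a_water, p_water_pref = 1.0, 70.0     # water-side cost:  a_water * (z - p_water_pref)**2
rho = 1.0                             # ADMM penalty parameter

x = z = u = 0.0
for _ in range(100):
    # each operator minimises its own cost plus the augmented-Lagrangian term;
    # for quadratics the minimisers are available in closed form
    x = (2 * a_power * p_grid_pref + rho * (z - u)) / (2 * a_power + rho)
    z = (2 * a_water * p_water_pref + rho * (x + u)) / (2 * a_water + rho)
    u = u + x - z                     # scaled dual update enforcing x == z
print(round(x, 2), round(z, 2))       # both converge to the consensus pump power
```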

  3. A method for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Ai, Xueshan; Dong, Zuo; Mo, Mingzhu

    2017-04-01

    Optimal reservoir operation is in general a multi-objective problem. In real life, most reservoir operation optimization problems involve conflicting objectives for which no single solution can simultaneously optimize all purposes; rather, a set of well-distributed non-inferior solutions, or Pareto frontier, exists. On the other hand, most reservoir operating rules pursue greater social and economic benefits at the expense of the ecological environment, resulting in the destruction of riverine ecology and a reduction of aquatic biodiversity. To overcome these drawbacks, this study developed a multi-objective model for reservoir operation with the conflicting functions of hydroelectric energy generation, irrigation and ecological protection. To solve the model, whose objectives are to maximize energy production and to maximize the water demand satisfaction rates of irrigation and ecology, we proposed a multi-objective optimization method with variable penalty coefficients (VPC), which integrates dynamic programming (DP) with discrete differential dynamic programming (DDDP), to generate well-distributed non-inferior solutions along the Pareto front by changing the penalty coefficients of the different objectives. This method was applied over the course of a year to an existing Chinese reservoir named Donggu, a multi-annual storage reservoir with multiple purposes. The case study results showed clear relationships between any two of the objectives and a good set of Pareto optimal solutions, which provide a reference for reservoir decision makers.
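
    The variable-penalty-coefficient idea can be illustrated on a toy two-objective problem: sweeping the penalty weight attached to one objective traces a set of non-inferior solutions. The sketch below uses a simple grid search and invented objective functions; the paper itself relies on DP/DDDP and a real reservoir model.

```python
# Toy illustration of the variable-penalty-coefficient idea: sweeping the
# penalty weight on a second objective traces a set of non-inferior solutions.
# The two objective functions are made-up stand-ins, not the reservoir model.
import numpy as np

def energy(x):        # benefit to maximize (toy)
    return -(x - 0.7) ** 2 + 1.0

def eco_deficit(x):   # shortfall to minimize (toy)
    return (x - 0.3) ** 2

x_grid = np.linspace(0.0, 1.0, 1001)   # candidate release fractions
for lam in [0.0, 0.5, 1.0, 2.0, 5.0]:  # penalty coefficient on the deficit
    scores = energy(x_grid) - lam * eco_deficit(x_grid)
    x_best = x_grid[np.argmax(scores)]
    print(f"lambda={lam:>4}: x={x_best:.2f}, "
          f"energy={energy(x_best):.3f}, deficit={eco_deficit(x_best):.3f}")
```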

  4. Modeling Reservoir-River Networks in Support of Optimizing Seasonal-Scale Reservoir Operations

    NASA Astrophysics Data System (ADS)

    Villa, D. L.; Lowry, T. S.; Bier, A.; Barco, J.; Sun, A.

    2011-12-01

    HydroSCOPE (Hydropower Seasonal Concurrent Optimization of Power and the Environment) is a seasonal time-scale tool for scenario analysis and optimization of reservoir-river networks. Developed in MATLAB, HydroSCOPE is an object-oriented model that simulates basin-scale dynamics with an objective of optimizing reservoir operations to maximize revenue from power generation, reliability in the water supply, environmental performance, and flood control. HydroSCOPE is part of a larger toolset that is being developed through a Department of Energy multi-laboratory project. This project's goal is to provide conventional hydropower decision makers with better information to execute their day-ahead and seasonal operations and planning activities by integrating water balance and operational dynamics across a wide range of spatial and temporal scales. This presentation details the modeling approach and functionality of HydroSCOPE. HydroSCOPE consists of a river-reservoir network model and an optimization routine. The river-reservoir network model simulates the heat and water balance of river-reservoir networks for time-scales up to one year. The optimization routine software, DAKOTA (Design Analysis Kit for Optimization and Terascale Applications - dakota.sandia.gov), is seamlessly linked to the network model and is used to optimize daily volumetric releases from the reservoirs to best meet a set of user-defined constraints, such as maximizing revenue while minimizing environmental violations. The network model uses 1-D approximations for both the reservoirs and river reaches and is able to account for surface and sediment heat exchange as well as ice dynamics for both models. The reservoir model also accounts for inflow, density, and withdrawal zone mixing, and diffusive heat exchange. Routing for the river reaches is accomplished using a modified Muskingum-Cunge approach that automatically calculates the internal timestep and sub-reach lengths to match the conditions of

  5. Driving external chemistry optimization via operations management principles.

    PubMed

    Bi, F Christopher; Frost, Heather N; Ling, Xiaolan; Perry, David A; Sakata, Sylvie K; Bailey, Simon; Fobian, Yvette M; Sloan, Leslie; Wood, Anthony

    2014-03-01

    Confronted with the need to significantly raise the productivity of remotely located chemistry CROs, Pfizer embraced a commitment to continuous improvement that leveraged tools from both Lean Six Sigma and queue management theory to deliver positive, measurable outcomes. During 2012, cycle times were reduced by 48% by optimizing the work in progress and conducting a detailed workflow analysis to identify and address pinch points. Compound flow was increased by 29% by optimizing the request process and de-risking the chemistry. Underpinning both achievements was the development of close working relationships and productive communication between Pfizer and CRO chemists. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A game theory-reinforcement learning (GT-RL) method to develop optimal operation policies for multi-operator reservoir systems

    NASA Astrophysics Data System (ADS)

    Madani, Kaveh; Hooshyar, Milad

    2014-11-01

    Reservoir systems with multiple operators can benefit from coordination of operation policies. To maximize the total benefit of these systems, the literature has normally used the social planner's approach. Based on this approach, operation decisions are optimized using a multi-objective optimization model with a compound system objective. While the utility of the system can be increased this way, fair allocation of benefits among the operators remains challenging for the social planner, who has to assign controversial weights to the system's beneficiaries and their objectives. Cooperative game theory provides an alternative framework for fair and efficient allocation of the incremental benefits of cooperation. To determine the fair and efficient utility shares of the beneficiaries, cooperative game theory solution methods consider the gains of each party in the status quo (non-cooperation) as well as what can be gained through the grand coalition (the social planner's solution or full cooperation) and partial coalitions. Nevertheless, estimation of the benefits of different coalitions can be challenging in complex multi-beneficiary systems. Reinforcement learning can be used to address this challenge and determine the gains of the beneficiaries for different levels of cooperation, i.e., non-cooperation, partial cooperation, and full cooperation, providing the essential input for allocation based on cooperative game theory. This paper develops a game theory-reinforcement learning (GT-RL) method for determining the optimal operation policies in multi-operator multi-reservoir systems with respect to fairness and efficiency criteria. As a first step, to underline the utility of the GT-RL method in solving complex multi-agent multi-reservoir problems without a need for developing compound objectives and weight assignment, the proposed method is applied to a hypothetical three-agent three-reservoir system.
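
    To illustrate how coalition values feed a cooperative-game allocation, the sketch below computes Shapley values for a three-agent system by brute force over agent orderings; the coalition benefits are invented numbers standing in for the RL-estimated gains, and the Shapley value is only one of the cooperative-game solution methods this line of work draws on.

```python
# Hedged sketch: turning coalition values (e.g., estimated for each level of
# cooperation) into a fair allocation with the Shapley value. The three-agent
# coalition values below are invented for illustration.
from itertools import permutations

agents = ("A", "B", "C")
# value of each coalition (frozenset -> benefit), illustrative numbers only
v = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 12, frozenset("C"): 8,
     frozenset("AB"): 30, frozenset("AC"): 24, frozenset("BC"): 27,
     frozenset("ABC"): 45}

shapley = {a: 0.0 for a in agents}
orders = list(permutations(agents))
for order in orders:
    coalition = frozenset()
    for a in order:
        shapley[a] += v[coalition | {a}] - v[coalition]   # marginal contribution
        coalition = coalition | {a}
for a in shapley:
    shapley[a] /= len(orders)

print(shapley)   # fair shares of the grand-coalition benefit (sums to 45)
```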

  7. Power-limited low-thrust trajectory optimization with operation point detection

    NASA Astrophysics Data System (ADS)

    Chi, Zhemin; Li, Haiyang; Jiang, Fanghua; Li, Junfeng

    2018-06-01

    The power-limited solar electric propulsion system is considered more practical in mission design. An accurate mathematical model of the propulsion system, based on experimental data of the power generation system, is used in this paper. An indirect method is used to deal with the time-optimal and fuel-optimal control problems, in which the solar electric propulsion system is described using a finite number of operation points, which are characterized by different pairs of thruster input power. In order to guarantee the integration accuracy for the discrete power-limited problem, a power operation detection technique is embedded in a fixed-step fourth-order Runge-Kutta algorithm. Moreover, the logarithmic homotopy method and a normalization technique are employed to overcome the difficulties caused by using indirect methods. Three numerical simulations with actual propulsion systems are given to substantiate the feasibility and efficiency of the proposed method.

  8. Supply-Chain Optimization Template

    NASA Technical Reports Server (NTRS)

    Quiett, William F.; Sealing, Scott L.

    2009-01-01

    The Supply-Chain Optimization Template (SCOT) is an instructional guide for identifying, evaluating, and optimizing (including re-engineering) aerospace-oriented supply chains. The SCOT was derived from the Supply Chain Council's Supply-Chain Operations Reference (SCC SCOR) Model, which is more generic and more oriented toward achieving a competitive advantage in business.

  9. Optimization and planning of operating theatre activities: an original definition of pathways and process modeling.

    PubMed

    Barbagallo, Simone; Corradi, Luca; de Ville de Goyet, Jean; Iannucci, Marina; Porro, Ivan; Rosso, Nicola; Tanfani, Elena; Testi, Angela

    2015-05-17

    The Operating Room (OR) is a key resource of all major hospitals, but it also accounts for up to 40% of resource costs. Improving cost effectiveness while maintaining quality of care is a universal objective. These goals imply optimizing the planning and scheduling of the activities involved, which is highly challenging due to the inherently variable and unpredictable nature of surgery. Business Process Modeling Notation (BPMN 2.0) was used to represent the "OR process" (defined as the sequence of all elementary steps from "patient ready for surgery" to "patient operated upon") as a general pathway ("path"). The path was then standardized as much as possible while keeping all of the key elements needed to address the other steps of planning and the wide, inherent variability due to patient specificity. The path was used to schedule OR activity, room by room and day by day, feeding the process from a waiting-list database and using a mathematical optimization model with the objective of producing an optimized plan. The OR process was defined with special attention to flows, timing and resource involvement. Standardization involved a dynamic operation and defined an expected operating time for each operation. The optimization model has been implemented and tested on real clinical data. Comparison of the results with the real data shows that the optimization model allows scheduling about 30% more patients than in actual practice, as well as better exploiting OR efficiency, increasing the average operating room utilization rate by up to 20%. The optimization of OR activity planning is essential in order to manage the hospital's waiting list. Optimal planning is facilitated by defining the operation as a standard pathway where all variables are taken into account. By allowing precise scheduling, it feeds the process of
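
    As a much simpler stand-in for the paper's mathematical optimization model, the sketch below packs expected operating times from a hypothetical waiting list into fixed-capacity OR sessions with a first-fit-decreasing heuristic; all patients, durations, and capacities are invented.

```python
# Deliberately simple stand-in for OR session planning: first-fit-decreasing
# assignment of expected operating times (minutes) to daily OR sessions with a
# fixed capacity. Waiting list and capacities are invented; the paper uses a
# mathematical optimization model, not this heuristic.
waiting_list = {"P01": 180, "P02": 95, "P03": 240, "P04": 60,
                "P05": 150, "P06": 75, "P07": 120, "P08": 45}
session_capacity = 480          # one OR, 8 hours per day
n_sessions = 3                  # three OR-days to fill

sessions = [[] for _ in range(n_sessions)]
loads = [0] * n_sessions
for patient, minutes in sorted(waiting_list.items(), key=lambda kv: -kv[1]):
    for i in range(n_sessions):
        if loads[i] + minutes <= session_capacity:
            sessions[i].append(patient)
            loads[i] += minutes
            break                # first session with enough remaining time
    # patients that fit nowhere simply stay on the waiting list in this toy

for i, (plan, load) in enumerate(zip(sessions, loads), start=1):
    print(f"OR day {i}: {plan}, utilization {load / session_capacity:.0%}")
```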

  10. Distributed Energy Systems Integration and Demand Optimization for Autonomous Operations and Electric Grid Transactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghatikar, Girish; Mashayekh, Salman; Stadler, Michael

    Distributed power systems in the U.S. and globally are evolving to provide reliable and clean energy to consumers. In California, existing regulations require significant increases in renewable generation, as well as identification of customer-side distributed energy resources (DER) controls, communication technologies, and standards for interconnection with the electric grid systems. As DER deployment expands, customer-side DER control and optimization will be critical for system flexibility and demand response (DR) participation, which improves the economic viability of DER systems. Current DER systems integration and communication challenges include leveraging the existing DER and DR technology and systems infrastructure, and enabling optimized cost, energy and carbon choices for customers to deploy interoperable grid transactions and renewable energy systems at scale. Our paper presents a cost-effective solution to these challenges by exploring communication technologies and information models for DER system integration and interoperability. This system uses open standards and optimization models for resource planning based on dynamic-pricing notifications and autonomous operations within various domains of the smart grid energy system. It identifies architectures and customer engagement strategies in dynamic DR pricing transactions to generate feedback information models for load flexibility, load profiles, and participation schedules. The models are tested at a real site in California—Fort Hunter Liggett (FHL). Furthermore, our results for FHL show that the model fits within the existing and new DR business models and networked systems for transactive energy concepts. Integrated energy systems, communication networks, and modeling tools that coordinate supply-side networks and DER will enable electric grid system operators to use DER for grid transactions in an integrated system.

  11. Optimal Hedging Rule for Reservoir Refill Operation

    NASA Astrophysics Data System (ADS)

    Wan, W.; Zhao, J.; Lund, J. R.; Zhao, T.; Lei, X.; Wang, H.

    2015-12-01

    This paper develops an optimal reservoir Refill Hedging Rule (RHR) for combined water supply and flood operation using mathematical analysis. A two-stage model is developed to formulate the trade-off between operations for conservation benefit and flood damage in the reservoir refill season. Based on the probability distribution of the maximum refill water availability at the end of the second stage, three zones are characterized according to the relationship among storage capacity, expected storage buffer (ESB), and maximum safety excess discharge (MSED). The Karush-Kuhn-Tucker conditions of the model show that the optimality of the refill operation involves making the expected marginal loss of conservation benefit from unfilling (i.e., ending storage of the refill period less than storage capacity) as nearly equal to the expected marginal flood damage from levee overtopping downstream as possible while maintaining all constraints. This principle follows and combines the hedging rules for water supply and flood management. An RHR curve is drawn analogously to the water supply hedging and flood hedging rules, showing the trade-off between the two objectives. The release decision has a linear relationship with the current water availability, implying the linearity of the RHR for a wide range of water conservation functions (linear, concave, or convex). A demonstration case shows the impacts of the relevant factors. Larger downstream flood conveyance capacity and empty reservoir capacity allow a smaller current release, so more water can be conserved. Economic indicators of conservation benefit and flood damage compete with each other over the release: the greater the economic importance of flood damage, the more water should be released in the current stage, and vice versa. Below a critical value, improving forecasts yields less water release, but an opposing effect occurs beyond this critical value. Finally, the Danjiangkou Reservoir case study shows that the RHR together with a rolling

  12. Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2014-10-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based systems tools provide valuable insight for global climate architectures through the comparison and evaluation of the alternatives considered and the exhaustive range of trade space explored. A representative optimization of global ECV (essential climate variables) climate monitoring architecture(s) is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and will hopefully elicit responses from the audience and climate science stakeholders.

  13. Optimal anthropometric measures and thresholds to identify undiagnosed type 2 diabetes in three major Asian ethnic groups.

    PubMed

    Alperet, Derrick Johnston; Lim, Wei-Yen; Mok-Kwee Heng, Derrick; Ma, Stefan; van Dam, Rob M

    2016-10-01

    To identify optimal anthropometric measures and cutoffs to identify undiagnosed diabetes mellitus (UDM) in three major Asian ethnic groups (Chinese, Malays, and Asian-Indians). Cross-sectional data were analyzed from 14,815 ethnic Chinese, Malay, and Asian-Indian participants of the Singapore National Health Surveys, which included anthropometric measures and an oral glucose tolerance test. Receiver operating characteristic curve analyses were used with calculation of the area under the curve (AUC) to evaluate the performance of body mass index (BMI), waist circumference (WC), waist-to-hip ratio (WHR), and waist-to-height ratio (WHTR) for the identification of UDM. BMI performed significantly worse (AUC = 0.70 in men and 0.75 in women) than abdominal measures, whereas WHTR (AUC = 0.76 in men and 0.79 in women) was among the best performing measures in both sexes and all ethnic groups. Anthropometric measures performed better in Chinese than in Asian-Indian participants for the identification of UDM. A WHTR cutoff of 0.52 appeared optimal with a sensitivity of 76% in men and 73% in women and a specificity of 63% in men and 70% in women. Although ethnic differences were observed in the performance of anthropometric measures for the identification of UDM, abdominal adiposity measures generally performed better than BMI, and WHTR performed best in all Asian ethnic groups. © 2016 The Obesity Society.
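
    A hedged sketch of the kind of analysis described here: compute the AUC of WHTR against a binary UDM label and evaluate the sensitivity and specificity of the 0.52 cutoff. The data are simulated, not the Singapore survey data, and scikit-learn is assumed to be available.

```python
# Screening a WHtR threshold against a binary diabetes label: AUC plus the
# sensitivity/specificity of a candidate cutoff. Data below are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
udm = rng.random(n) < 0.08                       # undiagnosed diabetes labels
whtr = np.where(udm, rng.normal(0.58, 0.06, n),  # cases tend to higher WHtR
                      rng.normal(0.50, 0.06, n))

print("AUC:", round(roc_auc_score(udm, whtr), 3))

cutoff = 0.52                                    # cutoff reported in the abstract
pred = whtr >= cutoff
sens = np.mean(pred[udm])                        # true-positive rate
spec = np.mean(~pred[~udm])                      # true-negative rate
print(f"cutoff {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```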

  14. Optimal Operation and Dispatch of Voltage Regulation Devices Considering High Penetrations of Distributed Photovoltaic Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hodge, Brian S; Cho, Gyu-Jung

    Voltage regulation devices have been traditionally installed and utilized to support distribution voltages. Installations of distributed energy resources (DERs) in distribution systems are rapidly increasing, and many of these generation resources have variable and uncertain power output. These generators can significantly change the voltage profile for a feeder; therefore, in the distribution system planning stage of the optimal operation and dispatch of voltage regulation devices, possible high penetrations of DERs should be considered. In this paper, we model the IEEE 34-bus test feeder, including all essential equipment. An optimization method is adopted to determine the optimal siting and operation of the voltage regulation devices in the presence of distributed solar power generation. Finally, we verify the optimal configuration of the entire system through the optimization and simulation results.

  15. Scenario based optimization of a container vessel with respect to its projected operating conditions

    NASA Astrophysics Data System (ADS)

    Wagner, Jonas; Binkowski, Eva; Bronsart, Robert

    2014-06-01

    In this paper, the scenario-based optimization of the bulbous bow of the KRISO Container Ship (KCS) is presented. The optimization of the parametrically modeled vessel is based on a statistically developed operational profile generated from noon-to-noon reports of a comparable 3600 TEU container vessel and on specific development functions representing the growth of the global economy during the vessel's service time. In order to consider uncertainties, statistical fluctuations are added. An analysis of these data leads to a number of the most probable upcoming operating conditions (OC) the vessel will encounter in the future. According to their respective likelihood, an objective function for the evaluation of the optimal design variant of the vessel is derived and implemented within the parametric optimization workbench FRIENDSHIP Framework. The evaluation is then done with respect to the vessel's calculated effective power, based on the use of a potential flow code. The evaluation shows that the use of scenarios within the optimization process has a strong influence on the hull form.

  16. Optimization of fuel-cell tram operation based on two dimension dynamic programming

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbin; Lu, Xuecheng; Zhao, Jingsong; Li, Jianqiu

    2018-02-01

    This paper proposes an optimal control strategy based on a two-dimension dynamic programming (2DDP) algorithm aimed at minimizing the operational energy consumption of a fuel-cell tram. The energy consumption model with the tram dynamics is first deduced. The optimal control problem is analyzed and the 2DDP strategy is applied to solve it. Optimal tram speed profiles are obtained for each interstation run, consisting of three stages: accelerate to the set speed with maximum traction power, adjust dynamically to maintain a uniform speed, and decelerate to zero speed with maximum braking power at a suitable time. The optimal control curves of all interstation runs are connected with the parking times to form the optimal control method for the whole line. The optimized speed profiles are also simplified for drivers to follow.
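
    The three-stage accelerate/cruise/brake profile can be sketched without the full 2DDP machinery: for a trapezoidal profile, scan cruise speeds and keep the cheapest one that meets the schedule time. The vehicle mass, resistance model, and limits below are assumptions, and braking energy is not recovered in this toy model.

```python
# Simplified stand-in for the interstation speed-profile problem: for an
# accelerate / cruise / brake profile, scan cruise speeds and keep the one
# that meets the schedule time with the least traction energy.
import numpy as np

mass = 60000.0          # kg, assumed
a_max, b_max = 1.0, 1.0 # m/s^2, traction and braking limits, assumed
distance = 1200.0       # m between stations
t_schedule = 100.0      # s allowed for the interstation run

def profile(v_cruise):
    """Return (trip time, traction energy in kWh) for a trapezoidal profile."""
    d_acc = v_cruise**2 / (2 * a_max)
    d_brk = v_cruise**2 / (2 * b_max)
    if d_acc + d_brk > distance:
        return np.inf, np.inf                   # cruise speed not reachable
    d_cru = distance - d_acc - d_brk
    t = v_cruise / a_max + d_cru / v_cruise + v_cruise / b_max
    resistance = 0.005 * mass * 9.81            # rolling resistance, assumed
    # traction energy = kinetic energy gained + resistance work before braking
    energy_j = 0.5 * mass * v_cruise**2 + resistance * (d_acc + d_cru)
    return t, energy_j / 3.6e6

best = min((profile(v) + (v,) for v in np.arange(5.0, 25.0, 0.1)),
           key=lambda r: r[1] if r[0] <= t_schedule else np.inf)
print(f"cruise {best[2]:.1f} m/s, time {best[0]:.1f} s, energy {best[1]:.2f} kWh")
```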

  17. Modeling and Optimization of Coordinative Operation of Hydro-wind-photovoltaic Considering Power Generation and Output Fluctuation

    NASA Astrophysics Data System (ADS)

    Wang, Xianxun; Mei, Yadong

    2017-04-01

    Coordinative operation of hydro-wind-photovoltaic power is a way to mitigate the conflict between the power generation and output fluctuation of new energy sources and to overcome bottlenecks in new energy development. Research on the coordination mechanism has been hampered by deficiencies in characterizing output fluctuation, depicting the grid structure and handling curtailed power. In this paper, a multi-objective, multi-hierarchy model of coordinative operation of hydro-wind-photovoltaic power is built with the aims of maximizing power generation and minimizing output fluctuation, subject to constraints on the topology of the power grid and the balanced handling of curtailed power. In the case study, uncoordinated and coordinative operation are compared from the perspectives of power generation, power curtailment and output fluctuation, and the coordination mechanism is studied by comparing separate and coordinative operation of the multiple power sources. Compared with running each source separately, coordinative operation of hydro-wind-photovoltaic power can gain compensation benefits. Peak-alternation operation reduces power curtailment significantly and maximizes resource utilization effectively through the compensating regulation of hydropower. The Pareto frontier of power generation and output fluctuation is obtained through multi-objective optimization, which clarifies the mutual influence between these two objectives. When coordinative operation is adopted, output fluctuation can be markedly reduced at the cost of a slight decline in power generation, and power curtailment also drops sharply compared with operating separately. Applying the multi-objective optimization method to the coordinative operation, a Pareto optimal solution set of power generation and output fluctuation is obtained.

  18. Development of a fixed bed gasifier model and optimal operating conditions determination

    NASA Astrophysics Data System (ADS)

    Dahmani, Manel; Périlhon, Christelle; Marvillet, Christophe; Hajjaji, Noureddine; Houas, Ammar; Khila, Zouhour

    2017-02-01

    The main objective of this study was to develop a fixed-bed gasifier model for palm waste and to identify the optimal operating conditions for producing electricity from synthesis gas. First, the gasifier was simulated using Aspen Plus™ software. Gasification is a thermo-chemical process that has long been used, but it remains a perfectible technology: it consists of the incomplete combustion of solid biomass fuel into synthesis gas through partial oxidation. The operating parameters (temperature and equivalence ratio (ER)) were then varied to investigate their effect on the synthesis gas composition and to provide guidance for future research and development efforts in process design. The equivalence ratio is defined as the ratio of the amount of air actually supplied to the gasifier to the stoichiometric amount of air. Increasing the ER decreases the production of CO and H2 and increases the production of CO2 and H2O, while an increase in temperature increases the fractions of CO and H2. The results show that the optimum temperature for producing a syngas suitable for power generation is 900°C and the optimum equivalence ratio is 0.1.
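
    The equivalence ratio as defined in this abstract is a one-line computation; the sketch below simply encodes that definition with illustrative numbers.

```python
# The equivalence ratio (ER) as defined in the abstract: actual air supplied
# divided by the stoichiometric air requirement. Numbers are illustrative.
def equivalence_ratio(air_actual_kg: float, air_stoich_kg: float) -> float:
    """ER = actual air mass flow / stoichiometric air mass flow."""
    return air_actual_kg / air_stoich_kg

# e.g., 0.6 kg/s of air supplied when full combustion would need 6.0 kg/s
print(equivalence_ratio(0.6, 6.0))   # -> 0.1, the optimum reported above
```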

  19. A robust two-stage design identifying the optimal biological dose for phase I/II clinical trials.

    PubMed

    Zang, Yong; Lee, J Jack

    2017-01-15

    We propose a robust two-stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet-multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three-dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose-efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose-toxicity and dose-efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.

  20. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    PubMed

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in a sensible time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EA) seem to be one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective search for potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of the algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the costs of amino acid replacement regarding their polarity. Our results indicate that the usage of crossover operators can significantly improve the quality of the solutions. Moreover, simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure
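
    A generic position-based crossover for permutation-encoded candidates looks roughly like the sketch below; the paper adapts this operator to genetic-code models with structural restrictions, which are not reproduced here.

```python
# Standard position-based crossover (PBX) for permutation-encoded candidates,
# given as a generic sketch only.
import random

def position_based_crossover(parent1, parent2, rng=random):
    """Keep a random subset of positions from parent1, fill the rest with
    parent2's genes in their original order."""
    n = len(parent1)
    keep = set(rng.sample(range(n), k=n // 2))       # positions fixed from parent1
    child = [parent1[i] if i in keep else None for i in range(n)]
    used = {parent1[i] for i in keep}
    fill = (g for g in parent2 if g not in used)     # parent2 genes, order kept
    return [g if g is not None else next(fill) for g in child]

rng = random.Random(42)
p1 = list("ABCDEFGH")
p2 = list("HGFEDCBA")
print(position_based_crossover(p1, p2, rng))
```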

  1. How does network design constrain optimal operation of intermittent water supply?

    NASA Astrophysics Data System (ADS)

    Lieb, Anna; Wilkening, Jon; Rycroft, Chris

    2015-11-01

    Urban water distribution systems do not always supply water continuously or reliably. As pipes fill and empty, pressure transients may contribute to degraded infrastructure and poor water quality. To help understand and manage this undesirable side effect of intermittent water supply--a phenomenon affecting hundreds of millions of people in cities around the world--we study the relative contributions of fixed versus dynamic properties of the network. Using a dynamical model of unsteady transition pipe flow, we study how different elements of network design, such as network geometry, pipe material, and pipe slope, contribute to undesirable pressure transients. Using an optimization framework, we then investigate to what extent network operation decisions such as supply timing and inflow rate may mitigate these effects. We characterize some aspects of network design that make them more or less amenable to operational optimization.

  2. Defining a region of optimization based on engine usage data

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-08-04

    Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
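
    A hedged sketch of the general idea in this patent abstract: bin logged operating points, mark the most frequently visited bins as the region of optimization, and test whether a current operating point falls inside it. The signals, bin counts, and the 90th-percentile threshold below are invented for illustration.

```python
# Bin logged operating points (speed, load), keep the most common bins as the
# "region of optimization", and check whether a new point falls inside it.
import numpy as np

rng = np.random.default_rng(3)
speed = rng.normal(2200, 400, 5000)      # rpm samples from drive history (synthetic)
load = rng.normal(0.45, 0.12, 5000)      # normalized load samples (synthetic)

hist, s_edges, l_edges = np.histogram2d(speed, load, bins=(12, 10))
region = hist >= np.quantile(hist, 0.90)  # top 10% most common bins

def in_region(s, l):
    i = np.clip(np.searchsorted(s_edges, s) - 1, 0, hist.shape[0] - 1)
    j = np.clip(np.searchsorted(l_edges, l) - 1, 0, hist.shape[1] - 1)
    return bool(region[i, j])

print(in_region(2250, 0.47))   # likely True: near the most common conditions
print(in_region(3800, 0.95))   # likely False: rarely visited corner
```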

  3. Optimizing Dispersed Air Operations: A Concept To Use Highways As Improved Airfields In A Contested Environment

    DTIC Science & Technology

    2015-04-01

    capability to conduct airfield surveys outside of a permissive environment. Optimizing the Rapid Raptor Forward Arming and Refueling Point (FARP...9] An Initial Approach at Dispersing Air Operations: Rapid Raptor Concept ................... [12] Rapid Raptor : Optimized...Approach at Dispersing Air Operations: Rapid Raptor Concept The Air Force Rapid Raptor Fighter Forward Arming and Refueling (FARP) concept is an

  4. Optimization of Maneuver Execution for Landsat-7 Routine Operations

    NASA Technical Reports Server (NTRS)

    Cox, E. Lucien, Jr.; Bauer, Frank H. (Technical Monitor)

    2000-01-01

    Multiple mission constraints were satisfied during a lengthy, strategic ascent phase. Once routine operations begin, the ongoing concern of maintaining mission requirements becomes an immediate priority. The Landsat-7 mission has a tight longitude control box, and its Earth imaging requires sub-satellite descending nodal equator crossing times to occur in a narrow 30-minute range fifteen (15) times daily. Operationally, spacecraft maneuvers must be executed properly to maintain mission requirements. The paper will discuss the importance of optimizing the altitude-raising and plane-change maneuvers, amidst known constraints, to satisfy requirements throughout the mission lifetime. Emphasis will be placed not only on maneuver size and frequency but also on changes in orbital elements that impact maneuver execution decisions. Any associated trade-offs arising from operations contingencies will be discussed as well. Results of actual altitude and plane-change maneuvers are presented to clarify the actions taken.

  5. Optimizing Water Use and Hydropower Production in Operational Reservoir System Scheduling with RiverWare

    NASA Astrophysics Data System (ADS)

    Magee, T. M.; Zagona, E. A.

    2017-12-01

    Practical operational optimization of multipurpose reservoir systems is challenging for several reasons. Each purpose has its own constraints which may conflict with those of other purposes. While hydropower generation typically provides the bulk of the revenue, it is also among the lowest priority purposes. Each river system has important details that are specific to the location such as hydrology, reservoir storage capacity, physical limitations, bottlenecks, and the continuing evolution of operational policy. In addition, reservoir operations models include discrete, nonlinear, and nonconvex physical processes and if-then operating policies. Typically, the forecast horizon for scheduling needs to be extended far into the future to avoid near term (e.g., a few hours or a day) scheduling decisions that result in undesirable future states; this makes the computational effort much larger than may be expected. Put together, these challenges lead to large and customized mathematical optimization problems which must be solved efficiently to be of practical use. In addition, the solution process must be robust in an operational setting. We discuss a unique modeling approach in RiverWare that meets these challenges in an operational setting. The approach combines a Preemptive Linear Goal Programming optimization model to handle prioritized policies complemented by preprocessing and postprocessing with Rulebased Simulation to improve the solution with regard to nonlinearities, discrete issues, and if-then logic. An interactive policy language with a graphical user interface allows modelers to customize both the optimization and simulation based on the unique aspects of the policy for their system while the routine physical aspects of operations are modeled automatically. The modeler is aided by a set of compiled predefined functions and functions shared by other modelers. We illustrate the success of the approach with examples from daily use at the Tennessee Valley

  6. Optimal operating rules definition in complex water resource systems combining fuzzy logic, expert criteria and stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2016-04-01

    This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems by coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules and fuzzy regression procedures are used for forecasting future inflows. Once this is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows previewed during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to

  7. Derivation of optimal joint operating rules for multi-purpose multi-reservoir water-supply system

    NASA Astrophysics Data System (ADS)

    Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wang, Chao; Lei, Xiao-hui; Xiong, Yi-song; Zhang, Wei

    2017-08-01

    The derivation of a joint operating policy is a challenging task for a multi-purpose multi-reservoir system. This study proposed an aggregation-decomposition model to guide the joint operation of a multi-purpose multi-reservoir system, including: (1) an aggregated model based on the improved hedging rule to ensure the long-term water-supply operating benefit; (2) a decomposed model to allocate the limited release to individual reservoirs for the purpose of maximizing the total profit of the current period; and (3) a double-layer simulation-based optimization model to obtain the optimal time-varying hedging rules using the non-dominated sorting genetic algorithm II, whose objectives were to minimize the maximum water deficit and maximize water supply reliability. The water-supply system of the Li River in Guangxi Province, China, was selected for the case study. The results show that the operating policy proposed in this study is better than conventional operating rules and the aggregated standard operating policy for both water supply and hydropower generation, due to the use of the hedging mechanism and effective coordination among multiple objectives.

  8. Performing a scatterv operation on a hierarchical tree network optimized for collective operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D

    Performing a scatterv operation on a hierarchical tree network optimized for collective operations including receiving, by the scatterv module installed on the node, from a nearest neighbor parent above the node a chunk of data having at least a portion of data for the node; maintaining, by the scatterv module installed on the node, the portion of the data for the node; determining, by the scatterv module installed on the node, whether any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child; and sending, by the scatterv module installed on the node, those portions of data to the nearest neighbor child if any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child.
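
    A conceptual sketch (plain Python, not the patented implementation) of scattering variable-sized portions down a tree: each node keeps its own portion and forwards to each child only the portions destined for that child's subtree.

```python
# Conceptual tree scatterv: each node keeps its own portion and forwards the
# rest of its chunk to the child whose subtree contains the destination nodes.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}

def subtree(node):
    """All nodes in the subtree rooted at `node`, including the node itself."""
    return [node] + [n for child in tree[node] for n in subtree(child)]

def scatterv(node, chunk):
    """`chunk` maps destination node -> its portion of the data."""
    received = {node: chunk[node]}            # keep this node's own portion
    for child in tree[node]:
        forward = {n: chunk[n] for n in subtree(child) if n in chunk}
        if forward:                           # send only what the subtree needs
            received.update(scatterv(child, forward))
    return received                           # flattened view of all deliveries

data = {n: f"portion-for-{n}" for n in subtree("root")}
print(scatterv("root", data))
```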

  9. An intelligent factory-wide optimal operation system for continuous production process

    NASA Astrophysics Data System (ADS)

    Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping

    2016-03-01

    In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.

  10. Operationally optimal maneuver strategy for spacecraft injected into sub-geosynchronous transfer orbit

    NASA Astrophysics Data System (ADS)

    Kiran, B. S.; Singh, Satyendra; Negi, Kuldeep

    The GSAT-12 spacecraft provides communication services from the INSAT/GSAT system in the Indian region. The spacecraft carries 12 extended C-band transponders. GSAT-12 was launched by ISRO's PSLV from Sriharikota into a sub-geosynchronous transfer orbit (sub-GTO) of 284 x 21000 km with an inclination of 18 deg. This mission successfully accomplished a combined optimization of launch vehicle and satellite capabilities to maximize the operational life of the spacecraft. This paper describes the mission analysis carried out for GSAT-12, comprising the launch window, an orbital events study and orbit-raising maneuver strategies considering various mission operational constraints. GSAT-12 is equipped with two earth sensors (ES), three gyroscopes and a digital sun sensor. The launch window was generated considering the mission requirement of a minimum of 45 minutes of ES data for calibration of the gyros with a roll-sun-pointing orientation in the transfer orbit (T.O.). Since the T.O. period was a rather short 6.1 hr, the required pitch biases were worked out to meet the gyro-calibration requirement. A 440 N Liquid Apogee Motor (LAM) is used for orbit raising. The objective of the maneuver strategy is to achieve the desired drift orbit while satisfying mission constraints and minimizing propellant expenditure. In the case of a sub-GTO, the optimal strategy is to first perform an in-plane maneuver at perigee to raise the apogee to the synchronous level and then perform combined maneuvers at the synchronous apogee to achieve the desired drift orbit. The perigee burn opportunities were examined considering the ground station visibility requirement for monitoring the burns. Two maneuver strategies were proposed: an optimal five-burn strategy with two perigee burns centered around perigee #5 and perigee #8 with partial ground station visibility and three apogee burns with dual station visibility, and a near-optimal five-burn strategy with two off-perigee burns at perigee #5 and perigee #8 with single ground station visibility and three apogee burns with dual station visibility

  11. Automatic threshold optimization in nonlinear energy operator based spike detection.

    PubMed

    Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M

    2016-08-01

    In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
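
    A minimal NEO detector with the commonly used scaled-mean threshold is sketched below; the paper's empirical-gradient threshold optimization is not reproduced here, and the scaling constant C = 8 is just the traditional choice.

```python
# Nonlinear energy operator (NEO) spike detection with the traditional
# scaled-mean threshold. Signal is synthetic; C = 8 is the conventional value.
import numpy as np

def neo(x):
    """psi[n] = x[n]^2 - x[n-1] * x[n+1]; endpoints set to zero."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, C=8.0):
    psi = neo(x)
    threshold = C * np.mean(psi)        # traditional NEO threshold
    return np.flatnonzero(psi > threshold), threshold

rng = np.random.default_rng(7)
signal = rng.normal(0, 1, 5000)
signal[[800, 2400, 4100]] += 12.0       # inject three artificial "spikes"
idx, thr = detect_spikes(signal)
print("threshold:", round(thr, 2), "detections near:", idx)
```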

  12. Discrete particle swarm optimization for identifying community structures in signed social networks.

    PubMed

    Cai, Qing; Gong, Maoguo; Shen, Bo; Ma, Lijia; Jiao, Licheng

    2014-10-01

    The modern science of networks has greatly facilitated the understanding of complex systems. Community structure is believed to be one of the notable features of complex networks representing real complicated systems. Very often, uncovering community structures in networks can be regarded as an optimization problem; thus, many evolutionary-algorithm-based approaches have been put forward. Particle swarm optimization (PSO) is an artificial intelligence algorithm originating from social behavior such as bird flocking and fish schooling. PSO has proved to be an effective optimization technique. However, PSO was originally designed for continuous optimization, which complicates its application in discrete contexts. In this paper, a novel discrete PSO algorithm is suggested for identifying community structures in signed networks. In the suggested method, particles' status has been redesigned in discrete form so as to make PSO suitable for discrete scenarios, and particles' updating rules have been reformulated by making use of the topology of the signed network. Extensive experiments compared with three state-of-the-art approaches on both synthetic and real-world signed networks demonstrate that the proposed method is effective and promising. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model as well as the four stand-alone surrogate models were compared. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  14. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model as well as the four stand-alone surrogate models were compared. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  15. Inductive flux usage and its optimization in tokamak operation

    DOE PAGES

    Luce, Timothy C.; Humphreys, David A.; Jackson, Gary L.; ...

    2014-07-30

    The energy flow from the poloidal field coils of a tokamak to the electromagnetic and kinetic stored energy of the plasma is considered in the context of optimizing the operation of ITER. The goal is to optimize the flux usage in order to allow the longest possible burn in ITER at the desired conditions to meet the physics objectives (500 MW fusion power with an energy gain of 10). A mathematical formulation of the energy flow is derived and applied to experiments in the DIII-D tokamak that simulate the ITER design shape and relevant normalized current and pressure. The rate of rise of the plasma current was varied, and the fastest stable current rise is found to be the optimum for flux usage in DIII-D. A method to project the results to ITER is formulated. The constraints of the ITER poloidal field coil set yield an optimum at ramp rates slower than the maximum stable rate for plasmas similar to the DIII-D plasmas. Finally, experiments in present-day tokamaks for further optimization of the current rise and validation of the projections are suggested.

  16. Optimization of identity operation in NMR spectroscopy via genetic algorithm: Application to the TEDOR experiment

    NASA Astrophysics Data System (ADS)

    Manu, V. S.; Veglia, Gianluigi

    2016-12-01

    Identity operation in the form of π pulses is widely used in NMR spectroscopy. For an isolated single-spin system, a sequence of an even number of π pulses performs an identity operation, leaving the spin state essentially unaltered. For multi-spin systems, trains of π pulses with appropriate phases and time delays modulate the spin Hamiltonian to perform operations such as decoupling and recoupling. However, experimental imperfections often jeopardize the outcome, leading to severe losses in sensitivity. Here, we demonstrate that a newly designed Genetic Algorithm (GA) is able to optimize a train of π pulses, resulting in a robust identity operation. As proof of concept, we optimized the recoupling sequence in the transferred-echo double-resonance (TEDOR) pulse sequence, a key experiment in biological magic angle spinning (MAS) solid-state NMR for measuring multiple carbon-nitrogen distances. The GA-modified TEDOR (GMO-TEDOR) experiment with improved recoupling efficiency results in a net gain in sensitivity of up to 28%, as tested on a uniformly 13C, 15N labeled microcrystalline ubiquitin sample. The robust identity operation achieved via the GA paves the way for the optimization of several other pulse sequences used in both solid- and liquid-state NMR for decoupling, recoupling, and relaxation experiments.

  17. Optimal Operation of a Josephson Parametric Amplifier for Vacuum Squeezing

    NASA Astrophysics Data System (ADS)

    Malnou, M.; Palken, D. A.; Vale, Leila R.; Hilton, Gene C.; Lehnert, K. W.

    2018-04-01

    A Josephson parametric amplifier (JPA) can create squeezed states of microwave light, lowering the noise associated with certain quantum measurements. We experimentally study how the JPA's pump influences the phase-sensitive amplification and deamplification of a coherent tone's amplitude when that amplitude is commensurate with vacuum fluctuations. We predict and demonstrate that, by operating the JPA with a single current pump whose power is greater than the value that maximizes gain, the amplifier distortion is reduced and, consequently, squeezing is improved. Optimizing the singly pumped JPA's operation in this fashion, we directly observe 3.87 ±0.03 dB of vacuum squeezing over a bandwidth of 30 MHz.

  18. Optimizing Environmental Flow Operation Rules based on Explicit IHA Constraints

    NASA Astrophysics Data System (ADS)

    Dongnan, L.; Wan, W.; Zhao, J.

    2017-12-01

    Multi-objective reservoir operation is increasingly required to consider environmental flows to support ecosystem health. The Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, so they are difficult to use to guide reservoir releases. In an attempt to incorporate the benefit of environmental flow alongside economic achievement, a two-objective reservoir optimization model is developed in which all 33 hydrologic parameters of the IHA are explicitly formulated as constraints. The economic benefit is defined by Hydropower Production (HP), while the environmental flow benefit is transformed into an Eco-Index (EI) that combines 5 of the 33 IHA parameters chosen by the principal component analysis method. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of the Jing Hong reservoir, located in the upstream Mekong basin, China, shows: 1. A Pareto frontier is formed by maximizing only the HP objective in scenario A and only the EI objective in scenario B. 2. Scenario D, which uses IHA parameters as constraints, obtains the optimal benefits for both the economy and the ecology. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not within the Pareto frontier. 4. When the fraction of reservoir utilizable capacity reaches 0.8, both HP and EI attain acceptable values. Finally, to make this model more conveniently applicable in everyday practice, a simplified operation rule curve is extracted.
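
    A few of the IHA parameters (monthly mean flows and the 1-day minimum/maximum) can be computed from a daily discharge series as sketched below on synthetic data; the paper's Eco-Index, which combines five principal-component-selected parameters, is not reproduced.

```python
# Sketch of a few IHA hydrologic parameters computed from a daily flow series.
# The flow series is synthetic; pandas is assumed to be available.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
days = pd.date_range("2016-01-01", "2016-12-31", freq="D")
flow = pd.Series(200 + 120 * np.sin(2 * np.pi * days.dayofyear / 365)
                 + rng.normal(0, 20, len(days)), index=days)   # m^3/s, synthetic

monthly_mean = flow.groupby(flow.index.month).mean()   # IHA group 1 (12 values)
one_day_min = flow.min()                               # part of IHA group 2
one_day_max = flow.max()

print(monthly_mean.round(1).to_dict())
print("1-day min:", round(one_day_min, 1), " 1-day max:", round(one_day_max, 1))
```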

  19. An Optimal Mean Based Block Robust Feature Extraction Method to Identify Colorectal Cancer Genes with Integrated Data.

    PubMed

    Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui

    2017-08-17

    It is urgent to diagnose colorectal cancer in the early stage. Some feature genes which are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we developed a feature extraction method named the Optimal Mean based Block Robust Feature Extraction method (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using integrated colorectal cancer data. First, based on the optimal mean and the L2,1-norm, a novel feature extraction method called the Optimal Mean based Robust Feature Extraction method (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces the block ideology into the OMRFE method, is put forward to process the integrated colorectal cancer data, which includes multiple genomic data types: copy number alterations, somatic mutations, methylation expression alterations, and gene expression changes. Experimental results demonstrate that OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.

  20. [Optimal acupoint combination of transcutaneous electrical stimulation in artificial abortion operation].

    PubMed

    Chen, Chong; Xie, Wenxia; Wang, Zedong; Zeng, Linchai; Liu, Pei; Li, Chunli

    2017-02-12

    To observe the clinical effects on analgesia, tranquilization and prevention of abortion syndrome in artificial abortion operation treated with transcutaneous electrical acupoint stimulation (TEAS) with different acupoint combinations, and to explore the optimal acupoint combination of TEAS in artificial abortion operation. Two hundred patients scheduled for artificial abortion operation were randomized into a No.1 group [Sanyinjiao (SP 6) + Zusanli (ST 36)], a No.2 group [Sanyinjiao (SP 6) + Diji (SP 8)], a No.3 group [Sanyinjiao (SP 6) + Taichong (LR 3)], a No.4 group (cervical blockage anesthesia with lidocaine) and a No.5 group (blank group, without any analgesia measure applied), 40 cases in each one. In the No.1, No.2 and No.3 groups, Sanyinjiao (SP 6) was the main acupoint, combined with Zusanli (ST 36), Diji (SP 8) and Taichong (LR 3) respectively. TEAS was given from 30 min before the operation until the end of the operation. Mean arterial pressure, heart rate and oxygen saturation during operation, as well as bleeding amount, were observed in the five groups. The visual analogue scale (VAS) score was observed during and 30 min after operation, and the Ramsay score was observed during operation. Cervical relaxation degree and the incidence of artificial abortion syndrome were recorded. For the VAS score during and 30 min after operation and the Ramsay score during operation, the differences were statistically significant in the No.1, No.2, No.3 and No.4 groups as compared with the No.5 group (P <0.01, P <0.05). The results in the No.2 group were better than those in the No.1, No.3 and No.4 groups (all P <0.05). For cervical relaxation degree, the result in the No.2 group was better than that in each of the rest groups (P <0.01, P <0.05). For artificial abortion syndrome, the incidences in the No.2 and No.3 groups were lower than those in the No.4 and No.5 groups (all P <0.05). For bleeding amount and hemodynamic changes, the differences were not statistically significant among the five groups (all P >0

  1. Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space

    NASA Astrophysics Data System (ADS)

    Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.

    2015-09-01

    Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized robust `operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECVs (essential climate variables) is explored and described in detail with dialogue on appropriate rule-based valuations. These optimization tools suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper focuses on recent research exploring joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study. This reflects work from SPIE RS Conferences 2013 and 2014, abridged for simplification [30, 32]. First, the heavily scrutinized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, there is a complete reversal: should agencies consider disaggregation as the answer? We will discuss what some academic research suggests. Second, using the GCOS requirements of earth climate observations via ECVs (essential climate variables), many collected from space-based sensors; and accepting their

  2. Decision support for operations and maintenance (DSOM) system

    DOEpatents

    Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA

    2006-03-21

    A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring parameters necessary to characterize the actual operating condition of the process, validating data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, and drawing from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.

  3. Consumer-identified barriers and strategies for optimizing technology use in the workplace.

    PubMed

    De Jonge, Desleigh M; Rodger, Sylvia A

    2006-01-01

    This article explores the experiences of 26 assistive technology (AT) users having a range of physical impairments as they optimized their use of technology in the workplace. A qualitative research design was employed using in-depth, open-ended interviews and observations of AT users in the workplace. Participants identified many factors that limited their use of technology, such as discomfort and pain, limited knowledge of the technology's features, and the complexity of the technology. The amount of time required for training, the limited work time available for mastery, the cost of training, and the limitations of the training provided resulted in an over-reliance on trial and error and informal support networks, and a sense of isolation. AT users enhanced their use of technology by addressing the ergonomics of the workstation and customizing the technology to address individual needs and strategies. Other key strategies included tailored training and learning support as well as opportunities to practice using the technology and explore its features away from work demands. This research identified structures important for effective AT use in the workplace which need to be put in place to ensure that AT users are able to master and optimize their use of technology.

  4. Determining optimal operation parameters for reducing PCDD/F emissions (I-TEQ values) from the iron ore sintering process by using the Taguchi experimental design.

    PubMed

    Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh

    2008-07-15

    This study is the first to use the Taguchi experimental design to identify the optimal operating condition for reducing polychlorinated dibenzo-p-dioxin and dibenzofuran (PCDD/F) formation during the iron ore sintering process. Four operating parameters, including the water content (Wc; range = 6.0-7.0 wt%), suction pressure (Ps; range = 1000-1400 mmH2O), bed height (Hb; range = 500-600 mm), and type of hearth layer (including sinter, hematite, and limonite), were selected for conducting experiments in a pilot-scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. We found that the resultant optimal combination (Wc = 6.5 wt%, Hb = 500 mm, Ps = 1000 mmH2O, and hearth layer = hematite) could decrease the emission factor of total PCDD/Fs (total EF(PCDD/Fs)) by up to 62.8% by reference to the current operating condition of the real-scale sinter plant (Wc = 6.5 wt%, Hb = 550 mm, Ps = 1200 mmH2O, and hearth layer = sinter). Through the ANOVA analysis, we found that Wc was the most significant parameter in determining total EF(PCDD/Fs) (accounting for 74.7% of the total contribution of the four selected parameters). The resultant optimal combination could also slightly enhance both sinter productivity and sinter strength (30.3 t/m2/day and 72.4%, respectively) by reference to those obtained from the reference operating condition (29.9 t/m2/day and 72.2%, respectively). The above results further ensure the applicability of the obtained optimal combination for real-scale sinter production without interfering with its sinter productivity and sinter strength.
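
    The record above turns on the usual Taguchi workflow: run an orthogonal array of parameter combinations, convert each response to a signal-to-noise ratio, and read off the best level of each factor from the main effects. The sketch below, offered only as an illustration, applies that workflow to a standard L9(3^4) array with synthetic "emission factor" responses; the factor names echo the abstract (Wc, Ps, Hb, hearth layer) but the numbers are invented, not the pilot-pot data.

      # Minimal Taguchi-style analysis: an L9(3^4) orthogonal array, a
      # smaller-the-better signal-to-noise ratio, and per-factor main effects.
      # The responses below are synthetic placeholders, not the pilot-pot data.
      import numpy as np

      # Standard L9 orthogonal array: 9 runs x 4 factors, 3 levels (0, 1, 2).
      L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
                     [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
                     [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
      factors = ["Wc", "Ps", "Hb", "hearth"]

      # Hypothetical PCDD/F emission factors measured for each of the 9 runs.
      y = np.array([1.8, 1.2, 1.5, 0.9, 1.1, 1.6, 1.3, 1.0, 1.4])

      # Smaller-the-better S/N ratio for each run (single replicate here).
      sn = -10.0 * np.log10(y ** 2)

      best = {}
      for j, name in enumerate(factors):
          level_sn = [sn[L9[:, j] == lev].mean() for lev in range(3)]
          best[name] = int(np.argmax(level_sn))      # level with highest S/N wins
          effect = max(level_sn) - min(level_sn)     # range = factor influence
          print(f"{name}: level S/N = {np.round(level_sn, 2)}, range = {effect:.2f}")

      print("Suggested combination (factor -> level index):", best)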

  5. Detector characterization, optimization, and operation for ACTPol

    NASA Astrophysics Data System (ADS)

    Grace, Emily Ann

    2016-01-01

    Measurements of the temperature anisotropies of the Cosmic Microwave Background (CMB) have provided the foundation for much of our current knowledge of cosmology. Observations of the polarization of the CMB have already begun to build on this foundation and promise to illuminate open cosmological questions regarding the first moments of the universe and the properties of dark energy. The primary CMB polarization signal contains the signature of early universe physics including the possible imprint of inflationary gravitational waves, while a secondary signal arises due to late-time interactions of CMB photons which encode information about the formation and evolution of structure in the universe. The Atacama Cosmology Telescope Polarimeter (ACTPol), located at an elevation of 5200 meters in Chile and currently in its third season of observing, is designed to probe these signals with measurements of the CMB in both temperature and polarization from arcminute to degree scales. To measure the faint CMB polarization signal, ACTPol employs large, kilo-pixel detector arrays of transition edge sensor (TES) bolometers, which are cooled to a 100 mK operating temperature with a dilution refrigerator. Three such arrays are currently deployed, two with sensitivity to 150 GHz radiation and one dichroic array with 90 GHz and 150 GHz sensitivity. The operation of these large, monolithic detector arrays presents a number of challenges for both assembly and characterization. This thesis describes the design and assembly of the ACTPol polarimeter arrays and outlines techniques for their rapid characterization. These methods are employed to optimize the design and operating conditions of the detectors, select wafers for deployment, and evaluate the baseline array performance. The results of the application of these techniques to wafers from all three ACTPol arrays are described, including discussion of the measured thermal properties and time constants. Finally, aspects of the

  6. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of a large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the searching process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). The intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and the median of the ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation (530.032 billion kW h) and ecological index (1.675)) with 1000 simulations and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is proved to be more efficient and could provide a better Pareto frontier.
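
    The weighted crowding distance mentioned in the results is the main tweak WNSGAII makes to standard NSGA-II. As a hedged sketch only, the snippet below computes a crowding distance in which each objective's contribution is scaled by a user-supplied weight; the objective values and the 0.7/0.3 weighting are invented for illustration and do not reproduce the cited algorithm's exact formulation.

      # Sketch of a weighted crowding distance: the standard NSGA-II crowding
      # measure, with each objective's contribution scaled by a user weight.
      # Objective values and weights are illustrative assumptions.
      import numpy as np

      def weighted_crowding_distance(F, weights):
          """F: (n_points, n_obj) objective matrix; weights: (n_obj,) array."""
          n, m = F.shape
          dist = np.zeros(n)
          for j in range(m):
              order = np.argsort(F[:, j])
              f_sorted = F[order, j]
              span = f_sorted[-1] - f_sorted[0]
              dist[order[0]] = dist[order[-1]] = np.inf   # boundary points always kept
              if span == 0:
                  continue
              gaps = (f_sorted[2:] - f_sorted[:-2]) / span
              dist[order[1:-1]] += weights[j] * gaps
          return dist

      # Example: 6 non-dominated points, weighting the first objective (power) higher.
      F = np.array([[523.3, 1.88], [525.0, 1.86], [526.5, 1.84],
                    [527.8, 1.82], [528.3, 1.81], [528.7, 1.80]])
      print(weighted_crowding_distance(F, weights=np.array([0.7, 0.3])))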

  7. Optimization and experimental validation of a thermal cycle that maximizes entropy coefficient fisher identifiability for lithium iron phosphate cells

    NASA Astrophysics Data System (ADS)

    Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam

    2016-03-01

    This article presents a framework for optimizing the thermal cycle used to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24 h. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
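
    The core computation behind such a design is scoring candidate thermal cycles by the Fisher information they carry about the entropy coefficient. The toy sketch below does exactly that for two candidate cycles under a deliberately simple first-order thermal-lag model, where the voltage sensitivity to the coefficient is just the cell's temperature deviation from a reference; the time constant, noise level, and cycle shapes are assumptions for illustration, not the paper's cell model.

      # Toy Fisher-information comparison for a battery entropy coefficient
      # (dOCV/dT). The cell is modeled as a first-order thermal lag on the chamber
      # temperature, and the voltage sensitivity to the coefficient is simply the
      # cell's temperature deviation from a reference. Model, time constant, and
      # noise level are assumptions for illustration only.
      import numpy as np

      dt, horizon = 10.0, 2 * 3600.0               # 10 s steps over a 2 h test
      t = np.arange(0.0, horizon, dt)
      tau = 900.0                                  # assumed cell thermal time constant [s]
      sigma_v = 1e-4                               # assumed voltage noise std [V]
      T_ref = 25.0

      def cell_temperature(T_chamber):
          """First-order lag of the cell temperature behind the chamber setpoint."""
          T_cell = np.empty_like(T_chamber)
          T_cell[0] = T_ref
          for k in range(1, len(T_chamber)):
              T_cell[k] = T_cell[k-1] + dt / tau * (T_chamber[k-1] - T_cell[k-1])
          return T_cell

      def fisher_information(T_chamber):
          # dV/dtheta = (T_cell - T_ref); scalar Fisher info under iid Gaussian noise.
          sens = cell_temperature(T_chamber) - T_ref
          return np.sum(sens ** 2) / sigma_v ** 2

      slow_ramp = T_ref + 10.0 * t / horizon                        # 25 -> 35 C ramp
      fourier_cycle = T_ref + 8.0 * np.sin(2 * np.pi * t / 1800.0)  # +/-8 C, 30 min period

      print("Fisher info, slow ramp     :", f"{fisher_information(slow_ramp):.3e}")
      print("Fisher info, Fourier cycle :", f"{fisher_information(fourier_cycle):.3e}")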

  8. A Concept and Implementation of Optimized Operations of Airport Surface Traffic

    NASA Technical Reports Server (NTRS)

    Jung, Yoon C.; Hoang, Ty; Montoya, Justin; Gupta, Gautam; Malik, Waqar; Tobias, Leonard

    2010-01-01

    This paper presents a new concept of optimized surface operations at busy airports to improve the efficiency of taxi operations, as well as reduce environmental impacts. The suggested system architecture consists of the integration of two decoupled optimization algorithms. The Spot Release Planner provides sequence and timing advisories to tower controllers for releasing departure aircraft into the movement area to reduce taxi delay while achieving maximum throughput. The Runway Scheduler provides take-off sequence and arrival runway crossing sequence to the controllers to maximize the runway usage. The description of a prototype implementation of this integrated decision support tool for the airport control tower controllers is also provided. The prototype decision support tool was evaluated through a human-in-the-loop experiment, where both the Spot Release Planner and Runway Scheduler provided advisories to the Ground and Local Controllers. Initial results indicate the average number of stops made by each departure aircraft in the departure runway queue was reduced by more than half when the controllers were using the advisories, which resulted in reduced taxi times in the departure queue.

  9. Cost related sensitivity analysis for optimal operation of a grid-parallel PEM fuel cell power plant

    NASA Astrophysics Data System (ADS)

    El-Sharkh, M. Y.; Tanrioven, M.; Rahman, A.; Alam, M. S.

    Fuel cell power plants (FCPP) as a combined source of heat, power and hydrogen (CHP&H) can be considered as a potential option to supply both thermal and electrical loads. Hydrogen produced from the FCPP can be stored for future use of the FCPP or can be sold for profit. In such a system, tariff rates for purchasing or selling electricity, the fuel cost for the FCPP/thermal load, and hydrogen selling price are the main factors that affect the operational strategy. This paper presents a hybrid evolutionary programming and Hill-Climbing based approach to evaluate the impact of change of the above mentioned cost parameters on the optimal operational strategy of the FCPP. The optimal operational strategy of the FCPP for different tariffs is achieved through the estimation of the following: hourly generated power, the amount of thermal power recovered, power trade with the local grid, and the quantity of hydrogen that can be produced. Results show the importance of optimizing system cost parameters in order to minimize overall operating cost.

  10. Towards Optimal Operation of the Reservoir System in Upper Yellow River: Incorporating Long- and Short-term Operations and Using Rolling Updated Hydrologic Forecast Information

    NASA Astrophysics Data System (ADS)

    Si, Y.; Li, X.; Li, T.; Huang, Y.; Yin, D.

    2016-12-01

    The cascade reservoirs in the Upper Yellow River (UYR), one of the largest hydropower bases in China, play a vital role in peak load and frequency regulation for the Northwest China Power Grid. The joint operation of this system has been proposed for years but has not come into effect due to management difficulties and inflow uncertainties, and thus there is still considerable room for improving hydropower production. This study presents a decision support framework incorporating long- and short-term operation of the reservoir system. For long-term operation, we maximize hydropower production of the reservoir system using historical hydrological data of multiple years, and derive operating rule curves for storage reservoirs. For short-term operation, we develop a program consisting of three modules, namely a hydrologic forecast module, a reservoir operation module and a coordination module. The coordination module is responsible for calling the hydrologic forecast module to acquire predicted inflow within a short-term horizon, and transferring the information to the reservoir operation module to generate optimal release decisions. With the hydrologic forecast information updated, the rolling short-term optimization is iterated until the end of the operation period, where the long-term operating curves serve as the ending storage target. As an application, the Digital Yellow River Integrated Model (referred to as "DYRIM", which is specially designed for runoff-sediment simulation in the Yellow River basin by Tsinghua University) is used in the hydrologic forecast module, and successive linear programming (SLP) in the reservoir operation module. The application in the reservoir system of the UYR demonstrates that the framework can effectively support real-time decision making, and ensure both computational accuracy and speed. Furthermore, it is worth noting that the general framework can be extended to any other reservoir system with any or combination of hydrological model

  11. Identifying Human Factors Issues in Aircraft Maintenance Operations

    NASA Technical Reports Server (NTRS)

    Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)

    1995-01-01

    Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986-1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not) as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding what variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factor issues involved.

  12. Optimizing Multi-Station Template Matching to Identify and Characterize Induced Seismicity in Ohio

    NASA Astrophysics Data System (ADS)

    Brudzinski, M. R.; Skoumal, R.; Currie, B. S.

    2014-12-01

    As oil and gas well completions utilizing multi-stage hydraulic fracturing have become more commonplace, the potential for seismicity induced by the deep disposal of frac-related flowback waters and the hydraulic fracturing process itself has become increasingly important. While it is rare for these processes to induce felt seismicity, the recent increase in the number of deep injection wells and volumes injected have been suspected to have contributed to a substantial increase of events ≥ M 3 in the continental U.S. over the past decade. Earthquake template matching using multi-station waveform cross-correlation is an adept tool for investigating potentially induced sequences due to its proficiency at identifying similar/repeating seismic events. We have sought to refine this approach by investigating a variety of seismic sequences and determining the optimal parameters (station combinations, template lengths and offsets, filter frequencies, data access method, etc.) for identifying induced seismicity. When applied to a sequence near a wastewater injection well in Youngstown, Ohio, our optimized template matching routine yielded 566 events while other template matching studies found ~100-200 events. We also identified 77 events on 4-12 March 2014 that are temporally and spatially correlated with active hydraulic fracturing in Poland Township, Ohio. We find similar improvement in characterizing sequences in Washington and Harrison Counties, which appear to be related to wastewater injection and hydraulic fracturing, respectively. In the Youngstown and Poland Township cases, focal mechanisms and double difference relocation using the cross-correlation matrix finds left-lateral faults striking roughly east-west near the top of the basement. We have also used template matching to determine isolated earthquakes near several other wastewater injection wells are unlikely to be induced based on a lack of similar/repeating sequences. Optimized template matching utilizes
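
    At the heart of the routine described above is waveform cross-correlation of a template against continuous data. The single-trace sketch below shows the bare mechanics on synthetic data: slide the template, compute a normalized correlation coefficient, and flag samples above a MAD-based threshold. The parameters the study actually tunes (station combinations, template lengths, filter bands, stacking across channels) are not reproduced here; every number in the sketch is an assumption.

      # Minimal single-channel template matching by normalized cross-correlation:
      # slide a waveform template across continuous data and flag windows whose
      # correlation coefficient exceeds a threshold. Real multi-station workflows
      # (e.g., with ObsPy) stack CC traces across stations and components; this
      # toy uses synthetic data and a single trace.
      import numpy as np

      rng = np.random.default_rng(0)
      fs = 100.0                                   # samples per second
      template = np.sin(2 * np.pi * 5.0 * np.arange(0, 2.0, 1 / fs)) * np.hanning(200)

      data = 0.2 * rng.standard_normal(60000)      # 10 minutes of noise
      for onset in (12000, 30500, 45250):          # bury three repeating events
          data[onset:onset + 200] += 0.5 * template

      def normalized_cc(data, template):
          n = len(template)
          t = (template - template.mean()) / (template.std() * n)
          cc = np.empty(len(data) - n + 1)
          for i in range(len(cc)):
              w = data[i:i + n]
              cc[i] = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)
          return cc

      cc = normalized_cc(data, template)
      threshold = 8.0 * np.median(np.abs(cc))      # MAD-style detection threshold
      detections = np.flatnonzero(cc > threshold)
      # collapse consecutive samples into single picks (gaps > 1 s start a new pick)
      picks = detections[np.insert(np.diff(detections) > int(fs), 0, True)]
      print("detections at samples:", picks)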

  13. Applications of Optimal Building Energy System Selection and Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marnay, Chris; Stadler, Michael; Siddiqui, Afzal

    2011-04-01

    Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimisation problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by description of three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall for which potential investments are also considered. And the third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.

  14. Development of a protocol to optimize electric power consumption and life cycle environmental impacts for operation of wastewater treatment plant.

    PubMed

    Piao, Wenhua; Kim, Changwon; Cho, Sunja; Kim, Hyosoo; Kim, Minsoo; Kim, Yejin

    2016-12-01

    In wastewater treatment plants (WWTPs), the portion of operating costs related to electric power consumption is increasing. Reducing electric power consumption, however, can make it difficult to comply with the effluent water quality requirements. A protocol was proposed in this study to minimize the environmental impacts as well as to optimize the electric power consumption under conditions that meet the effluent water quality standards. This protocol comprised a six-phase procedure and was tested using operating data from the S-WWTP to prove its applicability. The 11 major operating variables were categorized into three groups using principal component analysis and K-means cluster analysis. Life cycle assessment (LCA) was conducted for each group to deduce the optimal operating conditions for each operating state. Then, employing mathematical modeling, six improvement plans to reduce electric power consumption were deduced. The electric power consumptions for the suggested plans were estimated using an artificial neural network. This was followed by a second round of LCA conducted on the plans. As a result, a set of optimized improvement plans was derived for each group that optimize the electric power consumption and life cycle environmental impact at the same time. Based on these test results, the WWTP operating management protocol presented in this study is deemed able to suggest optimal operating conditions under which power consumption can be optimized with minimal life cycle environmental impact, while allowing the plant to meet water quality requirements.
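
    The grouping step in the protocol (principal component analysis followed by K-means clustering of the operating variables) is a standard pattern, sketched below with scikit-learn on a synthetic 365-day by 11-variable operating matrix. The data, the 90% variance cutoff, and the choice of three groups are assumptions made for illustration; only the sequence of steps mirrors the abstract.

      # Sketch of the state-grouping step: compress correlated operating variables
      # with PCA, then cluster the scores with K-means to define operating groups.
      # The synthetic operating matrix and the choice of 3 groups are assumptions.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      n_days, n_vars = 365, 11                    # daily records of 11 operating variables
      X = rng.normal(size=(n_days, n_vars))
      X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]     # make a few variables correlated
      X[:, 2] = -0.6 * X[:, 0] + 0.4 * X[:, 2]

      Z = StandardScaler().fit_transform(X)
      pca = PCA(n_components=0.9)                 # keep components explaining 90% variance
      scores = pca.fit_transform(Z)
      print("retained components:", pca.n_components_)

      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
      for g in range(3):
          print(f"group {g}: {np.sum(labels == g)} days")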

  15. A Rapid Method for Optimizing Running Temperature of Electrophoresis through Repetitive On-Chip CE Operations

    PubMed Central

    Kaneda, Shohei; Ono, Koichi; Fukuba, Tatsuhiro; Nojima, Takahiko; Yamamoto, Takatoki; Fujii, Teruo

    2011-01-01

    In this paper, a rapid and simple method to determine the optimal temperature conditions for denaturant electrophoresis using a temperature-controlled on-chip capillary electrophoresis (CE) device is presented. Since on-chip CE operations including sample loading, injection and separation are carried out just by switching the electric field, we can repeat consecutive run-to-run CE operations on a single on-chip CE device by programming the voltage sequences. By utilizing the high-speed separation and the repeatability of the on-chip CE, a series of electrophoretic operations with different running temperatures can be implemented. Using separations of reaction products of single-stranded DNA (ssDNA) with a peptide nucleic acid (PNA) oligomer, the effectiveness of the presented method to determine the optimal temperature conditions required to discriminate a single-base substitution (SBS) between two different ssDNAs is demonstrated. It is shown that a single run for one temperature condition can be executed within 4 min, and the optimal temperature to discriminate the SBS could be successfully found using the present method. PMID:21845077

  16. Solving fractional optimal control problems within a Chebyshev-Legendre operational technique

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.; Ezz-Eldien, S. S.; Doha, E. H.; Abdelkawy, M. A.; Baleanu, D.

    2017-06-01

    In this manuscript, we report a new operational technique for approximating the numerical solution of fractional optimal control (FOC) problems. The operational matrix of the Caputo fractional derivative of the orthonormal Chebyshev polynomial and the Legendre-Gauss quadrature formula are used, and then the Lagrange multiplier scheme is employed for reducing such problems into those consisting of systems of easily solvable algebraic equations. We compare the approximate solutions achieved using our approach with the exact solutions and with those presented in other techniques and we show the accuracy and applicability of the new numerical approach, through two numerical examples.

  17. The Optimal Timing of Stage-2-Palliation After the Norwood Operation.

    PubMed

    Meza, James M; Hickey, Edward; McCrindle, Brian; Blackstone, Eugene; Anderson, Brett; Overman, David; Kirklin, James K; Karamlou, Tara; Caldarone, Christopher; Kim, Richard; DeCampli, William; Jacobs, Marshall; Guleserian, Kristine; Jacobs, Jeffrey P; Jaquiss, Robert

    2018-01-01

    The effect of the timing of stage-2-palliation (S2P) on survival through single ventricle palliation remains unknown. This study investigated the optimal timing of S2P that minimizes pre-S2P attrition and maximizes post-S2P survival. The Congenital Heart Surgeons' Society's critical left ventricular outflow tract obstruction cohort was used. Survival analysis was performed using multiphase parametric hazard analysis. Separate risk factors for death after the Norwood and after S2P were identified. Based on the multivariable models, infants were stratified as low, intermediate, or high risk. Cumulative 2-year, post-Norwood survival was predicted. Optimal timing was determined using conditional survival analysis and plotted as 2-year, post-Norwood survival versus age at S2P. A Norwood operation was performed in 534 neonates from 21 institutions. The S2P was performed in 71%, at a median age of 5.1 months (IQR: 4.3 to 6.0), and 22% died after Norwood. By 5 years after S2P, 10% of infants had died. For low- and intermediate-risk infants, performing S2P after age 3 months was associated with 89% ± 3% and 82% ± 3% 2-year survival, respectively. Undergoing an interval cardiac reoperation or moderate-severe right ventricular dysfunction before S2P were high-risk features. Among high-risk infants, 2-year survival was 63% ± 5%, and even lower when S2P was performed before age 6 months. Performing S2P after age 3 months may optimize survival of low- and intermediate-risk infants. High-risk infants are unlikely to complete three-stage palliation, and early S2P may increase their risk of mortality. We infer that early referral for cardiac transplantation may increase their chance of survival. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Optimizing the Long-Term Operating Plan of Railway Marshalling Station for Capacity Utilization Analysis

    PubMed Central

    Zhou, Wenliang; Yang, Xia; Deng, Lianbo

    2014-01-01

    Not only is the operating plan the basis for organizing a marshalling station's operation, but it is also used to analyze in detail the capacity utilization of each facility in the marshalling station. In this paper, a long-term operating plan is optimized mainly for capacity utilization analysis. Firstly, a model is developed to minimize railcars' average staying time with constraints of minimum time intervals, marshalling track capacity, and so forth. Secondly, an algorithm is designed to solve this model based on a genetic algorithm (GA) and a simulation method. It divides the plan of the whole planning horizon into many subplans and optimizes them with the GA one by one in order to obtain a satisfactory plan with less computing time. Finally, some numeric examples are constructed to analyze (1) the convergence of the algorithm, (2) the effect of some algorithm parameters, and (3) the influence of arrival train flow on the algorithm. PMID:25525614
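
    The divide-into-subplans-and-solve-each-with-a-GA idea translates into very little code for a toy case. The sketch below sequences the trains of two consecutive subplans on a single resource with a small permutation GA that minimizes average staying time; the arrival and processing times, the crossover and mutation scheme, and the GA settings are all invented stand-ins for the paper's far richer marshalling-station model.

      # Toy illustration of the "divide the horizon into subplans and optimize each
      # with a GA" idea: each subplan sequences its trains on a single resource to
      # reduce average staying (waiting + processing) time. The arrival/processing
      # data and GA settings are assumptions, not the paper's model.
      import random
      random.seed(0)

      def staying_time(order, arrivals, proc):
          t, total = 0.0, 0.0
          for i in order:
              t = max(t, arrivals[i]) + proc[i]     # start when free and arrived
              total += t - arrivals[i]              # time spent in the station
          return total / len(order)

      def ga_sequence(arrivals, proc, pop_size=30, generations=80):
          n = len(arrivals)
          pop = [random.sample(range(n), n) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda o: staying_time(o, arrivals, proc))
              survivors = pop[:pop_size // 2]
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  cut = random.randrange(1, n)       # simple ordered crossover
                  child = a[:cut] + [g for g in b if g not in a[:cut]]
                  if random.random() < 0.2:          # swap mutation
                      i, j = random.sample(range(n), 2)
                      child[i], child[j] = child[j], child[i]
                  children.append(child)
              pop = survivors + children
          best = min(pop, key=lambda o: staying_time(o, arrivals, proc))
          return best, staying_time(best, arrivals, proc)

      # Two consecutive subplans optimized one after the other.
      for k, (arrivals, proc) in enumerate([([0, 5, 8, 12, 20], [6, 4, 7, 3, 5]),
                                            ([25, 27, 33, 40, 41], [5, 6, 4, 8, 3])]):
          order, avg = ga_sequence(arrivals, proc)
          print(f"subplan {k}: order {order}, average staying time {avg:.1f} min")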

  19. A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate

    NASA Astrophysics Data System (ADS)

    Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.

    2016-09-01

    Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.

  20. Orthogonal optimization of a water hydraulic pilot-operated pressure-reducing valve

    NASA Astrophysics Data System (ADS)

    Mao, Xuyao; Wu, Chao; Li, Bin; Wu, Di

    2017-12-01

    In order to optimize the comprehensive characteristics of a water hydraulic pilot-operated pressure-reducing valve, a numerical orthogonal experimental design was adopted. Six parameters of the valve, comprising the diameters of the damping plugs, the volume of the spring chamber, the half cone angle of the main spool, the half cone angle of the pilot spool, the mass of the main spool and the diameter of the main spool, were selected as the orthogonal factors, and each factor has five different levels. An index of flowrate stability, pressure stability and pressure overshoot stability (iFPOS) was used to judge the merit of each orthogonal attempt. An embedded orthogonal process emerged, and a final optimal combination of these parameters was obtained after a total of 50 numerical orthogonal experiments. The iFPOS could be brought down to a fairly low value, which meant that the valve could have much better stability. During the optimization, it was also found that the diameters of the damping plugs and the main spool played important roles in the stability characteristics of the valve.

  1. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
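
    A quick way to sanity-check any candidate PI setting of the kind discussed above is to simulate the closed loop and record the worst-case process variable, manipulated variable, and its rate of change against the three operational limits. The sketch below does this for a toy open-loop unstable first-order process (tau*dy/dt = y + K*u); the plant parameters, PI gains, setpoint filter, and limit values are assumptions for illustration and are unrelated to the paper's analytical design relations.

      # Toy verification of a PI design on an open-loop unstable first-order
      # process tau*dy/dt = y + K*u (pole at +1/tau), against the three
      # operational limits discussed in the paper: process variable, manipulated
      # variable, and its rate of change. Process data, PI gains, the setpoint
      # filter, and the limits are illustrative assumptions, not the paper's design.
      import numpy as np

      K, tau = 1.0, 5.0                    # unstable pole at +0.2 rad/s
      Kc, Ti = 4.0, 6.0                    # candidate PI settings (Kc*K > 1 stabilizes)
      tau_r = 3.0                          # setpoint filter to soften the kick
      dt, t_end = 0.01, 40.0
      limits = {"y": 1.5, "u": 3.0, "du": 2.0}

      y = u = u_prev = integ = r = 0.0
      peaks = {"y": 0.0, "u": 0.0, "du": 0.0}

      for _ in np.arange(0.0, t_end, dt):
          r += dt * (1.0 - r) / tau_r          # filtered unit setpoint
          e = r - y
          integ += e * dt
          u = Kc * (e + integ / Ti)
          peaks["du"] = max(peaks["du"], abs(u - u_prev) / dt)
          y += dt * (y + K * u) / tau          # Euler step of the unstable plant
          peaks["y"] = max(peaks["y"], abs(y))
          peaks["u"] = max(peaks["u"], abs(u))
          u_prev = u

      for k in peaks:
          print(f"peak |{k}| = {peaks[k]:.2f}  (assumed limit {limits[k]})")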

  2. Operationally efficient propulsion system study (OEPSS) data book. Volume 6; Space Transfer Propulsion Operational Efficiency Study Task of OEPSS

    NASA Technical Reports Server (NTRS)

    Harmon, Timothy J.

    1992-01-01

    This document is the final report for the Space Transfer Propulsion Operational Efficiency Study Task of the Operationally Efficient Propulsion System Study (OEPSS) conducted by the Rocketdyne Division of Rockwell International. This study task evaluated and identified design concepts and technologies that minimize launch and in-space operations and optimize in-space vehicle propulsion system operability.

  3. Nonlinear Multiobjective MPC-Based Optimal Operation of a High Consistency Refining System in Papermaking

    DOE PAGES

    Li, Mingjie; Zhou, Ping; Wang, Hong; ...

    2017-09-19

    As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, the economic objective, and the specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refining system, and then the Wiener-type model can be obtained by combining the mechanism model of Canadian Standard Freeness and the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that optimizes both the set-point tracking objective of pulp quality and SE consumption simultaneously is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, it is shown that they significantly reduce energy consumption.

  5. Optimization of operation conditions for the startup of aerobic granular sludge reactors biologically removing carbon, nitrogen, and phosphorous.

    PubMed

    Lochmatter, Samuel; Holliger, Christof

    2014-08-01

    The transformation of conventional flocculent sludge to aerobic granular sludge (AGS) biologically removing carbon, nitrogen and phosphorus (COD, N, P) is still a main challenge in startup of AGS sequencing batch reactors (AGS-SBRs). On the one hand a rapid granulation is desired, on the other hand good biological nutrient removal capacities have to be maintained. So far, several operation parameters have been studied separately, which makes it difficult to compare their impacts. We investigated seven operation parameters in parallel by applying a Plackett-Burman experimental design approach with the aim to propose an optimized startup strategy. Five out of the seven tested parameters had a significant impact on the startup duration. The conditions identified to allow a rapid startup of AGS-SBRs with good nutrient removal performances were (i) alternation of high and low dissolved oxygen phases during aeration, (ii) a settling strategy avoiding too high biomass washout during the first weeks of reactor operation, (iii) adaptation of the contaminant load in the early stage of the startup in order to ensure that all soluble COD was consumed before the beginning of the aeration phase, (iv) a temperature of 20 °C, and (v) a neutral pH. Under such conditions, it took less than 30 days to produce granular sludge with high removal performances for COD, N, and P. A control run using this optimized startup strategy produced again AGS with good nutrient removal performances within four weeks and the system was stable during the additional operation period of more than 50 days. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. USASOC Injury Prevention/Performance Optimization Musculoskeletal Screening Initiative

    DTIC Science & Technology

    2011-11-01

    Tactical Human Optimization, Rapid Rehabilitation, and Reconditioning (THOR3) program to identify the... Special Operations Command (USASOC) to support development of USASOC's Tactical Human Optimization, Rapid Rehabilitation, and Reconditioning (THOR3)... biomechanical, musculoskeletal, physiological, tactical, and injury data and refine its current human performance program to address the

  7. Active Learning with Rationales for Identifying Operationally Significant Anomalies in Aviation

    NASA Technical Reports Server (NTRS)

    Sharma, Manali; Das, Kamalika; Bilgic, Mustafa; Matthews, Bryan; Nielsen, David Lynn; Oza, Nikunj C.

    2016-01-01

    A major focus of the commercial aviation community is discovery of unknown safety events in flight operations data. Data-driven unsupervised anomaly detection methods are better at capturing unknown safety events compared to rule-based methods which only look for known violations. However, not all statistical anomalies that are discovered by these unsupervised anomaly detection methods are operationally significant (e.g., represent a safety concern). Subject Matter Experts (SMEs) have to spend significant time reviewing these statistical anomalies individually to identify a few operationally significant ones. In this paper we propose an active learning algorithm that incorporates SME feedback in the form of rationales to build a classifier that can distinguish between uninteresting and operationally significant anomalies. Experimental evaluation on real aviation data shows that our approach improves detection of operationally significant events by as much as 75% compared to the state-of-the-art. The learnt classifier also generalizes well to additional validation data sets.
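
    Setting the rationale mechanism aside, the backbone of such a system is a pool-based active learning loop: train on the few labels available, query the candidate the model is least sure about, have the SME label it, and repeat. The scikit-learn sketch below shows only that generic loop on synthetic, class-imbalanced data; the dataset, model, seeding of the initial labels, and review budget are assumptions, and the paper's rationale feedback is deliberately omitted.

      # Generic pool-based active learning with uncertainty sampling: start from a
      # handful of labels, repeatedly query the most uncertain candidate anomaly,
      # and retrain. The rationale-feedback mechanism of the cited paper is omitted;
      # data, model choice, and budget are assumptions.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                                 random_state=0)  # rare "operationally significant" class
      rng = np.random.default_rng(0)

      # Seed the labeled set with a few examples of each class.
      labeled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:15])
      pool = [i for i in range(len(X)) if i not in labeled]
      budget = 100                                 # number of SME reviews allowed

      clf = LogisticRegression(max_iter=1000)
      for _ in range(budget):
          clf.fit(X[labeled], y[labeled])
          proba = clf.predict_proba(X[pool])[:, 1]
          query = pool[int(np.argmin(np.abs(proba - 0.5)))]   # most uncertain sample
          labeled.append(query)                    # SME supplies the label here
          pool.remove(query)

      clf.fit(X[labeled], y[labeled])              # final refit with all acquired labels
      holdout = rng.choice(pool, size=500, replace=False)
      acc = clf.score(X[holdout], y[holdout])
      print(f"labels used: {len(labeled)}, holdout accuracy: {acc:.3f}")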

  8. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  9. Optimization of operating parameters in polysilicon chemical vapor deposition reactor with response surface methodology

    NASA Astrophysics Data System (ADS)

    An, Li-sha; Liu, Chun-jiao; Liu, Ying-wen

    2018-05-01

    In a polysilicon chemical vapor deposition reactor, the operating parameters interact in complex ways to affect the polysilicon output. Therefore, it is very important to address the coupling of multiple parameters and solve the optimization in a computationally efficient manner. Here, we adopted Response Surface Methodology (RSM) to analyze the complex coupling effects of different operating parameters on the silicon deposition rate (R) and further achieve effective optimization of the silicon CVD system. Based on finite numerical experiments, an accurate RSM regression model is obtained and applied to predict R for different operating parameters, including temperature (T), pressure (P), inlet velocity (V), and inlet mole fraction of H2 (M). The analysis of variance is conducted to describe the rationality of the regression model and examine the statistical significance of each factor. Consequently, the optimum combination of operating parameters for the silicon CVD reactor is: T = 1400 K, P = 3.82 atm, V = 3.41 m/s, M = 0.91. The validation tests and optimum solution show that the results are in good agreement with those from the CFD model and that the deviations of the predicted values are less than 4.19%. This work provides theoretical guidance for operating the polysilicon CVD process.
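
    The RSM step itself reduces to fitting a quadratic polynomial to a designed set of runs and then searching that surrogate for its optimum. The sketch below does this for a two-factor toy in coded units with synthetic responses standing in for the CFD results; the design, the noise, and the location of the optimum are invented, and the study's four-factor problem would simply add columns to the same design matrix.

      # Sketch of the RSM step: fit a full quadratic response surface to a handful
      # of simulated deposition-rate observations and locate the predicted optimum
      # on a grid. The two-factor design and synthetic responses are placeholders
      # for the four-factor CFD experiments of the study.
      import numpy as np

      # Coded factor levels (e.g., temperature and pressure scaled to [-1, 1]).
      x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
      x1, x2 = x1.ravel(), x2.ravel()
      # Synthetic deposition-rate responses with a maximum near (0.5, -0.2).
      rng = np.random.default_rng(2)
      y = 5.0 + 1.0 * x1 - 0.5 * x2 - 1.0 * x1**2 - 1.0 * x2**2 + 0.3 * x1 * x2
      y += 0.05 * rng.standard_normal(y.size)

      # Full quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
      A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
      beta, *_ = np.linalg.lstsq(A, y, rcond=None)
      print("fitted coefficients:", np.round(beta, 2))

      # Predicted optimum by grid search over the coded design space.
      g1, g2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
      G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                           g1.ravel()**2, g2.ravel()**2, g1.ravel() * g2.ravel()])
      pred = G @ beta
      k = int(np.argmax(pred))
      print(f"predicted max deposition rate {pred[k]:.2f} at "
            f"x1 = {g1.ravel()[k]:.2f}, x2 = {g2.ravel()[k]:.2f}")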

  10. Development of a Standardized Cranial Phantom for Training and Optimization of Functional Stereotactic Operations.

    PubMed

    Krüger, Marie T; Coenen, Volker A; Egger, Karl; Shah, Mukesch; Reinacher, Peter C

    2018-06-13

    In recent years, simulations based on phantom models have become increasingly popular in the medical field. In the field of functional and stereotactic neurosurgery, a cranial phantom would be useful to train operative techniques, such as stereo-electroencephalography (SEEG), to establish new methods, and to develop and modify radiological techniques. In this study, we describe the construction of a cranial phantom, show examples of its use in stereotactic and functional neurosurgery, and demonstrate its applicability with different radiological modalities. We prepared a plaster skull filled with agar. A complete operation for deep brain stimulation (DBS) was simulated using directional leads. Moreover, a complete SEEG operation including planning, implantation of the electrodes, and intraoperative and postoperative imaging was simulated. An optimally customized cranial phantom is filled with 10% agar. At 7°C, it can be stored for approximately 4 months. A DBS and an SEEG procedure could be realistically simulated. Lead artifacts can be studied in CT, X-ray, rotational fluoroscopy, and MRI. This cranial phantom is a simple and effective model to simulate functional and stereotactic neurosurgical operations. This might be useful for teaching and training of neurosurgeons, establishing operations in a new center, and for optimization of radiological examinations. © 2018 S. Karger AG, Basel.

  11. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    NASA Astrophysics Data System (ADS)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown on a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown and future actions for application of these methods for system control are addressed.

  12. Physical constraints, fundamental limits, and optimal locus of operating points for an inverted pendulum based actuated dynamic walker.

    PubMed

    Patnaik, Lalit; Umanand, Loganathan

    2015-10-26

    The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity.

  13. Cryogenic Eyesafer Laser Optimization for Use Without Liquid Nitrogen

    DTIC Science & Technology

    2014-02-01

    liquid cryogens. This calls for optimal performance around 125–150 K—high enough for reasonably efficient operation of a Stirling cooler. We...state laser system with an optimum operating temperature somewhat higher—ideally 125–150 K—can be identified, then a Stirling cooler can be used to...needed to optimize laser performance in the desired temperature range. This did not include actual use of Stirling coolers, but rather involved both

  14. Optimization of the operating conditions of gas-turbine power stations considering the effect of equipment deterioration

    NASA Astrophysics Data System (ADS)

    Aminov, R. Z.; Kozhevnikov, A. I.

    2017-10-01

    In recent years in most power systems all over the world, a trend towards the growing nonuniformity of energy consumption and generation schedules has been observed. The increase in the portion of renewable energy sources is one of the important challenges for many countries. The poorly predictable character of such energy sources necessitates a search for practical solutions. Presently, the most efficient method for compensating for nonuniform generation of electric power by renewable energy sources—predominantly wind and solar energy—is generation of power at conventional fossil-fuel-fired power stations. In Russia, this problem is caused by the increasing share, in the generating capacity structure, of nuclear power stations, which are most efficient when operating under base-load conditions. Introduction of hydropower and pumped storage hydroelectric power plants and other energy-storage technologies does not cover the demand for load-following power capacities. Owing to a simple design, low construction costs, and a sufficiently high economic efficiency, gas turbine plants (GTPs) prove to be the most suitable for covering the nonuniform electric-demand schedules. However, when the gas turbines are operated under varying duty conditions, the lifetime of the primary thermostressed components is considerably reduced and, consequently, the repair costs increase. A method is proposed for determination of the total operating costs considering the deterioration of the gas turbine equipment under varying duty and start-stop conditions. A methodology for optimization of the loading modes for the gas turbine equipment is developed. The consideration of the lifetime component allows varying the optimal operating conditions and, in some cases, rejecting short-time stops of the gas turbine plants. The calculations performed in a wide range of varying fuel prices and capital investments per gas turbine equipment unit show that the economic effectiveness can

  15. Optimizing Operational Physical Fitness (Optimisation de L’Aptitude Physique Operationnelle)

    DTIC Science & Technology

    2009-01-01

    RTO Technical Report TR-HFM-080, NATO Research and Technology Organisation, AC/323(HFM-080)TP/200, www.rto.nato.int: Optimizing Operational Physical Fitness (Optimisation de l'aptitude physique opérationnelle). Final Report of Task Group 019.

  16. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using the three members of the Archimedean copulas, based on which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to the case study in Ertan reservoir, China. The results show that the transition probability matrix can be more easily and accurately obtained by the proposed copula function based method than conventional methods based on the observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
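
    The key numerical ingredient in the approach above is turning a fitted copula into a transition probability matrix between inflow classes of adjacent periods. The sketch below does this for the Clayton copula (one member of the Archimedean family the paper draws on) by differencing its conditional CDF across class edges; the copula parameter, the quantile class edges, and the midpoint representation of the source class are assumptions for illustration.

      # Sketch of the copula step: discretize inflow into classes and build the
      # period-to-period transition probability matrix from the conditional CDF of
      # a Clayton copula (one of the Archimedean family used in the paper).
      # Marginals, the copula parameter, and the class edges are assumptions.
      import numpy as np

      theta = 2.0                                  # assumed Clayton dependence parameter

      def clayton_conditional(v, u):
          """P(V <= v | U = u) for the Clayton copula C(u,v) = (u^-t + v^-t - 1)^(-1/t)."""
          return u ** (-theta - 1.0) * (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta - 1.0)

      # Inflow classes expressed as quantiles of the (arbitrary) marginal CDFs.
      edges = np.array([1e-6, 0.25, 0.5, 0.75, 1.0 - 1e-6])   # 4 classes per period
      n = len(edges) - 1
      P = np.zeros((n, n))
      u_mid = 0.5 * (edges[:-1] + edges[1:])       # representative u for each source class

      for i in range(n):
          cdf = clayton_conditional(edges, u_mid[i])
          P[i] = np.diff(cdf)
          P[i] /= P[i].sum()                       # renormalize away discretization error

      np.set_printoptions(precision=3, suppress=True)
      print("transition probability matrix (rows: class in period t):")
      print(P)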

  17. Teaching operating room conflict management to surgeons: clarifying the optimal approach.

    PubMed

    Rogers, David; Lingard, Lorelei; Boehler, Margaret L; Espin, Sherry; Klingensmith, Mary; Mellinger, John D; Schindler, Nancy

    2011-09-01

    Conflict management has been identified as an essential competence for surgeons as they work in operating room (OR) teams; however, the optimal approach is unclear. Social science research offers two alternatives, the first of which recommends that task-related conflict be managed using problem-solving techniques while avoiding relationship conflict. The other approach advocates for the active management of relationship conflict as it almost always accompanies task-related conflict. Clarity about the optimal management strategy can be gained through a better understanding of conflict transformation, or the inter-relationship between conflict types, in this specific setting. The purpose of this study was to evaluate conflict transformation in OR teams in order to clarify the approach most appropriate for an educational conflict management programme for surgeons. A constructivist grounded theory approach was adopted to explore the phenomenon of OR team conflict. Narratives were collected from focus groups of OR nurses and surgeons at five participating centres. A subset of these narratives involved transformation between and within conflict types. This dataset was analysed. The results confirm that misattribution and the use of harsh language cause conflict transformation in OR teams just as they do in stable work teams. Negative emotionality was found to make a substantial contribution to responses to and consequences of conflict, notably in the swiftness with which individuals terminated their working relationships. These findings contribute to a theory of conflict transformation in the OR team. There are a number of behaviours that activate conflict transformation in the OR team and a conflict management education programme should include a description of and alternatives to these behaviours. The types of conflict are tightly interwoven in this setting and thus the most appropriate management strategy is one that assumes that both types of conflict will exist and

  18. Optimization of operation conditions and configurations for solid-propellant ducted rocket combustors

    NASA Astrophysics Data System (ADS)

    Onn, Shing-Chung; Chiang, Hau-Jei; Hwang, Hang-Che; Wei, Jen-Ko; Cherng, Dao-Lien

    1993-06-01

    The dynamic behavior of a 2D turbulent mixing and combustion process has been studied numerically in the main combustion chamber of a solid-propellant ducted rocket (SDR). The mathematical model is based on the Favre-averaged conservation equations developed by Cherng (1990). Combustion efficiency, rather than the specific impulse used in earlier studies, is applied successfully to optimize the effects of two parameters by means of a multiple linear regression model. Specifically, the fuel-air equivalence ratio of the operating conditions and the air inlet location of the configurations for the SDR combustor have been studied. For an equivalence ratio near the stoichiometric condition, specific impulse and combustion efficiency show similar trends in characterizing the reacting flow field in the combustor. For overall fuel-lean operating conditions, combustion efficiency is much more sensitive to changes in air inlet location than specific impulse is, suggesting that combustion efficiency is a better property than specific impulse for representing conditions approaching the flammability limits. In addition, the air inlet location for maximum efficiency generally appears to be downstream of that for the highest specific impulse. The optimal case for the combined effects of the two parameters occurs at a fuel-lean condition, which shows a larger recirculation zone in front, deeper penetration of ram air into the combustor, and a much larger high-temperature zone near the centerline of the combustor exit than the optimal case for an overall equivalence ratio close to stoichiometric.
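
    A rough sketch of the regression-based optimization idea described above, assuming a quadratic response surface in equivalence ratio and inlet location fitted to synthetic placeholder data (not the cited SDR simulation results):

```python
import numpy as np

# Fit eta(phi, x) = b0 + b1*phi + b2*x + b3*phi^2 + b4*x^2 + b5*phi*x and
# locate its maximum on a grid. Data points below are synthetic placeholders.
phi = np.array([0.6, 0.6, 0.8, 0.8, 1.0, 1.0, 0.7, 0.9])    # equivalence ratio
x   = np.array([0.3, 0.6, 0.3, 0.6, 0.3, 0.6, 0.45, 0.45])  # inlet location (normalized)
eta = np.array([0.78, 0.83, 0.82, 0.88, 0.80, 0.84, 0.86, 0.87])  # efficiency

A = np.column_stack([np.ones_like(phi), phi, x, phi**2, x**2, phi * x])
coeffs, *_ = np.linalg.lstsq(A, eta, rcond=None)

pp, xx = np.meshgrid(np.linspace(0.6, 1.0, 81), np.linspace(0.3, 0.6, 61))
grid = np.column_stack([np.ones(pp.size), pp.ravel(), xx.ravel(),
                        pp.ravel()**2, xx.ravel()**2, (pp * xx).ravel()])
eta_hat = grid @ coeffs
best = np.argmax(eta_hat)
print("max predicted efficiency %.3f at phi=%.2f, x=%.2f"
      % (eta_hat[best], pp.ravel()[best], xx.ravel()[best]))
```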

  19. Optimization of automotive Rankine cycle waste heat recovery under various engine operating condition

    NASA Astrophysics Data System (ADS)

    Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi

    2017-02-01

    An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle here is studied as a waste heat recovery system that uses the engine exhaust gases as its heat source. The exhaust gas parameters (temperature, mass flow and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost. The engine simulation model had previously been validated and the Vibe function parameters defined as a function of engine load. The Rankine cycle output power and efficiency were numerically estimated by means of a simulation code in Python(x,y). This code includes a discretized heat exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimum values of working fluid mass flow and evaporation pressure for each heat source condition. Thus, the optimal Rankine cycle performance was obtained over the engine operating map.
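
    A minimal sketch of the pump and expander treatment mentioned above (isentropic efficiencies applied to ideal enthalpy changes); the enthalpy values, mass flow and efficiencies are placeholders, and the paper's discretized heat-exchanger model is not reproduced:

```python
# Specific enthalpies in kJ/kg at the pump and expander terminals (assumed),
# isentropic ("_s") states taken from a hypothetical property evaluation.
h_pump_in, h_pump_out_s = 250.0, 252.0
h_exp_in, h_exp_out_s = 480.0, 430.0
eta_pump, eta_exp = 0.65, 0.70
m_dot = 0.05   # working-fluid mass flow, kg/s (assumed)

w_pump = (h_pump_out_s - h_pump_in) / eta_pump   # actual specific pump work
w_exp = eta_exp * (h_exp_in - h_exp_out_s)       # actual specific expander work
q_in = h_exp_in - (h_pump_in + w_pump)           # specific heat input in the boiler

print("net power = %.2f kW" % (m_dot * (w_exp - w_pump)))
print("cycle efficiency = %.3f" % ((w_exp - w_pump) / q_in))
```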

  20. Smart house-based optimal operation of thermal unit commitment for a smart grid considering transmission constraints

    NASA Astrophysics Data System (ADS)

    Howlader, Harun Or Rashid; Matayoshi, Hidehito; Noorzad, Ahmad Samim; Muarapaz, Cirio Celestino; Senjyu, Tomonobu

    2018-05-01

    This paper presents a smart house-based power system for a thermal unit commitment programme. The proposed power system consists of smart houses, renewable energy plants and conventional thermal units, and transmission constraints are considered. The power generated by a large-capacity renewable energy plant can violate the transmission constraints in the thermal unit commitment programme; therefore, these constraints must be taken into account. This paper focuses on the optimal operation of the thermal units incorporating controllable loads, such as the Electric Vehicles and Heat Pump water heaters of the smart houses. The proposed method is compared with thermal unit operation without controllable loads and with optimal operation that ignores the transmission constraints. Simulation results validate the proposed method.
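
    A toy illustration of a unit commitment with a transmission limit, in the spirit of the formulation described above but not the paper's model; the unit data, net-load profile and line limit are assumed, and PuLP is used only as a convenient MILP front end:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Commit two thermal units over three hours to serve a net load (demand minus
# renewables and controllable loads), with a flow limit on power from g2's bus.
hours = range(3)
net_load = [120.0, 200.0, 150.0]          # MW (assumed)
units = {"g1": dict(pmin=20, pmax=150, cost=30.0, start=200.0),
         "g2": dict(pmin=20, pmax=120, cost=45.0, start=100.0)}
line_limit = 80.0                          # MW limit on g2's export (assumed)

prob = LpProblem("toy_uc", LpMinimize)
u = {(g, t): LpVariable(f"u_{g}_{t}", cat=LpBinary) for g in units for t in hours}
p = {(g, t): LpVariable(f"p_{g}_{t}", lowBound=0) for g in units for t in hours}
s = {(g, t): LpVariable(f"s_{g}_{t}", cat=LpBinary) for g in units for t in hours}

# objective: energy cost plus start-up cost
prob += lpSum(units[g]["cost"] * p[g, t] + units[g]["start"] * s[g, t]
              for g in units for t in hours)
for t in hours:
    prob += lpSum(p[g, t] for g in units) == net_load[t]   # power balance
    prob += p["g2", t] <= line_limit                       # transmission constraint
    for g in units:
        prob += p[g, t] >= units[g]["pmin"] * u[g, t]
        prob += p[g, t] <= units[g]["pmax"] * u[g, t]
        prev = u[g, t - 1] if t > 0 else 0                 # assume all units off at t=0
        prob += s[g, t] >= u[g, t] - prev                  # start-up indicator

prob.solve()
print({(g, t): (int(value(u[g, t])), value(p[g, t])) for g in units for t in hours})
```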

  1. Optimal robust control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, the boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain the optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, the air excess ratio and the stack temperature. The results show that the proposed robust optimal control method can maintain safe operation of the SOFC system at maximum efficiency under load and parameter uncertainty variations.
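
    A minimal particle swarm sketch in the spirit of the optimizer described above (not the paper's two-space PSO); the efficiency surrogate, bounds and PSO parameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def efficiency(x):
    """Placeholder surrogate peaking near FU=0.8, AR=7 (assumed, not an SOFC model)."""
    fu, ar = x[..., 0], x[..., 1]
    return 0.55 - 2.0 * (fu - 0.8)**2 - 0.002 * (ar - 7.0)**2

# search for the operating point (fuel utilization, air excess ratio) in a box
lo, hi = np.array([0.6, 4.0]), np.array([0.95, 10.0])
n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
x = rng.uniform(lo, hi, size=(n, 2))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), efficiency(x)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    val = efficiency(x)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("optimal point (FU, AR):", gbest.round(3), "efficiency:", float(efficiency(gbest)))
```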

  2. Optimizing water purchases for an Environmental Water Account

    NASA Astrophysics Data System (ADS)

    Lund, J. R.; Hollinshead, S. P.

    2005-12-01

    State and federal agencies in California have established an Environmental Water Account (EWA) to buy water to protect endangered fish in the San Francisco Bay/ Sacramento-San Joaquin Delta Estuary. This paper presents a three-stage probabilistic optimization model that identifies least-cost strategies for purchasing water for the EWA given hydrologic, operational, and biological uncertainties. This approach minimizes the expected cost of long-term, spot, and option water purchases to meet uncertain flow dedications for fish. The model prescribes the location, timing, and type of optimal water purchases and can illustrate how least-cost strategies change with hydrologic, operational, biological, and cost inputs. Details of the optimization model's application to California's EWA are provided with a discussion of its utility for strategic planning and policy purposes. Limitations in and sensitivity analysis of the model's representation of EWA operations are discussed, as are operational and research recommendations.
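
    A simplified two-stage sketch of the purchase problem (the paper uses a three-stage probabilistic model); scenario probabilities, demands and prices below are made up for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Decision: long-term purchase x (bought now) and spot purchases y_s in each
# hydrologic scenario s, to meet uncertain fish-flow dedications d_s at
# minimum expected cost.
p = np.array([0.3, 0.5, 0.2])          # scenario probabilities (wet, normal, dry)
d = np.array([50.0, 80.0, 120.0])      # required water per scenario (TAF, assumed)
c_long = 60.0                          # $/AF long-term price (assumed)
c_spot = np.array([40.0, 90.0, 150.0]) # $/AF spot price by scenario (assumed)

# variables: [x, y_wet, y_normal, y_dry]
c = np.concatenate([[c_long], p * c_spot])
# constraints x + y_s >= d_s  rewritten as  -x - y_s <= -d_s
A_ub = np.zeros((3, 4))
A_ub[:, 0] = -1.0
A_ub[np.arange(3), 1 + np.arange(3)] = -1.0
res = linprog(c, A_ub=A_ub, b_ub=-d, bounds=[(0, None)] * 4)
print("long-term purchase:", round(res.x[0], 1),
      "spot purchases by scenario:", res.x[1:].round(1))
```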

  3. Identifying an optimal cutpoint value for the diagnosis of hypertriglyceridemia in the nonfasting state

    PubMed Central

    White, Khendi T.; Moorthy, M.V.; Akinkuolie, Akintunde O.; Demler, Olga; Ridker, Paul M; Cook, Nancy R.; Mora, Samia

    2015-01-01

    Background Nonfasting triglycerides are similar to or superior to fasting triglycerides at predicting cardiovascular events. However, diagnostic cutpoints are based on fasting triglycerides. We examined the optimal cutpoint for increased nonfasting triglycerides. Methods Baseline nonfasting (<8 hours since last meal) samples were obtained from 6,391 participants in the Women’s Health Study, followed prospectively for up to 17 years. The optimal diagnostic threshold for nonfasting triglycerides, determined by logistic regression models using c-statistics and Youden index (sum of sensitivity and specificity minus one), was used to calculate hazard ratios for incident cardiovascular events. Performance was compared to thresholds recommended by the American Heart Association (AHA) and European guidelines. Results The optimal threshold was 175 mg/dL (1.98 mmol/L), corresponding to a c-statistic of 0.656 that was statistically better than the AHA cutpoint of 200 mg/dL (c-statistic of 0.628). For nonfasting triglycerides above and below 175 mg/dL, adjusting for age, hypertension, smoking, hormone use, and menopausal status, the hazard ratio for cardiovascular events was 1.88 (95% CI, 1.52–2.33, P<0.001), and for triglycerides measured at 0–4 and 4–8 hours since last meal, hazard ratios (95% CIs) were 2.05 (1.54–2.74) and 1.68 (1.21–2.32), respectively. Performance of this optimal cutpoint was validated using ten-fold cross-validation and bootstrapping of multivariable models that included standard risk factors plus total and HDL cholesterol, diabetes, body-mass index, and C-reactive protein. Conclusions In this study of middle-aged and older apparently healthy women, we identified a diagnostic threshold for nonfasting hypertriglyceridemia of 175 mg/dL (1.98 mmol/L), with the potential to more accurately identify cases than the currently recommended AHA cutpoint. PMID:26071491
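
    A small sketch of Youden-index cutpoint selection on synthetic data (not the Women's Health Study cohort): choose the triglyceride threshold that maximizes sensitivity + specificity - 1 along the ROC curve.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n = 5000
events = rng.random(n) < 0.05                       # incident CVD indicator (synthetic)
# synthetic nonfasting triglycerides, shifted upward for events
tg = np.where(events, rng.lognormal(5.2, 0.45, n), rng.lognormal(4.9, 0.45, n))

fpr, tpr, thresholds = roc_curve(events, tg)
youden = tpr - fpr
best = np.argmax(youden)
print("optimal cutpoint: %.0f mg/dL (Youden J = %.3f)" % (thresholds[best], youden[best]))
```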

  4. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

    Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimizing reservoir operation. This approach addresses how to better reconcile riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management over the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that could meet both human and ecosystem needs. The wide spread of Pareto-front (optimal) solutions makes this methodology attractive to water resources managers, allowing decision makers to easily determine the best compromise between reservoir operational strategies for human and ecosystem needs.
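
    A small sketch of extracting the Pareto front from candidate operating strategies; the objective values below are synthetic, not outputs of the hybrid ANN or NSGA-II:

```python
import numpy as np

rng = np.random.default_rng(2)
diversity = rng.uniform(1.0, 3.0, 200)       # fish diversity index (to maximize)
satisfaction = rng.uniform(0.5, 1.0, 200)    # water-supply satisfaction (to maximize)
objs = np.column_stack([diversity, satisfaction])

def pareto_front(points):
    """Boolean mask of non-dominated points (all objectives maximized)."""
    n = len(points)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if mask[i]:
            dominated = (np.all(points <= points[i], axis=1)
                         & np.any(points < points[i], axis=1))
            mask[dominated] = False
    return mask

front = objs[pareto_front(objs)]
print(front[np.argsort(front[:, 0])])   # the trade-off curve, sorted by diversity
```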

  5. Research on Operation Strategy for Bundled Wind-thermal Generation Power Systems Based on Two-Stage Optimization Model

    NASA Astrophysics Data System (ADS)

    Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu

    2017-05-01

    Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means of improving the wind power accommodation rate and implementing "clean alternatives" on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking the short-term wind speed forecasting results on the generation side and the load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of the BWTGS as the objective function, with supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operating cost of the BWTGS as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and a numerical test verifies the effectiveness of the proposed strategy.

  6. A novel membrane-based process to isolate peroxidase from horseradish roots: optimization of operating parameters.

    PubMed

    Liu, Jianguo; Yang, Bo; Chen, Changzhen

    2013-02-01

    The optimization of operating parameters for the isolation of peroxidase from horseradish (Armoracia rusticana) roots with ultrafiltration (UF) technology was systemically studied. The effects of UF operating conditions on the transmission of proteins were quantified using the parameter scanning UF. These conditions included solution pH, ionic strength, stirring speed and permeate flux. Under optimized conditions, the purity of horseradish peroxidase (HRP) obtained was greater than 84 % after a two-stage UF process and the recovery of HRP from the feedstock was close to 90 %. The resulting peroxidase product was then analysed by isoelectric focusing, SDS-PAGE and circular dichroism, to confirm its isoelectric point, molecular weight and molecular secondary structure. The effects of calcium ion on HRP specific activities were also experimentally determined.

  7. Study on the water resources optimal operation based on riverbed wind erosion control in West Liaohe River plain

    NASA Astrophysics Data System (ADS)

    Wanguang, Sun; Chengzhen, Li; Baoshan, Fan

    2018-06-01

    Rivers in the West Liaohe River plain frequently dry up, and the bare riverbeds form belts of fine sand. These sand belts, which raise heavy dust on windy days, place considerable stress on the local environment as the riverbeds are eroded by wind. The optimal operation of water resources is therefore one of the most important methods for preventing wind erosion of riverbeds. In this paper, an optimal operation model for water resources based on riverbed wind erosion control is established, comprising an objective function, constraints, and a solution method. The objective function considers factors including the water volume diverted into reservoirs, river length and a lower threshold of flow rate. On the basis of ensuring the water requirement of each reservoir, destruction of riverbed vegetation by frequent river flow is avoided. A multi-core parallel solution method for optimal water resources operation in the West Liaohe River plain is proposed, in which the optimal solution is found by the DPSA method under the POA framework and the parallel computing program is designed in Fork/Join mode. Based on the optimal operation results, the basic rules of water resources operation in the West Liaohe River plain are summarized. The calculation results show that, while meeting the water volume requirement of every reservoir, the frequency of flow in the river reach from Taihekou to the Talagan Water Diversion Project on the Xinkai River is effectively reduced. The speedup and parallel efficiency of the parallel algorithm are 1.51 and 0.76, respectively, and the computing time is significantly decreased. The research results presented in this paper can provide technical support for the prevention and control of riverbed wind erosion in the West Liaohe River plain.

  8. Optimization of the Nano-Dust Analyzer (NDA) for operation under solar UV illumination

    NASA Astrophysics Data System (ADS)

    O'Brien, L.; Grün, E.; Sternovsky, Z.

    2015-12-01

    The performance of the Nano-Dust Analyzer (NDA) instrument is analyzed for pointing close to the Sun, finding the optimal field-of-view (FOV), arrangement of internal baffles and measurement requirements. The laboratory version of the NDA instrument was recently developed (O'Brien et al., 2014) for the detection and elemental composition analysis of nano-dust particles. These particles are generated near the Sun by the collisional breakup of interplanetary dust particles (IDP) and delivered to Earth's orbit through interaction with the magnetic field of the expanding solar wind plasma. NDA operates on the basis of impact ionization of the particles, collecting the generated ions in a time-of-flight fashion. The challenge in the measurement is that nano-dust particles arrive from a direction close to that of the Sun, and thus the instrument is exposed to intense ultraviolet (UV) radiation. The optical ray-tracing analysis performed shows that it is possible to suppress the number of UV photons scattering into NDA's ion detector to levels that allow both high signal-to-noise measurements and long-term instrument operation. The analysis results show that by avoiding direct illumination of the target, the photon flux reaching the detector is reduced by a factor of about 10^3. Furthermore, by avoiding the target and also implementing a low-reflectance coating, as well as an optimized instrument geometry consisting of an internal baffle system and a conical detector housing, the photon flux can be reduced by a factor of 10^6, bringing it well below the operation requirement. The instrument's FOV is optimized for the detection of nano-dust particles while excluding the Sun. With the Sun in the FOV, the instrument can operate with reduced sensitivity and for a limited duration. The NDA instrument is suitable for future space missions to provide the unambiguous detection of nano-dust particles, to understand the conditions in the inner heliosphere and its temporal

  9. Optimal operation of turbo blowers serially connected using inlet vanes

    NASA Astrophysics Data System (ADS)

    Jang, Choon-Man

    2011-03-01

    The optimal operation of turbo blowers equipped with inlet vanes has been studied to understand their operating performance. To analyze the three-dimensional flow field in the serially connected turbo blowers, the general-purpose analysis code CFX is used in the present work, with the SST turbulence model employed to estimate the eddy viscosity. The numerical analysis shows that the flow rates of the turbo blowers can be controlled effectively at vane angles between 90 degrees (fully open) and 60 degrees, because pressure loss increases rapidly below a vane angle of 60 degrees. Efficiency also remains nearly constant between vane angles of 90 and 60 degrees. It is noted that the distorted inlet velocity generated at small vane angles degrades blower performance owing to local leading-edge separation and the resulting non-uniform blade loading.

  10. Design principles and operating principles: the yin and yang of optimal functioning.

    PubMed

    Voit, Eberhard O

    2003-03-01

    Metabolic engineering has as a goal the improvement of yield of desired products from microorganisms and cell lines. This goal has traditionally been approached with experimental biotechnological methods, but it is becoming increasingly popular to precede the experimental phase by a mathematical modeling step that allows objective pre-screening of possible improvement strategies. The models are either linear and represent the stoichiometry and flux distribution in pathways or they are non-linear and account for the full kinetic behavior of the pathway, which is often significantly affected by regulatory signals. Linear flux analysis is simpler and requires less input information than a full kinetic analysis, and the question arises whether the consideration of non-linearities is really necessary for devising optimal strategies for yield improvements. The article analyzes this question with a generic, representative pathway. It shows that flux split ratios, which are the key criterion for linear flux analysis, are essentially sufficient for unregulated, but not for regulated branch points. The interrelationships between regulatory design on one hand and optimal patterns of operation on the other suggest the investigation of operating principles that complement design principles, like a user's manual complements the hardwiring of electronic equipment.

  11. Long-term energy capture and the effects of optimizing wind turbine operating strategies

    NASA Technical Reports Server (NTRS)

    Miller, A. H.; Formica, W. J.

    1982-01-01

    Methods of increasing energy capture without affecting the turbine design were investigated. The emphasis was on optimizing the wind turbine operating strategy. The operating strategy embodies the startup and shutdown algorithm as well as the algorithm for determining when to yaw (rotate) the axis of the turbine more directly into the wind. Using data collected at a number of sites, the time-dependent simulation of a MOD-2 wind turbine using various, site-dependent operating strategies provided evidence that site-specific fine tuning can produce significant increases in long-term energy capture as well as reduce the number of start-stop cycles and yawing maneuvers, which may result in reduced fatigue and subsequent maintenance.

  12. Derivation of Optimal Operating Rules for Large-scale Reservoir Systems Considering Multiple Trade-off

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.

    2017-12-01

    Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently due to directed searching by the weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II could locate the non-dominated solutions faster and provide a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.

  13. Optimal design and operation of solid oxide fuel cell systems for small-scale stationary applications

    NASA Astrophysics Data System (ADS)

    Braun, Robert Joseph

    The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below those of near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output and the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in the process design at the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5--8 years are feasible for 1--2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell

  14. A method to identify the main mode of machine tool under operating conditions

    NASA Astrophysics Data System (ADS)

    Wang, Daming; Pan, Yabing

    2017-04-01

    The identification of modal parameters under experimental conditions is the most common procedure when addressing machine tool structure vibration. However, the influence of each mode on machine tool vibration in real working conditions remains unknown. In fact, the contributions the individual modes make to machine tool vibration during the machining process differ. In this article, an active excitation modal analysis is applied to identify the modal parameters under operational conditions, and the Operating Deflection Shapes (ODS) at frequencies of high vibration levels that affect machining quality in real working conditions are obtained. The ODS is then decomposed over the mode shapes identified in operational conditions, so the contribution each mode makes to machine tool vibration during the machining process is obtained from the decomposition coefficients. From these steps, the main modes that affect the machine tool most significantly in working conditions can be identified. The method's effectiveness was also verified experimentally.
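
    A minimal sketch of the decomposition step described above: the ODS is expressed as a least-squares combination of identified mode shapes and each mode's contribution is read from the coefficients; the mode shapes and ODS below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_modes = 12, 3
Phi = rng.standard_normal((n_dof, n_modes))           # columns = identified mode shapes
true_coeffs = np.array([2.0, 0.3, 1.1])
ods = Phi @ true_coeffs + 0.05 * rng.standard_normal(n_dof)   # "measured" ODS with noise

coeffs, *_ = np.linalg.lstsq(Phi, ods, rcond=None)    # decomposition coefficients
contribution = np.abs(coeffs) / np.abs(coeffs).sum()
for k, (c, share) in enumerate(zip(coeffs, contribution), start=1):
    print(f"mode {k}: coefficient {c:+.3f}, relative contribution {share:.1%}")
```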

  15. Optimal Operation of Variable Speed Pumping System in China's Eastern Route Project of S-to-N Water Diversion Project

    NASA Astrophysics Data System (ADS)

    Cheng, Jilin; Zhang, Lihua; Zhang, Rentian; Gong, Yi; Zhu, Honggeng; Deng, Dongsheng; Feng, Xuesong; Qiu, Jinxian

    2010-06-01

    A dynamic planning model for optimizing the operation of a variable speed pumping system, aiming at minimum power consumption, was proposed to achieve economic operation. The No. 4 Jiangdu Pumping Station, a source pumping station in China's Eastern Route of the South-to-North Water Diversion Project, is taken as the study case. Since the sump water level of the Jiangdu Pumping Station is affected by the tide of the Yangtze River, the daily-average head of the pumping system varies over the year from 3.8 m to 7.8 m, and the tide level difference within one day reaches up to 1.2 m. Comparisons of operating electricity cost between optimized variable speed and fixed speed operation of the pumping system were made. When the full-load operation mode is adopted, whether or not peak-valley electricity prices are considered, the benefits of variable speed operation cannot compensate for the energy consumption of the VFD. When the pumping system operates at part load and peak-valley electricity prices are considered, the pumping system should cease operation or lower its rotational speed during peak-load hours, when electricity prices are much higher, and conversely raise its rotational speed during valley-load hours to pump more water. The computed results show that if the pumping system operates at 80% or 60% load, the energy cost of pumping a specified volume of water is reduced by 14.01% and 26.69% on average by means of optimal variable speed operation, and the investment in the VFD is paid back in 2 or 3 years. However, if the pumping system operates at 80% or 60% load and the energy cost is calculated at a flat (non-peak-valley) electricity price, the payback period lengthens to as much as 18 years. In China's S-to-N Water Diversion Project, when market operation and peak-valley electricity prices are taken into effect to supply water and regulate water levels in regulation reservoirs such as Hongzehu Lake, Luomahu Lake, etc., the economic operation of water-diversion pumping stations

  16. Optimization of Large-Scale Daily Hydrothermal System Operations With Multiple Objectives

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cheng, Chuntian; Shen, Jianjian; Cao, Rui; Yeh, William W.-G.

    2018-04-01

    This paper proposes a practical procedure for optimizing the daily operation of a large-scale hydrothermal system. The overall procedure optimizes a monthly model over a period of 1 year and a daily model over a period of up to 1 month. The outputs from the monthly model are used as inputs and boundary conditions for the daily model. The models iterate and update when new information becomes available. The monthly hydrothermal model uses nonlinear programing (NLP) to minimize fuel costs, while maximizing hydropower production. The daily model consists of a hydro model, a thermal model, and a combined hydrothermal model. The hydro model and thermal model generate the initial feasible solutions for the hydrothermal model. The two competing objectives considered in the daily hydrothermal model are minimizing fuel costs and minimizing thermal emissions. We use the constraint method to develop the trade-off curve (Pareto front) between these two objectives. We apply the proposed methodology on the Yunnan hydrothermal system in China. The system consists of 163 individual hydropower plants with an installed capacity of 48,477 MW and 11 individual thermal plants with an installed capacity of 12,400 MW. We use historical operational records to verify the correctness of the model and to test the robustness of the methodology. The results demonstrate the practicability and validity of the proposed procedure.

  17. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.

  18. Prediction of chronic post-operative pain: pre-operative DNIC testing identifies patients at risk.

    PubMed

    Yarnitsky, David; Crispel, Yonathan; Eisenberg, Elon; Granovsky, Yelena; Ben-Nun, Alon; Sprecher, Elliot; Best, Lael-Anson; Granot, Michal

    2008-08-15

    Surgical and medical procedures, mainly those associated with nerve injuries, may lead to chronic persistent pain. Currently, one cannot predict which patients undergoing such procedures are 'at risk' to develop chronic pain. We hypothesized that the endogenous analgesia system is key to determining the pattern of handling noxious events, and therefore testing diffuse noxious inhibitory control (DNIC) will predict susceptibility to develop chronic post-thoracotomy pain (CPTP). Pre-operative psychophysical tests, including DNIC assessment (pain reduction during exposure to another noxious stimulus at remote body area), were conducted in 62 patients, who were followed 29.0+/-16.9 weeks after thoracotomy. Logistic regression revealed that pre-operatively assessed DNIC efficiency and acute post-operative pain intensity were two independent predictors for CPTP. Efficient DNIC predicted lower risk of CPTP, with OR 0.52 (0.33-0.77 95% CI, p=0.0024), i.e., a 10-point numerical pain scale (NPS) reduction halves the chance to develop chronic pain. Higher acute pain intensity indicated OR of 1.80 (1.28-2.77, p=0.0024) predicting nearly a double chance to develop chronic pain for each 10-point increase. The other psychophysical measures, pain thresholds and supra-threshold pain magnitudes, did not predict CPTP. For prediction of acute post-operative pain intensity, DNIC efficiency was not found significant. Effectiveness of the endogenous analgesia system obtained at a pain-free state, therefore, seems to reflect the individual's ability to tackle noxious events, identifying patients 'at risk' to develop post-intervention chronic pain. Applying this diagnostic approach before procedures that might generate pain may allow individually tailored pain prevention and management, which may substantially reduce suffering.

  19. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. The method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.

  20. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    NASA Astrophysics Data System (ADS)

    Deufel, Christopher L.; Furutani, Keith M.

    2014-02-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.
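
    A sketch of what a simple algebraic cross-check might look like, assuming a bare inverse-square dose kernel (not TG-43 and not the cited paper's variance-based method): solve for nonnegative dwell times that best reproduce a uniform prescription at target points.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
dwell_xy = np.column_stack([np.linspace(-2.0, 2.0, 8), np.zeros(8)])         # cm, along a catheter
target_xy = np.column_stack([np.linspace(-2.0, 2.0, 20), np.full(20, 1.0)])  # 1 cm lateral

# A[i, j] = dose rate at target i per unit dwell time at position j
# (placeholder inverse-square kernel in arbitrary units)
r2 = ((target_xy[:, None, :] - dwell_xy[None, :, :]) ** 2).sum(axis=2)
A = 1.0 / np.maximum(r2, 0.01)

prescription = np.full(len(target_xy), 100.0)   # desired dose at each target point
t, residual = nnls(A, prescription)             # nonnegative dwell times
print("dwell times:", t.round(2))
print("dose range at targets: %.1f to %.1f" % ((A @ t).min(), (A @ t).max()))
```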

  1. Optimization of shared autonomy vehicle control architectures for swarm operations.

    PubMed

    Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R

    2010-08-01

    The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations.

  2. Better Redd than Dead: Optimizing Reservoir Operations for Wild Fish Survival During Drought

    NASA Astrophysics Data System (ADS)

    Adams, L. E.; Lund, J. R.; Quiñones, R.

    2014-12-01

    Extreme droughts are difficult to predict and may incur large economic and ecological costs. Dam operations in drought usually consider minimizing economic costs. However, dam operations also offer an opportunity to increase wild fish survival under difficult conditions. Here, we develop a probabilistic optimization approach to developing reservoir release schedules to maximize fish survival in regulated rivers. A case study applies the approach to wild Fall-run Chinook Salmon below Folsom Dam on California's American River. Our results indicate that releasing more water early in the drought will, on average, save more wild fish over the long term.

  3. Completion of potential conflicts of interest through optimization of Rukoh reservoir operation in Pidie District, Aceh Province, Indonesia

    NASA Astrophysics Data System (ADS)

    Azmeri, Hadihardaja, Iwan K.; Shaskia, Nina; Admaja, Kamal Surya

    2017-11-01

    Rukoh Reservoir was planned to be built in the Krueng Rukoh watershed, with suppletion from the Krueng Tiro River. The operating system of Rukoh Reservoir as a multipurpose reservoir raised a potential conflict of interest between raw water and irrigation water. In this study, the operating system of Rukoh Reservoir was designed to supply raw water to Titeu Sub-District and to cover the water shortage in the Baro Irrigation Area that cannot be served by the Keumala Weir. The reservoir operating system should be planned optimally so that the utilization of water accords with service area demands. The reservoir operation method was analyzed using an optimization technique with nonlinear programming. The optimization of reservoir operation is intended to minimize potential conflicts of interest in the operation. The suppletion discharge from the Krueng Tiro River amounted to 46.62%, calculated based on the ratio of the Baro and Tiro irrigation areas. However, during dry seasons, water demands could not be fully met, so there was a shortage of water. By considering rules to minimize potential conflicts of interest between raw water and irrigation water, suppletion from Krueng Tiro amounting to 52.30% would be required. The increased suppletion volume could minimize conflicts of interest, producing 100% reservoir reliability for raw water and irrigation demands. Rukoh Reservoir can serve the raw water demands of Titeu Sub-District and the irrigation demands of the Baro irrigation area, which covers 6,047 hectares. The reservoir operation guidelines can specify reservoir water releases to balance the demands and the target storage.

  4. Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology

    NASA Astrophysics Data System (ADS)

    Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu

    2013-08-01

    From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are attracting attention in distribution systems. Additionally, all-electric apartment houses and residences such as DC smart houses have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, supply-demand balancing of the power system becomes problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for the optimal operation of a smart grid that minimizes the power flow fluctuations at the interconnection point. To achieve the proposed optimal operation, distributed controllable loads such as batteries and heat pumps are used. By minimizing the interconnection point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, a heat pump, a battery, a solar collector, and loads. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.

  5. Generalized filtering of laser fields in optimal control theory: application to symmetry filtering of quantum gate operations

    NASA Astrophysics Data System (ADS)

    Schröder, Markus; Brown, Alex

    2009-10-01

    We present a modified version of a previously published algorithm (Gollub et al 2008 Phys. Rev. Lett.101 073002) for obtaining an optimized laser field with more general restrictions on the search space of the optimal field. The modification leads to enforcement of the constraints on the optimal field while maintaining good convergence behaviour in most cases. We demonstrate the general applicability of the algorithm by imposing constraints on the temporal symmetry of the optimal fields. The temporal symmetry is used to reduce the number of transitions that have to be optimized for quantum gate operations that involve inversion (NOT gate) or partial inversion (Hadamard gate) of the qubits in a three-dimensional model of ammonia.

  6. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ying; Zhou, Zhi; Botterud, Audun

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method not only helps to balance the economics and reliability of power system scheduling, but also helps to stabilize energy prices in electricity market operation.

  7. Application of Decision Tree to Obtain Optimal Operation Rules for Reservoir Flood Control Considering Sediment Desilting-Case Study of Tseng Wen Reservoir

    NASA Astrophysics Data System (ADS)

    ShiouWei, L.

    2014-12-01

    Reservoirs are the most important water resources facilities in Taiwan. However, due to the steep slopes and fragile geological conditions in the mountain areas, storm events usually cause serious debris flows and floods, and the floods then flush large amounts of sediment into the reservoirs. The sedimentation caused by floods has a great impact on reservoir life. Hence, how to operate a reservoir during flood events to increase the efficiency of sediment desilting, without risking reservoir safety or impacting subsequent water supply, is a crucial issue in Taiwan. This study therefore developed a novel optimization planning model for reservoir flood operation considering flood control and sediment desilting, and proposed easy-to-use operating rules represented by decision trees. The decision tree rules consider flood mitigation, water supply and sediment desilting. The optimal planning model computes the optimal reservoir release for each flood event that minimizes the water supply impact and maximizes sediment desilting without risking reservoir safety. Besides the optimal flood operation planning model, this study also proposed decision-tree-based flood operating rules trained on the multiple optimal reservoir releases for synthetic flood scenarios. The synthetic flood scenarios consist of various synthetic storm events, initial reservoir storages and target storages at the end of the flood operation. Comparing the results operated by the decision tree operating rules (DTOR) with those of historical operation, for Typhoon Krosa in 2007 the DTOR removed 15.4% more sediment than the historical operation, with reservoir storage only 8.38×10^6 m^3 less than that of the historical operation. For Typhoon Jangmi in 2008, the DTOR removed 24.4% more sediment than the historical operation, with reservoir storage only 7.58×10^6 m^3 less than that of the historical operation. The results show that the proposed DTOR model can increase the sediment desilting efficiency and extend the
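
    A sketch of distilling optimized releases into decision-tree rules, assuming synthetic training data rather than the Tseng Wen optimization results: fit a regression tree that maps (inflow, current storage, target end-of-event storage) to the optimized release and print the resulting rules.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(5)
n = 500
inflow = rng.uniform(100, 3000, n)      # m^3/s (synthetic)
storage = rng.uniform(0.3, 1.0, n)      # fraction of capacity (synthetic)
target = rng.uniform(0.5, 0.9, n)       # target end-of-event storage fraction
# placeholder "optimal" release: pass more water when full and inflow is high
release = 0.6 * inflow * storage + 800 * np.maximum(storage - target, 0)

X = np.column_stack([inflow, storage, target])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, release)
print(export_text(tree, feature_names=["inflow", "storage", "target_storage"]))
```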

  8. Dynamic emulation modelling for the optimal operation of water systems: an overview

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Giuliani, M.

    2014-12-01

    Despite the sustained increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from a high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve highly resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real-world numerical experiences will be used to support the discussion, ranging from multi-outlet water quality control in water reservoirs, through erosion/sedimentation rebalancing in the operation of run-of-river power plants, to salinity control in lakes and reservoirs.

  9. Application of Spatial Neural Network Model for Optimal Operation of Urban Drainage System

    NASA Astrophysics Data System (ADS)

    KIM, B. J.; Lee, J. Y.; KIM, H. I.; Son, A. L.; Han, K. Y.

    2017-12-01

    The importance of real-time operation of drainage pumps and inundation warning systems has recently increased in order to cope with runoff from high-intensity precipitation, such as localized heavy rain, that occurs frequently and suddenly. However, existing drainage pump station operation has been decided according to the judgment of the operator based on stage, because the exact time at which peak discharge will occur at the pump station cannot be anticipated; as a result, the scale of pump stations has been overestimated. Although quick and accurate inundation analysis of downtown areas is necessary because of the huge property damage caused by floods and typhoons, previous studies risked producing results that differ from reality owing to the diffusion of flow affected by buildings and roads. The purpose of this study is to develop a data-driven model for the real-time operation of drainage pump stations and two-dimensional inundation analysis that improves on the problems of existing hydrological models. A neuro-fuzzy system for real-time stage prediction was developed by estimating the type and number of membership functions. Based on the forecasted stage, when the pumps should begin to work and how much water to pump were decided using a penalizing genetic algorithm. Through the methodologies suggested in this study, it is practicable to forecast stage, optimize pump operation and simulate inundation in real time. This study can greatly contribute to the establishment of disaster information maps that prevent and mitigate inundation in urban drainage areas. The applicability of the developed model was verified for the five drainage pump stations in the Mapo drainage area; the derived operating rules are considered capable of effectively managing urban drainage facilities. Keywords: Urban flooding; Geo-ANFIS method; Optimal operation; Drainage system. Acknowledgement: This research was supported by a

  10. Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques

    DOEpatents

    Wroblewski, David [Mentor, OH]; Katrompas, Alexander M [Concord, OH]; Parikh, Neel J [Richmond Heights, OH]

    2009-09-01

    A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.

  11. Optimal laser wavelength for efficient laser power converter operation over temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Höhn, O., E-mail: oliver.hoehn@ise.fraunhofer.de; Walker, A. W.; Bett, A. W.

    2016-06-13

    A temperature dependent modeling study is conducted on a GaAs laser power converter to identify the optimal incident laser wavelength for optical power transmission. Furthermore, the respective temperature dependent maximal conversion efficiencies in the radiative limit as well as in a practically achievable limit are presented. The model is based on the transfer matrix method coupled to a two-diode model, and is calibrated to experimental data of a GaAs photovoltaic device over laser irradiance and temperature. Since the laser wavelength does not strongly influence the open circuit voltage of the laser power converter, the optimal laser wavelength is determined to be in the range where the external quantum efficiency is maximal, but weighted by the photon flux of the laser.

  12. Optimation of Operation System Integration between Main and Feeder Public Transport (Case Study: Trans Jakarta-Kopaja Bus Services)

    NASA Astrophysics Data System (ADS)

    Miharja, M.; Priadi, Y. N.

    2018-05-01

    Promoting better public transport is a key strategy for coping with urban transport problems, which are mostly caused by heavy private vehicle usage. Better public transport service quality depends not only on a single mode but also on service integration between modes. Fragmented inter-modal service leads to longer trip chains and longer average travel times, which make public transport unable to compete with private vehicles. This paper examines the optimization of operation system integration between Trans Jakarta Bus as the main public transport mode and Kopaja Bus as the feeder public transport service in Jakarta. Using a scoring-interview method combined with standard parameters for operation system integration, this paper identifies the key factors that determine the success of integrating the operation systems of the two services. The study found that some key integration parameters, such as the cancellation of the "system setoran", passengers boarding and alighting at official stop points, and systematic payment, contribute positively to better service integration. However, some parameters, such as the fine system, time and transfer point reliability, and information system reliability, still need improvement. These findings are very useful for the authority in setting the right strategy to improve operation system integration between the Trans Jakarta and Kopaja Bus services.

  13. Identifying socioeconomic, epidemiological and operational scenarios for tuberculosis control in Brazil: an ecological study.

    PubMed

    Pelissari, Daniele Maria; Rocha, Marli Souza; Bartholomay, Patricia; Sanchez, Mauro Niskier; Duarte, Elisabeth Carmen; Arakaki-Sanchez, Denise; Dantas, Cíntia Oliveira; Jacobs, Marina Gasino; Andrade, Kleydson Bonfim; Codenotti, Stefano Barbosa; Andrade, Elaine Silva Nascimento; Araújo, Wildo Navegantes de; Costa, Fernanda Dockhorn; Ramalho, Walter Massa; Diaz-Quijano, Fredi Alexander

    2018-06-06

    To identify scenarios based on socioeconomic, epidemiological and operational healthcare factors associated with tuberculosis incidence in Brazil. Ecological study. The study was based on new patients with tuberculosis and epidemiological/operational variables of the disease from the Brazilian National Information System for Notifiable Diseases and the Mortality Information System. We also analysed socioeconomic and demographic variables. The units of analysis were the Brazilian municipalities, which in 2015 numbered 5570; 5 were excluded due to the absence of socioeconomic information. Tuberculosis incidence rate in 2015. We evaluated as independent variables the socioeconomic (2010), epidemiological and operational healthcare indicators of tuberculosis (2014 or 2015) using negative binomial regression. Municipalities were clustered by the k-means method considering the variables identified in multiple regression models. We identified two clusters according to socioeconomic variables associated with the tuberculosis incidence rate (unemployment rate and household crowding): a higher socioeconomic scenario (n=3482 municipalities) with a mean tuberculosis incidence rate of 16.3/100 000 population and a lower socioeconomic scenario (2083 municipalities) with a mean tuberculosis incidence rate of 22.1/100 000 population. In a second clustering stage, we defined four subgroups in each of the socioeconomic scenarios using epidemiological and operational variables such as tuberculosis mortality rate, AIDS case detection rate and proportion of vulnerable population among patients with tuberculosis. Some of the subscenarios identified were characterised by fragility in their information systems, while others were characterised by the concentration of tuberculosis cases in key populations. Clustering municipalities in scenarios allowed us to classify them according to the socioeconomic, epidemiological and operational variables associated with tuberculosis
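    To illustrate the clustering step described above (not the authors' exact procedure or data), the following sketch groups a handful of invented municipality records by two assumed socioeconomic indicators using k-means from scikit-learn.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # columns: unemployment rate [%] and household crowding [persons/room], invented values
        municipalities = np.array([[6.2, 0.9], [12.5, 1.6], [5.1, 0.8],
                                   [14.0, 1.8], [7.3, 1.0], [11.8, 1.5]])

        scaled = StandardScaler().fit_transform(municipalities)
        scenario = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
        print("socioeconomic scenario per municipality:", scenario)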

  14. Mathematical model for the analysis of structure and optimal operational parameters of a solid oxide fuel cell generator

    NASA Astrophysics Data System (ADS)

    Coralli, Alberto; Villela de Miranda, Hugo; Espiúca Monteiro, Carlos Felipe; Resende da Silva, José Francisco; Valadão de Miranda, Paulo Emílio

    2014-12-01

    Solid oxide fuel cells are globally recognized as a very promising technology for highly efficient electricity generation with a low environmental impact. This technology can be advantageously implemented in many situations in Brazil and is well suited to the use of ethanol as a primary energy source, an important feature given the highly developed Brazilian ethanol industry. In this perspective, a simplified mathematical model of a fuel cell and its balance of plant is developed in order to identify the optimal system structure and the most convenient values of the operational parameters, with the aim of maximizing the global electric efficiency. In this way, the best operational configuration for the desired application is identified, namely distributed generation in the concession area of the electricity distribution company Elektro. The data for this configuration are required for the continuation of the research project, i.e. the development of a prototype, a cost analysis of the developed system and a detailed perspective of the market opportunities in Brazil.

  15. Performance Comparison of Optimized Designs of Francis Turbines Exposed to Sediment Erosion in various Operating Conditions

    NASA Astrophysics Data System (ADS)

    Shrestha, K. P.; Chitrakar, S.; Thapa, B.; Dahlhaug, O. G.

    2018-06-01

    Erosion of hydro turbines depends mostly on impingement velocity, angle of impact, concentration, shape, size and distribution of the erodent particles, and the substrate material. In Francis turbines, sediment particles tend to cause more erosion in off-design conditions than at the best efficiency point. Previous studies focused on optimized runner blade designs to reduce erosion at the design flow; however, the effect of the design change on other operating conditions was not studied. This paper demonstrates the performance of an optimized Francis turbine exposed to sediment erosion in various operating conditions. A comparative study has been carried out among five different runner shapes and different sets of guide vane and stay vane angles. The effect of erosion is studied in terms of the average erosion density rate on the optimized Francis runner design, using a Lagrangian particle tracking method in CFD analysis. The numerical sensitivity of the results is investigated by comparing two turbulence models, and the numerical results are validated against velocity measurements carried out in the actual turbine. Results show that runner blades are susceptible to more erosion at part-load conditions compared to the BEP, whereas for the guide vanes more erosion occurs at full-load conditions. Of the five shapes compared, Shape 5 provides an optimum combination of efficiency and erosion over the studied operating conditions.

  16. Optimized operation of dielectric laser accelerators: Single bunch

    NASA Astrophysics Data System (ADS)

    Hanuka, Adi; Schächter, Levi

    2018-05-01

    We introduce a general approach to determine the optimal charge, efficiency and gradient for laser-driven accelerators in a self-consistent way. We propose a way to enhance the operational gradient of dielectric laser accelerators by leveraging the beam-loading effect. While the latter may be detrimental from the perspective of the effective gradient experienced by the particles, it can be beneficial because the effective field experienced by the accelerating structure is weaker. As a result, the constraint imposed by the damage threshold fluence is accordingly weakened, and our self-consistent approach predicts permissible gradients of ~10 GV/m, one order of magnitude higher than previously reported experimental results obtained with an unbunched pulse of electrons. Our approach predicts that maximum efficiency occurs at higher gradients compared with a scenario in which the beam-loading effect on the material is ignored. In any case, maximum gradient does not occur under the same conditions as maximum efficiency, so a trade-off set of parameters is suggested.

  17. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input for a desired output. An operator or higher-level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining its current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in the input and output spaces, allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand on the operator from controlling multiple inputs to controlling a single set point, with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
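    As a small, hedged sketch of the idea in this record, the code below finds, for each desired output of a toy two-input system, a cost-minimizing input that achieves it, then spline-interpolates those optima to form a continuous inverse function; the system, cost and target range are illustrative assumptions, not the paper's benchmark problems.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.optimize import minimize

        output = lambda u: u[0] + 2.0 * u[1]              # toy multiple-input, single-output system
        cost = lambda u: u[0] ** 2 + 3.0 * u[1] ** 2      # toy effort to minimize

        targets = np.linspace(0.5, 4.0, 8)                # desired outputs to cover
        optima = []
        for y in targets:
            res = minimize(cost, x0=[y / 2.0, y / 4.0],
                           constraints={"type": "eq", "fun": lambda u, y=y: output(u) - y})
            optima.append(res.x)

        optima = np.array(optima)
        inverse = [CubicSpline(targets, optima[:, j]) for j in range(optima.shape[1])]
        print("optimal input for desired output 2.3:", [float(f(2.3)) for f in inverse])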

  18. 14 CFR 294.22 - Notification to the Department of change in operations or identifying information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Notification to the Department of change in operations or identifying information. 294.22 Section 294.22 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS CANADIAN CHARTER AIR TAXI OPERATORS...

  19. 14 CFR 294.22 - Notification to the Department of change in operations or identifying information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Notification to the Department of change in operations or identifying information. 294.22 Section 294.22 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS CANADIAN CHARTER AIR TAXI OPERATORS...

  20. Integrated Urban Flood Analysis considering Optimal Operation of Flood Control Facilities in Urban Drainage Networks

    NASA Astrophysics Data System (ADS)

    Moon, Y. I.; Kim, M. S.; Choi, J. H.; Yuk, G. M.

    2017-12-01

    Heavy rainfall has become a major cause of urban flooding due to climate change and urbanization. To prevent property damage and casualties, a system that can forecast and warn of urban flooding must be developed. Urban drainage facilities reduce flood damage most effectively when operation begins in smaller rainfall events rather than only in extreme ones. Thus, the purpose of this study is to implement: A) a flood forecasting system using runoff analysis based on short-term rainfall forecasts; and B) a flood warning system that operates based on data from pump stations and rainwater storage in urban basins. The analysis shows that operating urban drainage facilities with short-term radar rainfall forecasts is more effective in reducing urban flood damage than using only the inflow data of the facility. Keywords: Heavy Rainfall, Urban Flood, Short-term Rainfall Forecasting, Optimal operation of urban drainage facilities. Acknowledgments: This research was supported by a grant (17AWMP-B066744-05) from the Advanced Water Management Research Program (AWMP) funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

  1. Supporting the Maritime Information Dominance: Optimizing Tactical Network for Biometric Data Sharing in Maritime Interdiction Operations

    DTIC Science & Technology

    2015-03-01

    information dominance in the maritime domain by optimizing tactical mobile ad hoc network (MANET) systems for wireless sharing of biometric data in maritime interdiction operations (MIO). Current methods for sharing biometric data in MIO are unnecessarily slow and do not leverage wireless networks at the tactical edge to maximize information dominance. Field experiments allow students to test wireless MANETs at the tactical edge. Analysis is focused on determining optimal MANET design and implementation. It considers various implementations with

  2. Identifying the optimal spatially and temporally invariant root distribution for a semiarid environment

    NASA Astrophysics Data System (ADS)

    Sivandran, Gajan; Bras, Rafael L.

    2012-12-01

    In semiarid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Vegetation roots exert strong control over the partitioning of this moisture and, if a static root profile is assumed, predetermine the manner in which the partitioning is undertaken. A coupled, dynamic vegetation and hydrologic model, tRIBS + VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two spatially and temporally invariant rooting schemes: uniform (a one-parameter model) and logistic (a two-parameter model). The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semiarid Walnut Gulch Experimental Watershed (WGEW) in Arizona. A series of simulations explored the parameter space of both rooting schemes, and the optimal root distribution, defined as the root distribution with the maximum mean transpiration over a 100-yr period, was identified. This optimal root profile was determined for five generic soil textures and two plant-functional types (PFTs) to illustrate the role of soil texture in the partitioning of moisture at the land surface. The simulation results illustrate the strong control soil texture has on the partitioning of rainfall and consequently on the depth of the optimal rooting profile. High-conductivity soils resulted in the deepest optimal rooting profiles, with land-surface moisture fluxes dominated by transpiration. Toward the lower-conductivity end of the soil spectrum, the optimal rooting profile becomes shallower and evaporation gradually becomes the dominant flux from the land surface. This study offers a methodology through which local plant, soil, and climate can be
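    To make the two rooting schemes concrete, the sketch below contrasts a one-parameter uniform cumulative root profile with a generic two-parameter logistic profile; the functional forms and parameter values are illustrative and not necessarily the exact parameterization used in tRIBS + VEGGIE.

        import numpy as np

        def uniform_profile(z, z_max=1.0):
            """Cumulative root fraction above depth z: linear to a maximum rooting depth (1 parameter)."""
            return float(np.clip(z / z_max, 0.0, 1.0))

        def logistic_profile(z, z50=0.3, c=2.0):
            """Cumulative root fraction above depth z, with roughly half the roots above z50 (2 parameters)."""
            return 1.0 - 1.0 / (1.0 + (z / z50) ** c)

        for depth in (0.25, 0.5, 1.0):
            print(f"roots above {depth:4.2f} m: uniform={uniform_profile(depth):.2f}  "
                  f"logistic={logistic_profile(depth):.2f}")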

  3. Concept development and needs identification for intelligent network flow optimization (INFLO) : concept of operations.

    DOT National Transportation Integrated Search

    2012-06-01

    The purpose of this project is to develop the concept of operations for Intelligent Network Flow Optimization (INFLO), which is one collection (or bundle) of high-priority transformative applications identified by the United States Department of Transportation (USDOT) Mob...

  4. Optimal shutdown management

    NASA Astrophysics Data System (ADS)

    Bottasso, C. L.; Croce, A.; Riboldi, C. E. D.

    2014-06-01

    The paper presents a novel approach for the synthesis of the open-loop pitch profile during emergency shutdowns. The problem is of interest in the design of wind turbines, as such maneuvers often generate design-driving loads on some of the machine components. The pitch profile synthesis is formulated as a constrained optimal control problem, solved numerically using a direct single shooting approach. A cost function expressing a compromise between load reduction and rotor overspeed is minimized with respect to the unknown blade pitch profile. Constraints may include a requirement that the reduced loads not exceed the next dominating loads, a not-to-be-exceeded maximum rotor speed, and a maximum achievable blade pitch rate. The cost function and constraints are computed over a possibly large number of operating conditions, defined so as to cover as well as possible the operating situations encountered in the lifetime of the machine. All such conditions are simulated using a high-fidelity aeroservoelastic model of the wind turbine, ensuring the accuracy of the evaluation of all relevant parameters. The paper demonstrates the capabilities of the proposed formulation by optimizing the pitch profile of a multi-MW wind turbine. Results show that the procedure can reliably identify optimal pitch profiles that reduce design-driving loads, in a fully automated way.
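    A heavily simplified, hedged sketch of the single-shooting idea follows: a piecewise-constant pitch-rate profile is the decision variable, a toy one-state rotor model is integrated forward, and a weighted overspeed/load cost is minimized under a pitch-rate bound. The model constants and the load proxy are invented and stand in for the paper's aeroservoelastic model.

        import numpy as np
        from scipy.optimize import minimize

        DT, N = 0.2, 25          # interval length [s] and number of intervals
        J_ROT = 1.0e7            # rotor inertia [kg m^2], invented
        OMEGA0 = 1.26            # rotor speed at shutdown onset [rad/s], invented

        def max_overspeed(pitch_rates):
            """Integrate a toy one-state rotor model for a piecewise-constant pitch-rate profile [deg/s]."""
            omega, pitch, peak = OMEGA0, 0.0, OMEGA0
            for rate in pitch_rates:
                pitch += rate * DT
                torque = 4.0e6 * max(0.0, 1.0 - pitch / 20.0)   # toy aerodynamic torque [N m]
                omega += torque / J_ROT * DT
                peak = max(peak, omega)
            return peak / OMEGA0 - 1.0

        def cost(pitch_rates):
            load_proxy = np.max(pitch_rates) / 8.0              # fast pitching stands in for loads (toy proxy)
            return 10.0 * max_overspeed(pitch_rates) + load_proxy

        res = minimize(cost, x0=np.full(N, 2.0), bounds=[(0.0, 8.0)] * N)   # pitch rate capped at 8 deg/s
        print("peak overspeed with optimized profile:", round(100.0 * max_overspeed(res.x), 1), "%")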

  5. Enhanced genetic algorithm optimization model for a single reservoir operation based on hydropower generation: case study of Mosul reservoir, northern Iraq.

    PubMed

    Al-Aqeeli, Yousif H; Lee, T S; Abd Aziz, S

    2016-01-01

    Achieving optimal hydropower generation from the operation of water reservoirs is a complex problem. The purpose of this study was to formulate and improve a genetic algorithm optimization model (GAOM) in order to maximize annual hydropower generation for a single reservoir. For this purpose, two simulation algorithms were drafted and applied independently within the GAOM over 20 scenarios (years) of operation of the Mosul reservoir, northern Iraq. The first algorithm was based on the traditional simulation of reservoir operation, whilst the second algorithm (Salg) enhanced the GAOM by changing the population values of the GA through a new simulation process of reservoir operation. The performance of these two algorithms was evaluated by comparing their optimal values of annual hydropower generation over the 20 operating scenarios. The GAOM achieved an increase in hydropower generation in 17 scenarios using these two algorithms, with the Salg being superior in all scenarios. All of this was done prior to adding evaporation (Ev) and precipitation (Pr) to the water balance equation. Next, the GAOM using the Salg was applied taking the volumes of these two parameters into consideration. In this case, the optimal values obtained from the GAOM were compared, first with their counterparts found using the same algorithm without Ev and Pr, and second with the observed values. The first comparison showed that the optimal values obtained in this case decreased in all scenarios, whilst remaining good compared with the observed values in the second comparison. The results proved the effectiveness of the Salg in increasing hydropower generation through the enhanced GAOM approach. In addition, the results indicated the importance of taking Ev and Pr into account in the modelling of reservoir operation.
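    The record above stresses including evaporation and precipitation in the water balance. The sketch below shows the kind of water-balance and energy evaluation a GA individual (a candidate monthly release schedule) would receive; the storage bounds, lake area, head and efficiency are placeholder magnitudes, not the Mosul reservoir data.

        RHO_G = 1000.0 * 9.81      # water density times gravity [N/m^3]

        def annual_energy_gwh(inflows, releases, evap_mm, precip_mm,
                              s0=6.0e9, s_min=3.0e9, s_max=1.1e10,
                              area_m2=3.5e8, head_m=60.0, eff=0.85):
            """Annual energy [GWh] for one candidate monthly release schedule (a GA individual)."""
            storage, energy_j = s0, 0.0
            for q_in, q_out, ev, pr in zip(inflows, releases, evap_mm, precip_mm):
                # water balance including evaporation and precipitation over the lake surface
                storage += q_in - q_out + (pr - ev) / 1000.0 * area_m2
                storage = min(max(storage, s_min), s_max)    # crude spill / dead-storage clamp
                energy_j += eff * RHO_G * head_m * q_out     # E = eta * rho * g * H * V
            return energy_j / 3.6e12

        # one hypothetical year of monthly volumes [m^3] and depths [mm]
        inflows, releases = [1.2e9] * 12, [1.0e9] * 12
        evap_mm, precip_mm = [150.0] * 12, [30.0] * 12
        print("annual generation ~", round(annual_energy_gwh(inflows, releases, evap_mm, precip_mm)), "GWh")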

  6. Multi-objective Operation Chart Optimization for Aquatic Species Habitat Conservation of Cascaded Hydropower System on Yuan River, Southwestern China

    NASA Astrophysics Data System (ADS)

    Wen, X.; Lei, X.; Fang, G.; Huang, X.

    2017-12-01

    Extensive cascading hydropower exploitation in southwestern China has been the subject of debate and conflict in recent years. Introducing limited ecological curves, a novel approach for deriving a hydropower-ecological joint operation chart for a cascaded hydropower system is proposed, aiming to optimize the overall hydropower and ecological benefits and to alleviate ecological deterioration in specific flood/dry conditions. A physical habitat simulation model is first used to simulate the relationship between streamflow and the physical habitat of the target fish species and to determine the optimal ecological flow range of a representative reach. An ecological-hydropower joint optimization model is then established to produce the multi-objective operation chart of the cascaded hydropower system. Finally, the limited ecological guiding curves are generated and added to the operation chart. The JS-MDS cascaded hydropower system on the Yuan River in southwestern China is employed as the research area. As a result, the proposed guiding curves could increase hydropower production by 1.72% and 5.99% and improve the ecological conservation degree by 0.27% and 1.13% for the JS and MDS Reservoirs, respectively. Meanwhile, the ecological deterioration rate also decreases, from 6.11% to 1.11% for the JS Reservoir and from 26.67% to 3.89% for the MDS Reservoir.

  7. Optimizing operational water management with soil moisture data from Sentinel-1 satellites

    NASA Astrophysics Data System (ADS)

    Pezij, Michiel; Augustijn, Denie; Hendriks, Dimmie; Hulscher, Suzanne

    2016-04-01

    In the Netherlands, regional water authorities are responsible for management and maintenance of regional water bodies. Due to socio-economic developments (e.g. agricultural intensification and on-going urbanisation) and an increase in climate variability, the pressure on these water bodies is growing. Optimization of water availability by taking into account the needs of different users, both in wet and dry periods, is crucial for sustainable developments. To support timely and well-directed operational water management, accurate information on the current state of the system as well as reliable models to evaluate water management optimization measures are essential. Previous studies showed that the use of remote sensing data (for example soil moisture data) in water management offers many opportunities (e.g. Wanders et al. (2014)). However, these data are not yet used in operational applications at a large scale. The Sentinel-1 satellites programme offers high spatiotemporal resolution soil moisture data (1 image per 6 days with a spatial resolution of 10 by 10 m) that are freely available. In this study, these data will be used to improve the Netherlands Hydrological Instrument (NHI). The NHI consists of coupled models for the unsaturated zone (MetaSWAP), groundwater (iMODFLOW) and surface water (Mozart and DM). The NHI is used for scenario analyses and operational water management in the Netherlands (De Lange et al., 2014). Due to the lack of soil moisture data, the unsaturated zone model is not yet thoroughly validated and its output is not used by regional water authorities for decision-making. Therefore, the newly acquired remotely sensed soil moisture data will be used to improve the skill of the MetaSWAP-model and the NHI as whole. The research will focus among other things on the calibration of soil parameters by comparing model output (MetaSWAP) with the remotely sensed soil moisture data. Eventually, we want to apply data-assimilation to improve

  8. 49 CFR 192.905 - How does an operator identify a high consequence area?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.905 How does an operator identify a high consequence area? (a...

  9. Optimization of steam generators of NPP with WWER in operation with variable load

    NASA Astrophysics Data System (ADS)

    Parchevskii, V. M.; Shchederkina, T. E.; Gur'yanova, V. V.

    2017-11-01

    The report addresses the issue of the optimal water level in the horizontal steam generators of NPPs with WWER reactors. On the one hand, the level needs to be kept at the lower limit of the allowable range: with gravity separation, the steam then has the lowest humidity and the turbine operates with higher efficiency. On the other hand, the higher the level, the greater the supply of water in the steam generator and therefore the higher the safety level of the unit, because in accidents involving loss of cooling of the reactor core the water in the steam generators can be used for cooling. To quantitatively compare the damage from a higher level against the benefit of improved safety, the cost of one cubic meter of water in the steam generators was assessed and an objective function for optimal level control was formulated, using two-dimensional separation characteristics of the steam generators. It is demonstrated that the safety consideration significantly shifts the optimal levels toward higher values, and that this bias is greater the lower the unit load.

  10. Evidence-Based Recommendations for Optimizing Light in Day-to-Day Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    Whitmire, Alexandra; Leveton, Lauren; Barger, Laura; Clark, Toni; Bollweg, Laura; Ohnesorge, Kristine; Brainard, George

    2015-01-01

    NASA Behavioral Health and Performance Element (BHP) personnel have previously reported on efforts to transition evidence-based recommendations for a flexible lighting system on the International Space Station (ISS). Based on these recommendations, beginning in 2016 the ISS will replace the current fluorescent-based lights with an LED-based system to optimize visual performance, facilitate circadian alignment, promote sleep, and hasten schedule shifting. Additional efforts related to lighting countermeasures in spaceflight operations have also been underway. As an example, a recent BHP research study led by investigators at Harvard Medical School and Brigham and Women's Hospital evaluated the acceptability, feasibility, and effectiveness of blue-enriched light exposure during exercise breaks for flight controllers working the overnight shift in the Mission Control Center (MCC) at NASA Johnson Space Center. This effort, along with published laboratory studies that have demonstrated the effectiveness of appropriately timed light for promoting alertness, served as an impetus for new light options and educational protocols for flight controllers. In addition, a separate set of guidelines related to the light emitted from electronic devices was provided to the Astronaut Office this past year. These guidelines were based on an assessment led by NASA's Lighting Environment Test Facility that included measuring the spectral power distribution, irradiance, and radiance of light emitted from ISS-grade laptops and iPads, as well as Android devices. Evaluations were conducted with and without the use of off-the-shelf screen filters as well as a software application that touts minimizing the short-wavelength portion of the visible light spectrum. This presentation will focus on the transition-to-operations process related to lighting countermeasures in the MCC, as well as the evidence to support recommendations for optimal use of laptops, iPads, and Android devices during all

  11. Optimal CINAHL search strategies for identifying therapy studies and review articles.

    PubMed

    Wong, Sharon S L; Wilczynski, Nancy L; Haynes, R Brian

    2006-01-01

    To design optimal search strategies for locating sound therapy studies and review articles in CINAHL in the year 2000. An analytic survey was conducted, comparing hand searches of 75 journals with retrievals from CINAHL for 5,020 candidate search terms and 17,900 term combinations for therapy and 5,977 combinations for review articles. All articles were rated with purpose and quality indicators. Candidate search strategies were run in CINAHL, and the retrievals were compared with the results of the hand searches. The proposed search strategies were treated as "diagnostic tests" for sound studies, and the manual review of the literature was treated as the "gold standard." Operating characteristics of the search strategies were calculated. Of the 1,383 articles about treatment, 506 (36.6%) met basic criteria for scientific merit, and 127 (17.9%) of the 711 articles classified as reviews met the criteria for systematic reviews. For locating sound treatment studies, a three-term strategy maximized sensitivity at 99.4% but with compromised specificity at 58.3%, and a two-term strategy maximized specificity at 98.5% but with compromised sensitivity at 52.0%. For detecting systematic reviews, a three-term strategy maximized sensitivity at 91.3% while keeping specificity high at 95.4%, and a single-term strategy maximized specificity at 99.6% but with compromised sensitivity at 42.5%. Three-term search strategies balancing sensitivity and specificity achieved both values above 91% for detecting sound treatment studies and above 76% for detecting systematic reviews. Search strategies combining indexing terms and text words can achieve high sensitivity and specificity for retrieving sound treatment studies and review articles in CINAHL.
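    The operating characteristics quoted above are computed by treating the hand search as the gold standard and each strategy as a diagnostic test. A minimal sketch, with invented counts rather than the CINAHL data:

        def operating_characteristics(tp, fp, fn, tn):
            """Treat a search strategy as a diagnostic test against the hand-search gold standard."""
            sensitivity = tp / (tp + fn)    # sound articles retrieved / all sound articles
            specificity = tn / (tn + fp)    # non-sound articles excluded / all non-sound articles
            precision = tp / (tp + fp)      # retrieved articles that are sound
            return sensitivity, specificity, precision

        sens, spec, prec = operating_characteristics(tp=480, fp=310, fn=26, tn=570)   # invented counts
        print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  precision={prec:.1%}")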

  12. Display analysis with the optimal control model of the human operator. [pilot-vehicle display interface and information processing

    NASA Technical Reports Server (NTRS)

    Baron, S.; Levison, W. H.

    1977-01-01

    Application of the optimal control model of the human operator to problems in display analysis is discussed. Those aspects of the model pertaining to the operator-display interface and to operator information processing are reviewed and discussed. The techniques are then applied to the analysis of advanced display/control systems for a Terminal Configured Vehicle. Model results are compared with those obtained in a large, fixed-base simulation.

  13. Optimal Post-Operative Immobilisation for Supracondylar Humeral Fractures.

    PubMed

    Azzolin, Lucas; Angelliaume, Audrey; Harper, Luke; Lalioui, Abdelfettah; Delgove, Anaïs; Lefèvre, Yan

    2018-05-25

    Supracondylar humeral fractures (SCHFs) are very common in paediatric patients. In France, percutaneous fixation with two lateral-entry pins is widely used after successful closed reduction. Post-operative immobilisation is typically with a long arm cast combined with a tubular-bandage sling that immobilises the shoulder and holds the arm in adduction and internal rotation to prevent external rotation of the shoulder, which might cause secondary displacement. The objective of this study was to compare this standard immobilisation technique to a posterior plaster splint with a simple sling. Secondary displacement is not more common with a posterior plaster splint and sling than with a long arm cast. 100 patients with extension Gartland type III SCHFs managed by closed reduction and percutaneous fixation with two lateral-entry pins between December 2011 and December 2015 were assessed retrospectively. Post-operative immobilisation was with a posterior plaster splint and a simple sling worn for 4 weeks. Radiographs were obtained on days 1, 45, and 90. Secondary displacement occurred in 8% of patients. No patient required revision surgery. The secondary displacement rate was comparable to earlier reports. Of the 8 secondary displacements, 5 were ascribable to technical errors. The remaining 3 were not caused by rotation of the arm and would probably not have been prevented by using the tubular-bandage sling. A posterior plaster splint combined with a simple sling is a simple and effective immobilisation method for SCHFs provided internal fixation is technically optimal. IV, retrospective observational study. Copyright © 2018. Published by Elsevier Masson SAS.

  14. Distributed Optimization of Sustainable Power Dispatch and Flexible Consumer Loads for Resilient Power Grid Operations

    NASA Astrophysics Data System (ADS)

    Srikantha, Pirathayini

    Today's electric grid is rapidly evolving to provision for heterogeneous system components (e.g. intermittent generation, electric vehicles, storage devices, etc.) while catering to diverse consumer power demand patterns. In order to accommodate this changing landscape, the widespread integration of cyber communication with physical components can be witnessed in all tenets of the modern power grid. This ubiquitous connectivity provides an elevated level of awareness and decision-making ability to system operators. Moreover, devices that were typically passive in the traditional grid are now `smarter' as these can respond to remote signals, learn about local conditions and even make their own actuation decisions if necessary. These advantages can be leveraged to reap unprecedented long-term benefits that include sustainable, efficient and economical power grid operations. Furthermore, challenges introduced by emerging trends in the grid such as high penetration of distributed energy sources, rising power demands, deregulations and cyber-security concerns due to vulnerabilities in standard communication protocols can be overcome by tapping onto the active nature of modern power grid components. In this thesis, distributed constructs in optimization and game theory are utilized to design the seamless real-time integration of a large number of heterogeneous power components such as distributed energy sources with highly fluctuating generation capacities and flexible power consumers with varying demand patterns to achieve optimal operations across multiple levels of hierarchy in the power grid. Specifically, advanced data acquisition, cloud analytics (such as prediction), control and storage systems are leveraged to promote sustainable and economical grid operations while ensuring that physical network, generation and consumer comfort requirements are met. Moreover, privacy and security considerations are incorporated into the core of the proposed designs and these

  15. Using genetic algorithms to determine near-optimal pricing, investment and operating strategies in the electric power industry

    NASA Astrophysics Data System (ADS)

    Wu, Dongjun

    Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems are studied. Genetic algorithms (GA, methods based on the idea of natural evolution) are evaluated as a primary means of solving complicated network problems, with respect to pricing as well as investment and other operating decisions. New constraint-handling techniques for GAs have been studied and tested, and their application to practical non-linear optimization problems has been examined on several complex network design problems with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions for small problems found by the proposed GA approach can only be verified by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for solving any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising for solving difficult problems that are intractable by traditional analytic methods.

  16. Optimizing a reconfigurable material via evolutionary computation

    NASA Astrophysics Data System (ADS)

    Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.

    2015-08-01

    Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6 ×6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 1010 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.

  17. Development and testing of a photometric method to identify non-operating solar hot water systems in field settings.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Hongbo; Vorobieff, Peter V.; Menicucci, David

    2012-06-01

    This report presents the results of experimental tests of a concept for using infrared (IR) photos to identify non-operational systems based on their glazing temperatures; operating systems have lower glazing temperatures than those in stagnation. In recent years thousands of new solar hot water (SHW) systems have been installed in some utility districts. As these numbers increase, concern is growing about the systems' dependability, because installation rebates are often based on the assumption that all of the SHW systems will perform flawlessly for a 20-year period. If SHW systems routinely fail prematurely, then the utilities will have overpaid for grid-energy reduction performance that is unrealized. Moreover, utilities are responsible for replacing the energy for loads that failed SHW systems were supplying. Thus, utilities are seeking data to quantify the reliability of SHW systems. The work described herein is intended to help meet this need. The details of the experiment are presented, including a description of the SHW collectors that were examined, the testbed that was used to control the system and record data, the IR camera that was employed, and the conditions in which testing was completed. The details of the associated analysis are presented, including direct examination of the video records of operational and stagnant collectors, as well as the development of a model to predict glazing temperatures and an analysis of the temporal intermittency of the images, both of which are critical to properly adjusting the IR camera for optimal performance. Many IR images and a video are presented to show the contrast between operating and stagnant collectors. The major conclusion is that the technique has the potential to be applied using an aircraft fitted with an IR camera flown over an area with installed SHW systems to record the images; subsequent analysis of the images can determine the operational condition of the fielded collectors. Specific

  18. Multi-objective evolutionary optimization for the joint operation of reservoirs of water supply under water-food-energy nexus management

    NASA Astrophysics Data System (ADS)

    Uen, T. S.; Tsai, W. P.; Chang, F. J.; Huang, A.

    2016-12-01

    In recent years, urbanization has had a great effect on population growth and on the management of the water, food and energy nexus (WFE nexus) in Taiwan. Resource shortages of WFE have become a long-term and thorny issue due to the complex interactions of the WFE nexus. In consideration of rapid socio-economic development, it is imperative to explore an efficient and practical approach for WFE resources management. This study aims to search for optimal solutions to the WFE nexus and construct a stable water supply system for multiple stakeholders. The Shimen Reservoir and Feitsui Reservoir in northern Taiwan are chosen for the joint operation of the two reservoirs for water supply. This study intends to achieve water resource allocation from the two reservoirs subject to different operating rules and restrictions on resource allocation. The multiple objectives of the joint operation aim at maximizing hydro-power synergistic gains while minimizing water supply deficiency as well as food shortages. We propose to build a multi-objective evolutionary optimization model for analyzing the hydro-power synergistic gains and suggesting the most favorable solutions in terms of trade-offs between WFE. First, this study collected data from the two reservoirs and the Taiwan Power Company. Next, we built a WFE nexus model based on system dynamics. Finally, this study optimized the joint operation of the two reservoirs and calculated the synergy of hydro-power generation. The proposed methodology can tackle complex joint reservoir operation problems. The results can suggest a reliable policy for joint reservoir operation for creating a green economic city under the lowest risks of water supply.

  19. IEP (Individualized Educational Program) Co-operation between Optimal Support of Students with Special Needs

    NASA Astrophysics Data System (ADS)

    Ogoshi, Yasuhiro; Nakai, Akio; Ogoshi, Sakiko; Mitsuhashi, Yoshinori; Araki, Chikahiro

    A key aspect of the optimal support of students with special needs is co-ordination and co-operation between school, home and specialized agencies. Communication between these entities is of prime importance and can be facilitated through the use of a support system implementing ICF guidelines as outlined. This communication system can be considered to be a preventative rather than allopathic support.

  20. Knowledge Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration

    DTIC Science & Technology

    2015-06-01

    Knowledge Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration, a thesis by Paul C. Hudson and Jeffrey A. Rzasa, June 2015. The surviving fragments of this record mention the Hadoop Distributed File System (HDFS), Accumulo-based knowledge stores using OWL/RDF, cloud-based integration, and DCGS-N naval tactical clouds.

  1. Optimal scan timing and intravenous route for contrast-enhanced computed tomography in patients after Fontan operation.

    PubMed

    Park, Eun-Ah; Lee, Whal; Chung, Se-Young; Yin, Yong Hu; Chung, Jin Wook; Park, Jae Hyung

    2010-01-01

    To determine the optimal scan timing and adequate intravenous route for patients having undergone the Fontan operation. A total of 88 computed tomographic images in 49 consecutive patients who underwent the Fontan operation were retrospectively evaluated and divided into 7 groups: group 1, bolus-tracking method with either intravenous route (n = 20); group 2, 1-minute-delay scan with single antecubital route (n = 36); group 3, 1-minute-delay scan with both antecubital routes (n = 2); group 4, 1-minute-delay scan with foot vein route (n = 3); group 5, 1-minute-delay scan with simultaneous infusion via both antecubital and foot vein routes (n = 2); group 6, 3-minute-delay scan with single antecubital route (n = 22); and group 7, 3-minute-delay scan with foot vein route (n = 3). The presence of beam-hardening artifact, uniform enhancement, and optimal enhancement was evaluated at the right pulmonary artery (RPA), left pulmonary artery (LPA), and Fontan tract. Optimal enhancement was determined when evaluation of thrombus was possible. Standard deviation was measured at the RPA, LPA, and Fontan tract. Beam-hardening artifacts of the RPA, LPA, and Fontan tract were frequently present in groups 1, 4, and 5. The success rate of uniform and optimal enhancement was highest (100%) in groups 6 and 7, followed by group 2 (75%). An SD of less than 30 Hounsfield units for the pulmonary artery and Fontan tract was found in groups 3, 6, and 7. The optimal enhancement of the pulmonary arteries and Fontan tract can be achieved by a 3-minute-delay scan irrespective of the intravenous route location.

  2. Design optimization of MR-compatible rotating anode x-ray tubes for stable operation

    PubMed Central

    Shin, Mihye; Lillaney, Prasheel; Hinshaw, Waldo; Fahrig, Rebecca

    2013-01-01

    Purpose: Hybrid x-ray/MR systems can enhance the diagnosis and treatment of endovascular, cardiac, and neurologic disorders by using the complementary advantages of both modalities for image guidance during interventional procedures. Conventional rotating anode x-ray tubes fail near an MR imaging system, since MR fringe fields create eddy currents in the metal rotor which cause a reduction in the rotation speed of the x-ray tube motor. A new x-ray tube motor prototype has been designed and built to be operated close to a magnet. To ensure the stability and safety of the motor operation, dynamic characteristics must be analyzed to identify possible modes of mechanical failure. In this study a 3D finite element method (FEM) model was developed in order to explore possible modifications, and to optimize the motor design. The FEM provides a valuable tool that permits testing and evaluation using numerical simulation instead of building multiple prototypes. Methods: Two experimental approaches were used to measure resonance characteristics: the first obtained the angular speed curves of the x-ray tube motor employing an angle encoder; the second measured the power spectrum using a spectrum analyzer, in which the large amplitude of peaks indicates large vibrations. An estimate of the bearing stiffness is required to generate an accurate FEM model of motor operation. This stiffness depends on both the bearing geometry and adjacent structures (e.g., the number of balls, clearances, preload, etc.) in an assembly, and is therefore unknown. This parameter was set by matching the FEM results to measurements carried out with the anode attached to the motor, and verified by comparing FEM predictions and measurements with the anode removed. The validated FEM model was then used to sweep through design parameters [bearing stiffness (1×10^5–5×10^7 N/m), shaft diameter (0.372–0.625 in.), rotor diameter (2.4–2.9 in.), and total length of motor (5.66–7.36 in.)] to increase the

  3. Clinical prediction model to identify vulnerable patients in ambulatory surgery: towards optimal medical decision-making.

    PubMed

    Mijderwijk, Herjan; Stolker, Robert Jan; Duivenvoorden, Hugo J; Klimek, Markus; Steyerberg, Ewout W

    2016-09-01

    Ambulatory surgery patients are at risk of adverse psychological outcomes such as anxiety, aggression, fatigue, and depression. We developed and validated a clinical prediction model to identify patients who were vulnerable to these psychological outcome parameters. We prospectively assessed 383 mixed ambulatory surgery patients for psychological vulnerability, defined as the presence of anxiety (state/trait), aggression (state/trait), fatigue, and depression seven days after surgery. Three psychological vulnerability categories were considered, i.e., none, one, or multiple poor scores, defined as a score exceeding one standard deviation above the mean for each single outcome according to normative data. The following determinants were assessed preoperatively: sociodemographic (age, sex, level of education, employment status, marital status, having children, religion, nationality), medical (heart rate and body mass index), and psychological variables (self-esteem and self-efficacy), in addition to anxiety, aggression, fatigue, and depression. A prediction model was constructed using ordinal polytomous logistic regression analysis, and bootstrapping was applied for internal validation. The ordinal c-index (ORC) quantified the discriminative ability of the model, in addition to measures for overall model performance (Nagelkerke's R²). In this population, 137 (36%) patients were identified as being psychologically vulnerable after surgery for at least one of the psychological outcomes. The most parsimonious and optimal prediction model combined sociodemographic variables (level of education, having children, and nationality) with psychological variables (trait anxiety, state/trait aggression, fatigue, and depression). Model performance was promising: R² = 30% and ORC = 0.76 after correction for optimism. This study identified a substantial group of vulnerable patients in ambulatory surgery. The proposed clinical prediction model could allow healthcare

  4. Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy

    NASA Astrophysics Data System (ADS)

    Magee, T. M.; Clement, M. A.; Zagona, E. A.

    2012-12-01

    Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example, target environmental flow levels may not be satisfied if doing so would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before power generation is optimized. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, but also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent in increased wind penetration levels requires an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation is dependent on a number of factors that can vary by location. Wind factors include wind penetration level
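    As a toy, hedged illustration of the pre-emptive (lexicographic) goal-programming idea, using scipy rather than RiverWare, the sketch below first minimizes an environmental-flow shortfall and then maximizes turbine release without degrading the higher-priority result; all numbers are illustrative.

        from scipy.optimize import linprog

        # decision variables x = [turbine_release, env_shortfall], one period, hypothetical units
        inflow, env_target, turbine_cap = 80.0, 30.0, 100.0
        bounds = [(0.0, turbine_cap), (0.0, env_target)]

        # release + (env_target - shortfall) <= inflow  ->  release - shortfall <= inflow - env_target
        a_ub, b_ub = [[1.0, -1.0]], [inflow - env_target]

        # Priority 1: minimize the environmental-flow shortfall
        p1 = linprog(c=[0.0, 1.0], A_ub=a_ub, b_ub=b_ub, bounds=bounds)
        best_shortfall = p1.fun

        # Priority 2: maximize turbine release without degrading the priority-1 achievement
        p2 = linprog(c=[-1.0, 0.0], A_ub=a_ub + [[0.0, 1.0]], b_ub=b_ub + [best_shortfall], bounds=bounds)
        print("env shortfall:", round(best_shortfall, 2), " turbine release:", round(p2.x[0], 2))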

  5. The optimal operation of cooling tower systems with variable-frequency control

    NASA Astrophysics Data System (ADS)

    Cao, Yong; Huang, Liqing; Cui, Zhiguo; Liu, Jing

    2018-02-01

    This study investigates the energy performance of chiller and cooling tower systems integrated with variable-frequency control of the cooling tower fans and condenser water pumps. For an example chiller system serving an office building, chiller and cooling tower models were developed to assess how different variable-frequency control methods for the cooling tower fans and condenser water pumps influence the trade-off between chiller power, pump power and fan power under various operating conditions. A matching relationship between cooling tower fan frequency and condenser water pump frequency at the optimal energy consumption of the system is introduced to achieve optimum system performance.

  6. Study on optimization of the short-term operation of cascade hydropower stations by considering output error

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang

    2017-06-01

    The study of deterministic optimal reservoir operation can improve the utilization of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output errors and hinder the implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecast inflow was treated as a variable in a short-term reservoir optimal operation model intended to reduce operational risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to express the maximum possible loss of power generation schedules, and an extreme value theory-genetic algorithm (EVT-GA) was then proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates for schedules can be derived from the model, presenting more flexible options for decision makers; the highest assurance rate can reach 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
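    A minimal sketch of the Value-at-Risk idea applied to a generation schedule, assuming a made-up error distribution rather than the Yalong River calibration:

        import numpy as np

        rng = np.random.default_rng(1)
        scheduled_mwh = 5000.0
        output_error = rng.normal(loc=0.0, scale=300.0, size=10_000)   # MWh, assumed forecast-driven error
        loss = np.maximum(0.0, -output_error)                          # only under-generation counts as loss

        var_99 = np.quantile(loss, 0.99)
        print(f"99% VaR: up to {var_99:.0f} MWh below the scheduled {scheduled_mwh:.0f} MWh")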

  7. Integrated Arrival and Departure Schedule Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2014-01-01

    In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential to improve operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn about and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. Pareto fronts containing sets of solutions were identified, showing the trade-off between delays and controller intervention count. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on the Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
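    To illustrate how compromise solutions are read off a Pareto front, the sketch below extracts the non-dominated points from a set of Monte-Carlo-averaged (delay, intervention-count) cost pairs; the cost values are random placeholders, not the scheduling results of the study.

        import numpy as np

        rng = np.random.default_rng(0)
        costs = rng.uniform(size=(50, 2))   # columns: expected delay, expected intervention count (placeholders)

        def pareto_front(points):
            """Indices of points not dominated in both objectives (both minimized)."""
            front = []
            for i, p in enumerate(points):
                dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
                if not dominated:
                    front.append(i)
            return front

        print(len(pareto_front(costs)), "non-dominated schedules out of", len(costs))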

  8. Multiobjective optimization of urban water resources: Moving toward more practical solutions

    NASA Astrophysics Data System (ADS)

    Mortazavi, Mohammad; Kuczera, George; Cui, Lijie

    2012-03-01

    The issue of drought security is of paramount importance for cities located in regions subject to severe prolonged droughts. The prospect of "running out of water" for an extended period would threaten the very existence of the city. Managing drought security for an urban water supply is a complex task involving trade-offs between conflicting objectives. In this paper a multiobjective optimization approach for urban water resource planning and operation is developed to overcome practically significant shortcomings identified in previous work. A case study based on the headworks system for Sydney (Australia) demonstrates the approach and highlights the potentially serious shortcomings of Pareto optimal solutions conditioned on short climate records, incomplete decision spaces, and constraints to which system response is sensitive. Where high levels of drought security are required, optimal solutions conditioned on short climate records are flawed. Our approach addresses drought security explicitly by identifying approximate optimal solutions in which the system does not "run dry" in severe droughts with expected return periods up to a nominated (typically large) value. In addition, it is shown that failure to optimize the full mix of interacting operational and infrastructure decisions and to explore the trade-offs associated with sensitive constraints can lead to significantly more costly solutions.

  9. Performance of Optimization Heuristics for the Operational Planning of Multi-energy Storage Systems

    NASA Astrophysics Data System (ADS)

    Haas, J.; Schradi, J.; Nowak, W.

    2016-12-01

    In the transition to low-carbon energy sources, energy storage systems (ESS) will play an increasingly important role. Particularly in the context of solar power challenges (variability, uncertainty), ESS can provide valuable services: energy shifting, ramping, robustness against forecast errors, frequency support, etc. However, these qualities are rarely modelled in the operational planning of power systems because of the involved computational burden, especially when multiple ESS technologies are involved. This work assesses two optimization heuristics for speeding up the optimal operation problem. It compares their accuracy (in terms of costs) and speed against a reference solution. The first heuristic (H1) is based on a merit order. Here, the ESS are sorted from lower to higher operational costs (including cycling costs). For each time step, the cheapest available ESS is used first, followed by the second one and so on, until matching the net load (demand minus available renewable generation). The second heuristic (H2) uses the Fourier transform to detect the main frequencies that compose the net load. A specific ESS is assigned to each frequency range, aiming to smoothen the net load. Finally, the reference solution is obtained with a mixed integer linear program (MILP). H1, H2 and MILP are subject to technical constraints (energy/power balance, ramping rates, on/off states...). Costs due to operation, replacement (cycling) and unserved energy are considered. Four typical days of a system with a high share of solar energy were used in several test cases, varying the resolution from one second to fifteen minutes. H1 and H2 achieve accuracies of about 90% and 95% on average, and speed-ups of two to three and one to two orders of magnitude, respectively. The use of the heuristics looks promising in the context of planning the expansion of power systems, especially when their loss of accuracy is outweighed by solar or wind forecast errors.
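    A minimal sketch of the merit-order heuristic (H1) described above, with invented storage units and net-load values:

        def merit_order_dispatch(net_load, units):
            """units: (name, power limit [MW], operating+cycling cost [$/MWh]); cheapest dispatched first."""
            schedule = []
            for load in net_load:                        # net load = demand minus renewable generation
                remaining, step = load, {}
                for name, p_max, _cost in sorted(units, key=lambda u: u[2]):
                    p = min(max(remaining, 0.0), p_max)
                    step[name] = p
                    remaining -= p
                step["unserved"] = max(remaining, 0.0)
                schedule.append(step)
            return schedule

        units = [("battery", 20.0, 40.0), ("pumped_hydro", 50.0, 25.0), ("flywheel", 5.0, 60.0)]  # invented
        for step in merit_order_dispatch([10.0, 60.0, 80.0], units):
            print(step)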

  10. Stability assessment and operating parameter optimization on experimental results in very small plasma focus, using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Jafari, Hossein; Habibi, Morteza

    2018-04-01

    Given the importance of stability in small-scale plasma focus devices for producing repeatable and strong pinching, a sensitivity analysis approach has been used to optimize the design parameters of a very low energy device (84 nF, 48 nH, 8-9.5 kV, ∼2.7-3.7 J). To optimize the device's functional specification, four different coaxial electrode configurations have been studied, scanning an argon gas pressure range from 0.6 to 1.5 mbar and a charging voltage range from 8.3 to 9.3 kV. The strongest and most efficient pinching was observed for the tapered anode configuration, over an expanded operating pressure range of 0.6 to 1.5 mbar. The analysis showed that the pinch voltage was most sensitive at 0.88 ± 0.8 mbar argon gas pressure and 8.3-8.5 kV charging voltage, which were identified as the optimum operating parameters. From the viewpoint of stability assessment, the least variation in stable operation of the device was observed for a charging voltage range of 8.3 to 8.7 kV and an operating pressure range from 0.6 to 1.1 mbar.

  11. Characterizing and Optimizing Photocathode Laser Distributions for Ultra-low Emittance Electron Beam Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, F.; Bohler, D.; Ding, Y.

    2015-12-07

    The photocathode RF gun has been widely used to generate high-brightness electron beams for many different applications. We found that the drive laser distributions in such RF guns play important roles in minimizing the electron beam emittance. Characterizing the laser distributions with measurable parameters and optimizing beam emittance versus the laser distribution parameters in both the spatial and temporal directions are highly desired for high-brightness electron beam operation. In this paper, we report systematic measurements and simulations of the emittance dependence on the measurable parameters representing the spatial and temporal laser distributions at the photocathode RF gun systems of the Linac Coherent Light Source. The tolerable parameter ranges for photocathode drive laser distributions in both directions are presented for ultra-low emittance beam operations.

  12. Parametric design criteria of an updated thermoradiative cell operating at optimal states

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Peng, Wanli; Lin, Jian; Chen, Xiaohang; Chen, Jincan

    2017-11-01

    An updated mode of the thermoradiative cell (TRC) with sub-band gap and non-radiative losses is proposed, which can efficiently harvest moderate-temperature heat energy and convert a part of that heat into electricity. It is found that when the TRC is operated between a heat source at 800 K and the environment at 300 K, its maximum power output density and efficiency can attain 1490 W m-2 and 27.2%, respectively. Moreover, the effects of some key parameters including the band gap and voltage output on the performance of the TRC are discussed. The optimal working regions of the power density, efficiency, band gap, and voltage output are determined. The maximum efficiency and power output density of the TRC operated at different temperatures are calculated and compared with those of thermophotovoltaic cells (TPVCs) and thermionic energy converters (TECs). It is revealed that the maximum efficiency of the TRC operated in the moderate-temperature range is much higher than that of the TEC or the TPVC, and the maximum power output density of the TRC is larger than that of the TEC but smaller than that of the TPVC. In particular, the TRC is manufactured more easily than the near-field TPVC, which requires a nanoscale vacuum gap. The results obtained will be helpful for engineers to choose semiconductor materials, design and manufacture TRCs, and control operating conditions.

  13. XY vs X Mixer in Quantum Alternating Operator Ansatz for Optimization Problems with Constraints

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Rubin, Nicholas; Rieffel, Eleanor G.

    2018-01-01

    Quantum Approximate Optimization Algorithm, further generalized as Quantum Alternating Operator Ansatz (QAOA), is a family of algorithms for combinatorial optimization problems. It is a leading candidate to run on emerging universal quantum computers to gain insight into quantum heuristics. In constrained optimization, penalties are often introduced so that the ground state of the cost Hamiltonian encodes the solution (a standard practice in quantum annealing). An alternative is to choose a mixing Hamiltonian such that the constraint corresponds to a constant of motion and the quantum evolution stays in the feasible subspace. Better performance of the algorithm is expected because of the much smaller search space. We consider problems with a constant Hamming weight as the constraint. We also compare different methods of generating the generalized W-state, which serves as a natural initial state for the Hamming-weight constraint. Using graph-coloring as an example, we compare the performance of using the XY model as a mixer that preserves the Hamming weight with the performance of adding a penalty term to the cost Hamiltonian.

  14. Intelligent reservoir operation system based on evolving artificial neural networks

    NASA Astrophysics Data System (ADS)

    Chaves, Paulo; Chang, Fi-John

    2008-06-01

    We propose a novel intelligent reservoir operation system based on an evolving artificial neural network (ANN). Evolving means the parameters of the ANN model are identified by the GA evolutionary optimization technique. Accordingly, the ANN model should represent the operational strategies of reservoir operation. The main advantages of the Evolving ANN Intelligent System (ENNIS) are as follows: (i) only a small number of parameters to be optimized even for long optimization horizons, (ii) easy handling of multiple decision variables, and (iii) the straightforward combination of the operation model with other prediction models. The developed intelligent system was applied to the operation of the Shihmen Reservoir in North Taiwan, to investigate its applicability and practicability. The proposed method was first applied to a simple formulation for the operation of the Shihmen Reservoir, with a single objective and a single decision variable. Its results were compared to those obtained by dynamic programming. The constructed network proved to be a good operational strategy. The method was then extended and applied to the reservoir with multiple (five) decision variables. The results demonstrated that the developed evolving neural networks improved the operation performance of the reservoir when compared to its current operational strategy. The system was capable of successfully handling multiple decision variables simultaneously and provided reasonable and suitable decisions.

  15. An effective model for ergonomic optimization applied to a new automotive assembly line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute of Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work performed in correct conditions. The model includes a schematic and systematic analysis method of the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the operation repeatability makes the optimization fundamental. The proposed application clearly demonstrates the effectiveness of the new approach.

  16. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute of Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work performed in correct conditions. The model includes a schematic and systematic analysis method of the operations, and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the operation repeatability makes the optimization fundamental. The proposed application clearly demonstrates the effectiveness of the new approach.

  17. Optimal Design and Operation of In-Situ Chemical Oxidation Using Stochastic Cost Optimization Toolkit

    NASA Astrophysics Data System (ADS)

    Kim, U.; Parker, J.; Borden, R. C.

    2014-12-01

    In-situ chemical oxidation (ISCO) has been applied at many dense non-aqueous phase liquid (DNAPL) contaminated sites. A stirred reactor-type model was developed that considers DNAPL dissolution using a field-scale mass transfer function, instantaneous reaction of oxidant with aqueous and adsorbed contaminant and with readily oxidizable natural oxygen demand ("fast NOD"), and second-order kinetic reactions with "slow NOD." DNAPL dissolution enhancement as a function of oxidant concentration and inhibition due to manganese dioxide precipitation during permanganate injection are included in the model. The DNAPL source area is divided into multiple treatment zones with different areas, depths, and contaminant masses based on site characterization data. The performance model is coupled with a cost module that involves a set of unit costs representing specific fixed and operating costs. Monitoring of groundwater and/or soil concentrations in each treatment zone is employed to assess ISCO performance and make real-time decisions on oxidant reinjection or ISCO termination. Key ISCO design variables include the oxidant concentration to be injected, time to begin performance monitoring, groundwater and/or soil contaminant concentrations to trigger reinjection or terminate ISCO, number of monitoring wells or geoprobe locations per treatment zone, number of samples per sampling event and location, and monitoring frequency. Design variables for each treatment zone may be optimized to minimize expected cost over a set of Monte Carlo simulations that consider uncertainty in site parameters. The model is incorporated in the Stochastic Cost Optimization Toolkit (SCOToolkit) program, which couples the ISCO model with a dissolved plume transport model and with modules for other remediation strategies. An example problem is presented that illustrates design tradeoffs required to deal with characterization and monitoring uncertainty. Monitoring soil concentration changes during ISCO

  18. Optimization of Design Parameters and Operating Conditions of Electrochemical Capacitors for High Energy and Power Performance

    NASA Astrophysics Data System (ADS)

    Ike, Innocent S.; Sigalas, Iakovos; Iyuke, Sunny E.

    2017-03-01

    Theoretical expressions for performance parameters of different electrochemical capacitors (ECs) have been optimized by solving them using MATLAB scripts as well as via the MATLAB R2014a optimization toolbox. The performance of the different kinds of ECs under given conditions was compared using theoretical equations and simulations of various models based on the conditions of device components, using optimal values for the coefficient associated with the battery-kind material (K_BMopt) and the constant associated with the electrolyte material (K_Eopt), as well as our symmetric electric double-layer capacitor (EDLC) experimental data. Estimation of performance parameters was possible based on values for the mass ratio of electrodes, operating potential range ratio, and specific capacitance of the electrolyte. The performance of asymmetric ECs with suitable electrode mass and operating potential range ratios, using aqueous or organic electrolyte at the appropriate operating potential range and specific capacitance, was 2.2 and 5.56 times greater than that of the symmetric EDLC and the asymmetric EC using the same aqueous electrolyte, respectively. This enhancement was accompanied by reduced cell mass and volume. Also, the storable and deliverable energies of the asymmetric EC with suitable electrode mass and operating potential range ratios using the proper organic electrolyte were 12.9 times greater than those of the symmetric EDLC using aqueous electrolyte, again with reduced cell mass and volume. The storable energy, energy density, and power density of the asymmetric EDLC with suitable electrode mass and operating potential range ratios using the proper organic electrolyte were 5.56 times higher than for a similar symmetric EDLC using aqueous electrolyte, with cell mass and volume reduced by a factor of 1.77. Also, the asymmetric EDLC with the same type of electrode and suitable electrode mass ratio, working potential range ratio, and proper organic electrolyte

  19. Parameter Optimization and Operating Strategy of a TEG System for Railway Vehicles

    NASA Astrophysics Data System (ADS)

    Heghmanns, A.; Wilbrecht, S.; Beitelschmidt, M.; Geradts, K.

    2016-03-01

    A thermoelectric generator (TEG) system demonstrator for diesel electric locomotives, with the objective of reducing the mechanical load on the thermoelectric modules (TEM), is developed and constructed to validate a one-dimensional thermo-fluid flow simulation model. The model is in good agreement with the measurements and is the basis for the optimization of the TEG's geometry by a genetic multi-objective algorithm. The best solution has a maximum power output of approx. 2.7 kW and exceeds neither the maximum back pressure of the diesel engine nor the maximum TEM hot-side temperature. To maximize the reduction of fuel consumption, an operating strategy regarding the system power output of the TEG system is developed. Finally, the potential consumption reduction in passenger and freight traffic operating modes is estimated under realistic driving conditions by means of a power train and lateral dynamics model. The fuel savings are between 0.5% and 0.7%, depending on the driving style.

  20. Magnetostatic focal spot correction for x-ray tubes operating in strong magnetic fields using iterative optimization

    PubMed Central

    Lillaney, Prasheel; Shin, Mihye; Conolly, Steven M.; Fahrig, Rebecca

    2012-01-01

    Purpose: Combining x-ray fluoroscopy and MR imaging systems for guidance of interventional procedures has become more commonplace. By designing an x-ray tube that is immune to the magnetic fields outside of the MR bore, the two systems can be placed in close proximity to each other. A major obstacle to robust x-ray tube design is correcting for the effects of the magnetic fields on the x-ray tube focal spot. A potential solution is to design active shielding that locally cancels the magnetic fields near the focal spot. Methods: An iterative optimization algorithm is implemented to design resistive active shielding coils that will be placed outside the x-ray tube insert. The optimization procedure attempts to minimize the power consumption of the shielding coils while satisfying magnetic field homogeneity constraints. The algorithm is composed of a linear programming step and a nonlinear programming step that are interleaved with each other. The coil results are verified using a finite element space charge simulation of the electron beam inside the x-ray tube. To alleviate heating concerns an optimized coil solution is derived that includes a neodymium permanent magnet. Any demagnetization of the permanent magnet is calculated prior to solving for the optimized coils. The temperature dynamics of the coil solutions are calculated using a lumped parameter model, which is used to estimate operation times of the coils before temperature failure. Results: For a magnetic field strength of 88 mT, the algorithm solves for coils that consume 588 A/cm2. This specific coil geometry can operate for 15 min continuously before reaching temperature failure. By including a neodymium magnet in the design the current density drops to 337 A/cm2, which increases the operation time to 59 min. Space charge simulations verify that the coil designs are effective, but for oblique x-ray tube geometries there is still distortion of the focal spot shape along with deflections of approximately
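
    The abstract above mentions a lumped-parameter thermal model used to estimate how long the shielding coils can operate before temperature failure. The sketch below is a generic first-order lumped thermal model of that kind; all parameter values and names are illustrative assumptions, not the paper's data.

```python
# First-order lumped-parameter thermal model: the coil heats toward a steady-state
# temperature, and the operating time is how long it takes to reach the failure limit.
import math

def time_to_thermal_failure(p_dissipated, r_th, c_th, t_ambient, t_fail):
    """p_dissipated: Joule heating (W); r_th: thermal resistance (K/W);
    c_th: thermal capacitance (J/K). Returns seconds until t_fail, or inf."""
    t_steady = t_ambient + p_dissipated * r_th         # asymptotic coil temperature
    if t_steady <= t_fail:
        return math.inf                                 # coil never reaches failure
    tau = r_th * c_th                                   # thermal time constant
    # T(t) = T_ss - (T_ss - T_amb) * exp(-t / tau); solve T(t) = T_fail for t
    return -tau * math.log((t_steady - t_fail) / (t_steady - t_ambient))

# toy usage: 200 W dissipated, 0.8 K/W, 1500 J/K, 25 C ambient, 150 C failure limit
print(time_to_thermal_failure(200.0, 0.8, 1500.0, 25.0, 150.0) / 60.0)  # ~30 minutes
```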

  1. Network Virtualization - Opportunities and Challenges for Operators

    NASA Astrophysics Data System (ADS)

    Carapinha, Jorge; Feil, Peter; Weissmann, Paul; Thorsteinsson, Saemundur E.; Etemoğlu, Çağrı; Ingþórsson, Ólafur; Çiftçi, Selami; Melo, Márcio

    In the last few years, the concept of network virtualization has gained a lot of attention both from industry and research projects. This paper evaluates the potential of network virtualization from an operator's perspective, with the short-term goal of optimizing service delivery and rollout, and on a longer term as an enabler of technology integration and migration. Based on possible scenarios for implementing and using network virtualization, new business roles and models are examined. Open issues and topics for further evaluation are identified. In summary, the objective is to identify the challenges but also new opportunities for telecom operators raised by network virtualization.

  2. 41 CFR 102-85.175 - Are the standard level services for cleaning, mechanical operation, and maintenance identified in...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... services for cleaning, mechanical operation, and maintenance identified in an OA? 102-85.175 Section 102-85... of Service § 102-85.175 Are the standard level services for cleaning, mechanical operation, and..., mechanical operation, and maintenance shall be provided in accordance with the GSA standard level of services...

  3. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  4. Optimization of USMC Hornet Inventory

    DTIC Science & Technology

    2016-06-01

    maintenance activities while adhering to the required number of aircraft for operational use. He introduced an optimization based on an ILP... operational requirements across the entire planning process. In dealing with tail assignment as an optimization problem instead of a feasibility...aircraft and the goal is to minimize the penalties associated with failing to meet operational requirements. This research focuses on the optimal

  5. Optimization of operator and physical parameters for laser welding of dental materials.

    PubMed

    Bertrand, C; le Petitcorps, Y; Albingre, L; Dupuis, V

    2004-04-10

    Interactions between lasers and materials are very complex phenomena. The success of laser welding procedures in dental metals depends on the operator's control of many parameters. The aims of this study were to evaluate factors relating to the operator's dexterity and the choice of the welding parameters (power, pulse duration and therefore energy), which are recognized determinants of weld quality. In vitro laboratory study. FeNiCr dental drawn wires were chosen for these experiments because their properties are well known. Different diameters of wires were laser welded, then tested in tension and compared to the control material as extruded, in order to evaluate the quality of the welding. Scanning electron microscopy of the fractured zone and micrograph observations perpendicular and parallel to the wire axis were also conducted in order to analyse the penetration depth and the quality of the microstructure. Additionally, the micro-hardness (Vickers type) was measured both in the welded and the heat-affected zones and then compared to the non-welded alloy. An adequate combination of energy and pulse duration, with the power set in the range between 0.8 and 1 kW, appears to improve the penetration depth of the laser beam and the success of the welding procedure. Operator skill is also an important variable. The variation in laser weld quality in dental FeNiCr wires attributable to operator skill can be minimized by optimization of the physical welding parameters.

  6. Diagnostic Utility of the Posttraumatic Stress Disorder (PTSD) Checklist for Identifying Full and Partial PTSD in Active-Duty Military.

    PubMed

    Dickstein, Benjamin D; Weathers, Frank W; Angkaw, Abigail C; Nievergelt, Caroline M; Yurgil, Kate; Nash, William P; Baker, Dewleen G; Litz, Brett T

    2015-06-01

    The aim of this study was to determine optimally efficient cutoff scores on the Posttraumatic Stress Disorder Checklist (PCL) for identifying full posttraumatic stress disorder (PTSD) and partial PTSD (P-PTSD) in active-duty Marines and Sailors. Participants were 1,016 Marines and Sailors who were administered the PCL and Clinician-Administered PTSD Scale (CAPS) 3 months after returning from Operations Iraqi and Enduring Freedom. PCL cutoffs were tested against three CAPS-based classifications: full PTSD, stringent P-PTSD, and lenient P-PTSD. A PCL score of 39 was found to be optimally efficient for identifying full PTSD. Scores of 38 and 33 were found to be optimally efficient for identifying stringent and lenient P-PTSD, respectively. Findings suggest that the PCL cutoff that is optimally efficient for detecting PTSD in active-duty Marines and Sailors is substantially lower than the score of 50 commonly used by researchers. In addition, findings provide scores useful for identifying P-PTSD in returning service members. © The Author(s) 2014.
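
    "Optimally efficient" cutoffs of the kind described above are commonly chosen by sweeping candidate questionnaire scores and keeping the one with the highest overall agreement with the reference diagnosis. The sketch below illustrates that idea; the efficiency criterion and the toy data are assumptions for illustration only, not the study's procedure or results.

```python
# Sketch: pick the questionnaire cutoff with the highest overall agreement
# (diagnostic efficiency) against a reference diagnosis such as the CAPS.

def best_cutoff(pcl_scores, caps_positive):
    """pcl_scores: list of ints (17-85); caps_positive: list of bools (reference diagnosis)."""
    n = len(pcl_scores)
    best = None
    for cutoff in range(17, 86):
        correct = sum((s >= cutoff) == d for s, d in zip(pcl_scores, caps_positive))
        efficiency = correct / n
        if best is None or efficiency > best[1]:
            best = (cutoff, efficiency)
    return best

# toy usage: returns the cutoff with the highest overall percent correct
print(best_cutoff([30, 42, 55, 28, 61, 39], [False, True, True, False, True, False]))
```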

  7. Multicriteria optimization approach to design and operation of district heating supply system over its life cycle

    NASA Astrophysics Data System (ADS)

    Hirsch, Piotr; Duzinkiewicz, Kazimierz; Grochowski, Michał

    2017-11-01

    District Heating (DH) systems are commonly supplied using local heat sources. Nowadays, modern insulation materials allow for effective and economically viable heat transportation over long distances (over 20 km). In the paper a method for optimized selection of the design and operating parameters of a long distance Heat Transportation System (HTS) is proposed. The method allows for evaluation of the feasibility and effectiveness of heat transportation from the considered heat sources. The optimized selection is formulated as a multicriteria decision-making problem. The constraints for this problem include a static HTS model, allowing consideration of the system life cycle, time variability and spatial topology. Thereby, variation of heat demand and ground temperature within the DH area, insulation and pipe aging and/or the terrain elevation profile are taken into account in the decision-making process. The HTS construction costs, pumping power, and heat losses are considered as objective functions. Inner pipe diameter, insulation thickness, temperatures and pumping station locations are optimized during the decision-making process. Moreover, variants of pipe-laying, e.g. one pipeline with a larger diameter or two with a smaller one, might be considered during the optimization. The analyzed optimization problem is multicriteria, hybrid and nonlinear. Because of these problem properties, a genetic solver was applied.

  8. A bottom-up robust optimization framework for identifying river basin development pathways under deep climate uncertainty

    NASA Astrophysics Data System (ADS)

    Taner, M. U.; Ray, P.; Brown, C.

    2016-12-01

    Hydroclimatic nonstationarity due to climate change poses challenges for long-term water infrastructure planning in river basin systems. While designing strategies that are flexible or adaptive holds intuitive appeal, development of well-performing strategies requires rigorous quantitative analysis that addresses uncertainties directly while making the best use of scientific information on the expected evolution of future climate. Multi-stage robust optimization (RO) offers a potentially effective and efficient technique for addressing the problem of staged basin-level planning under climate change; however, the necessity of assigning probabilities to future climate states or scenarios is an obstacle to implementation, given that methods to reliably assign probabilities to future climate states are not well developed. We present a method that overcomes this challenge by creating a bottom-up RO-based framework that decreases the dependency on probability distributions of future climate and rather employs them after optimization to aid selection amongst competing alternatives. The iterative process yields a vector of 'optimal' decision pathways, each under the associated set of probabilistic assumptions. In the final phase, the vector of optimal decision pathways is evaluated to identify the solutions that are least sensitive to the scenario probabilities and are most likely conditional on the climate information. The framework is illustrated for the planning of new dam and hydro-agricultural expansion projects in the Niger River Basin over a 45-year planning period from 2015 to 2060.

  9. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for the given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
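
    A generic particle swarm optimisation loop of the kind the study describes is sketched below, maximising a black-box objective over bounded decision variables. The objective function, bounds and PSO coefficients are placeholders, not the authors' model of the PV/EL system.

```python
# Generic particle swarm optimisation sketch: maximise a black-box objective
# (a stand-in for the PV/EL energy-transfer efficiency) over bounded decision variables.
import random

def pso_maximize(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[max(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # keep within bounds
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
        gbest = pbest[max(range(n_particles), key=lambda i: pbest_val[i])][:]
    return gbest, objective(gbest)

# toy usage with a placeholder objective over two decision variables
best_x, best_val = pso_maximize(lambda x: -(x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2,
                                bounds=[(0.0, 5.0), (0.0, 5.0)])
```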

  10. Operation bandwidth optimization of photonic differentiators.

    PubMed

    Yan, Siqi; Zhang, Yong; Dong, Jianji; Zheng, Aoling; Liao, Shasha; Zhou, Hailong; Wu, Zhao; Xia, Jinsong; Zhang, Xinliang

    2015-07-27

    We theoretically investigate the operation bandwidth limits of the photonic differentiator, including the upper limit, which is set by the device operation bandwidth, and the lower limit, which is set by the energy efficiency (EE) and the detection noise level. Taking the silicon photonic crystal L3 nano-cavity (PCN) as an example, we experimentally demonstrate for the first time that the lower limit of the operation bandwidth does exist and that differentiators with different bandwidths have significantly different acceptable input pulse width ranges, consistent with the theoretical prediction. Furthermore, we put forward a novel photonic differentiator scheme employing cascaded PCNs with different Q factors, which is likely to expand the operation bandwidth range of photonic differentiators dramatically.

  11. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach. Different from the traditional multithreaded OS in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, the hybrid scheduling mechanism, which can decrease both the thread scheduling overhead and the thread stack number, is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized, but the energy cost is also optimized in LiveOS, and this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to a single-core WSN system. Memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make the multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  12. Improving striping operations through system optimization.

    DOT National Transportation Integrated Search

    2015-09-01

    Striping operations generate a significant workload for Missouri Department of Transportation (MoDOT) maintenance operations. The requirement for each striping crew to replenish its stock of paint and other consumable items from a bulk storage fa...

  13. Optimizing Aesthetic Outcomes in Delayed Breast Reconstruction

    PubMed Central

    2017-01-01

    Background: The need to restore both the missing breast volume and breast surface area makes achieving excellent aesthetic outcomes in delayed breast reconstruction especially challenging. Autologous breast reconstruction can be used to achieve both goals. The aim of this study was to identify surgical maneuvers that can optimize aesthetic outcomes in delayed breast reconstruction. Methods: This is a retrospective review of operative and clinical records of all patients who underwent unilateral or bilateral delayed breast reconstruction with autologous tissue between April 2014 and January 2017. Three groups of delayed breast reconstruction patients were identified based on patient characteristics. Results: A total of 26 flaps were successfully performed in 17 patients. Key surgical maneuvers for achieving aesthetically optimal results were identified. A statistically significant difference for volume requirements was identified in cases where a delayed breast reconstruction and a contralateral immediate breast reconstruction were performed simultaneously. Conclusions: Optimal aesthetic results can be achieved with: (1) restoration of breast skin envelope with tissue expansion when possible, (2) optimal positioning of a small skin paddle to be later incorporated entirely into a nipple areola reconstruction when adequate breast skin surface area is present, (3) limiting the reconstructed breast mound to 2 skin tones when large area skin resurfacing is required, (4) increasing breast volume by deepithelializing, not discarding, the inferior mastectomy flap skin, (5) eccentric division of abdominal flaps when immediate and delayed bilateral breast reconstructions are performed simultaneously; and (6) performing second-stage breast reconstruction revisions and fat grafting. PMID:28894666

  14. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.

  15. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis, easily applicable to high order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
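
    The paper's recursion is not reproduced in the abstract, so the sketch below shows one generic way to accumulate a Fisher information matrix during a simulated calibration experiment, using finite-difference output sensitivities. The model, parameter names and measurement noise are illustrative assumptions.

```python
# Sketch: accumulate the information matrix time step by time step from output
# sensitivities, FIM = sum_k S_k^T S_k / sigma^2, with S_k from finite differences.
import numpy as np

def information_matrix(simulate, theta, t_grid, sigma=1.0, eps=1e-4):
    """simulate(theta, t) -> model output at time t; theta: parameter vector."""
    theta = np.asarray(theta, dtype=float)
    n_p = theta.size
    fim = np.zeros((n_p, n_p))
    for t in t_grid:                                   # recursive accumulation over time
        y0 = simulate(theta, t)
        sens = np.empty(n_p)
        for j in range(n_p):                           # one-sided finite-difference sensitivities
            pert = theta.copy()
            pert[j] += eps * max(abs(theta[j]), 1.0)
            sens[j] = (simulate(pert, t) - y0) / (pert[j] - theta[j])
        fim += np.outer(sens, sens) / sigma**2
    return fim

# toy usage with a two-parameter exponential model y = a * exp(-b t)
fim = information_matrix(lambda th, t: th[0] * np.exp(-th[1] * t),
                         theta=[2.0, 0.5], t_grid=np.linspace(0.0, 10.0, 50))
```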

  16. Modeling Total Dissolved Gas for Optimal Operation of Multireservoir Systems

    DOE PAGES

    Politano, Marcela; Castro, Alejandro; Hadjerioua, Boualem

    2017-02-09

    One important environmental issue of hydropower in the Columbia and Snake River Basins (Pacific Northwest region of the United States) is elevated total dissolved gas (TDG) downstream of a dam, which has the potential to cause gas bubble disease in affected fish. Gas supersaturation in the Columbia River Basin primarily occurs due to dissolution of bubbles entrained during spill events. This paper presents a physically based TDG model that can be used to optimize spill operations in multireservoir hydropower systems. Independent variables of the model are forebay TDG, tailwater elevation, spillway and powerhouse discharges, project head, and environmental parameters such as temperature and atmospheric pressure. The model contains seven physically meaningful experimental parameters, which were calibrated and validated against TDG data collected downstream of Rock Island Dam (Washington) from 2008 to 2012. Finally, a sensitivity analysis was performed to increase the understanding of the relationships between TDG downstream of the dam and processes such as air entrainment, lateral powerhouse flow, and dissolution.

  17. Flight plan optimization

    NASA Astrophysics Data System (ADS)

    Dharmaseelan, Anoop; Adistambha, Keyne D.

    2015-05-01

    Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes. Routes can be optimized by searching for the best connections based on the cost function defined by the airline. The most common algorithm used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result and the time taken for the search is relatively long. This paper experiments with a new algorithm to optimize route search, combining the principles of simulated annealing and genetic algorithms. The experimental route search results presented are shown to be computationally fast and accurate compared with timings from the genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
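
    The Dijkstra baseline mentioned above is sketched below: find the cheapest route through a flight network whose edge weights encode the airline's cost function. The tiny network and cost values are illustrative assumptions, not the paper's data.

```python
# Sketch of a Dijkstra route search over a toy flight network.
import heapq

def cheapest_route(graph, origin, destination):
    """graph: dict mapping airport -> list of (neighbor, cost) connections."""
    queue = [(0.0, origin, [origin])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, edge_cost in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

network = {"SYD": [("SIN", 4.2), ("DPS", 3.1)],
           "DPS": [("SIN", 1.5)],
           "SIN": [("LHR", 9.8)]}
print(cheapest_route(network, "SYD", "LHR"))   # -> (14.0, ['SYD', 'SIN', 'LHR'])
```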

  18. Performance, stability and operation voltage optimization of screen-printed aqueous supercapacitors

    PubMed Central

    Lehtimäki, Suvi; Railanmaa, Anna; Keskinen, Jari; Kujala, Manu; Tuukkanen, Sampo; Lupo, Donald

    2017-01-01

    Harvesting micropower energy from the ambient environment requires an intermediate energy storage, for which printed aqueous supercapacitors are well suited due to their low cost and environmental friendliness. In this work, a systematic study of a large set of devices is used to investigate the effect of process variability and operating voltage on the performance and stability of screen-printed aqueous supercapacitors. The current collectors and active layers are printed with graphite and activated carbon inks, respectively, and aqueous NaCl is used as the electrolyte. The devices are characterized through galvanostatic discharge measurements for quantitative determination of capacitance and equivalent series resistance (ESR), as well as impedance spectroscopy for a detailed study of the factors contributing to ESR. The capacitances are 200–360 mF and the ESRs 7.9–12.7 Ω, depending on the layer thicknesses. The ESR is found to be dominated by the resistance of the graphite current collectors and is compatible with applications in low-power distributed electronics. The effects of different operating voltages on the capacitance, leakage and aging rate of the supercapacitors are tested, and 1.0 V is found to be the optimal choice for using the devices in energy harvesting applications. PMID:28382962
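
    As a reminder of how capacitance and ESR are typically extracted from a galvanostatic (constant-current) discharge curve like those described above, the sketch below computes ESR from the instantaneous voltage drop at the start of discharge and capacitance from the subsequent voltage decay. The numerical values are illustrative, not the paper's measurements.

```python
# Sketch: extract capacitance and ESR from a constant-current discharge curve.

def capacitance_and_esr(i_discharge, v_before, v_after_drop, v_end, t_discharge):
    """i_discharge: discharge current (A); v_before: voltage just before discharge (V);
    v_after_drop: voltage right after the IR drop (V); v_end: voltage at the end (V);
    t_discharge: discharge duration (s). Returns (capacitance in F, ESR in ohm)."""
    esr = (v_before - v_after_drop) / i_discharge                       # IR drop / current
    capacitance = i_discharge * t_discharge / (v_after_drop - v_end)    # C = I * dt / dV
    return capacitance, esr

# toy usage: 1 mA discharge, 20 mV IR drop, 0.5 V decay over 150 s
print(capacitance_and_esr(1e-3, 1.00, 0.98, 0.48, 150.0))   # ~ (0.3 F, 20 ohm)
```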

  19. Combustion Efficiency, Flameout Operability Limits and General Design Optimization for Integrated Ramjet-Scramjet Hypersonic Vehicles

    NASA Astrophysics Data System (ADS)

    Mbagwu, Chukwuka Chijindu

    High speed, air-breathing hypersonic vehicles encounter a varied range of engine and operating conditions traveling along cruise/ascent missions at high altitudes and dynamic pressures. Variations of ambient pressure, temperature, Mach number, and dynamic pressure can affect the combustion conditions in conflicting ways. Computations were performed to understand propulsion tradeoffs that occur when a hypersonic vehicle travels along an ascent trajectory. Proper Orthogonal Decomposition methods were applied for the reduction of flamelet chemistry data in an improved combustor model. Two operability limits are set by requirements that combustion efficiency exceed selected minima and flameout be avoided. A method for flameout prediction based on empirical Damkohler number measurements is presented. Operability limits are plotted that define allowable flight corridors on an altitude versus flight Mach number performance map; fixed-acceleration ascent trajectories were considered for this study. Several design rules are also presented for a hypersonic waverider with a dual-mode scramjet engine. Focus is placed on ''vehicle integration" design, differing from previous ''propulsion-oriented" design optimization. The well-designed waverider falls between that of an aircraft (high lift-to-drag ratio) and a rocket (high thrust-to-drag ratio). 84 variations of an X-43-like vehicle were run using the MASIV scramjet reduced order model to examine performance tradeoffs. Informed by the vehicle design study, variable-acceleration trajectory optimization was performed for three constant dynamic pressures ascents. Computed flameout operability limits were implemented as additional constraints to the optimization problem. The Michigan-AFRL Scramjet In-Vehicle (MASIV) waverider model includes finite-rate chemistry, applied scaling laws for 3-D turbulent mixing, ram-scram transition and an empirical value of the flameout Damkohler number. A reduced-order modeling approach is justified

  20. Screening and prioritisation of chemical risks from metal mining operations, identifying exposure media of concern.

    PubMed

    Pan, Jilang; Oates, Christopher J; Ihlenfeld, Christian; Plant, Jane A; Voulvoulis, Nikolaos

    2010-04-01

    Metals have been central to the development of human civilisation from the Bronze Age to modern times, although in the past, metal mining and smelting have been the cause of serious environmental pollution with the potential to harm human health. Despite problems from artisanal mining in some developing countries, modern mining to Western standards now uses the best available mining technology combined with environmental monitoring, mitigation and remediation measures to limit emissions to the environment. This paper develops risk screening and prioritisation methods previously used for contaminated land on military and civilian sites and engineering systems for the analysis and prioritisation of chemical risks from modern metal mining operations. It uses hierarchical holographic modelling and multi-criteria decision making to analyse and prioritise the risks from potentially hazardous inorganic chemical substances released by mining operations. A case study of an active platinum group metals mine in South Africa is used to demonstrate the potential of the method. This risk-based methodology for identifying, filtering and ranking mining-related environmental and human health risks can be used to identify exposure media of greatest concern to inform risk management. It also provides a practical decision-making tool for mine acquisition and helps to communicate risk to all members of mining operation teams.

  1. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.

  2. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    The artificial immune system is one of the most recently introduced intelligence methods, inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems with the characteristics of multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is proposed first. The recombination operator can generate promising candidate solutions to enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. The comparisons with the classic CSA, the CSA with recombination operator (RCSA), and the CSA with recombination and modified hypermutation operator (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662

  3. [Early activation of heart-operated patients as a tool for optimization of cardio-surgery curation (review)].

    PubMed

    2014-04-01

    In recent years, the tactic of early activation of cardiac surgery patients has been widely introduced abroad. Necessary components of this approach are early termination of postoperative mechanical ventilation and tracheal extubation, shortening of the intensive care stay to 1 day, and discharge from hospital after 5 days. As a result of the reduced hospitalization period, treatment costs are reduced significantly. The goal of this research was to analyze anesthesia methods that allow early extubation and activation after cardiac surgery. Data from the anesthesia protocols and postoperative periods of 270 patients were analyzed. It was concluded that the applied anesthesia methods ensure adequate protection from operative stress and allow a reduced duration of postoperative mechanical ventilation and early activation of patients without reducing their safety. It was also shown that the type of anesthetic used does not influence the tempo of postoperative activation. The research supports the advisability of using the tactic of early activation of patients after heart operations and considers it a tool for optimizing cardiac surgical care.

  4. State-of-The-Art of Modeling Methodologies and Optimization Operations in Integrated Energy System

    NASA Astrophysics Data System (ADS)

    Zheng, Zhan; Zhang, Yongjun

    2017-08-01

    Rapid advances in low carbon technologies and smart energy communities are reshaping future energy patterns. Uncertainty in energy production and demand is paving the way toward decentralized management. Current energy infrastructures cannot meet supply and consumption challenges, along with emerging environmental and economic requirements. The Integrated Energy System (IES), in which electric power, natural gas, and heating are coupled with each other, is expected to gradually become one of the main comprehensive and optimal energy solutions, offering high flexibility, friendly renewables absorption and improved efficiency. Against these global energy trends, this literature review first presents the definition and characteristics of IES. Modeling issues for energy subsystems and coupling elements are then analyzed. It is pointed out that decomposed and integrated analysis methods are the key algorithms for IES optimization operation problems, followed by an exploration of IES market mechanisms. Finally, several future research directions for IES, such as dynamic modeling, peer-to-peer trading, and coupled market design, are discussed.

  5. A Stochastic Dynamic Programming Model With Fuzzy Storage States Applied to Reservoir Operation Optimization

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas

    2004-08-01

    Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for discretization of the state variables. As an important state variable, the discretization of reservoir storage volume has a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by considering it as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers. In this approach, instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP with a much finer discretized space. It is believed that this advantage of the fuzzy SDP model is due to the smooth transitions between storage intervals, which benefit from soft boundaries.

  6. Swainsonine, a novel fungal metabolite: optimization of fermentative production and bioreactor operations using evolutionary programming.

    PubMed

    Singh, Digar; Kaur, Gurvinder

    2014-08-01

    The optimization of bioreactor operations towards swainsonine production was performed using an artificial neural network coupled evolutionary program (EP)-based optimization algorithm fitted with experimental one-factor-at-a-time (OFAT) results. The effects of varying agitation (300-500 rpm) and aeration (0.5-2.0 vvm) rates for different incubation times (72-108 h) were evaluated in a bench-top bioreactor. Prominent scale-up parameters, gassed power per unit volume (P_g/V_L, W/m³) and volumetric oxygen mass transfer coefficient (K_La, s⁻¹), were correlated with the optimized conditions. A maximum of 6.59 ± 0.10 μg/mL of swainsonine production was observed at 400 rpm and 1.5 vvm at 84 h in the OFAT experiments, with corresponding P_g/V_L and K_La values of 91.66 W/m³ and 341.48 × 10⁻⁴ s⁻¹, respectively. The EP optimization algorithm predicted a maximum of 10.08 μg/mL of swainsonine at 325.47 rpm, 1.99 vvm and 80.75 h against the experimental production of 7.93 ± 0.52 μg/mL at constant K_La (349.25 × 10⁻⁴ s⁻¹) and significantly reduced P_g/V_L (33.33 W/m³) drawn by the impellers.

  7. Identifying On-Orbit Test Targets for Space Fence Operational Testing

    NASA Astrophysics Data System (ADS)

    Pechkis, D.; Pacheco, N.; Botting, T.

    2014-09-01

    Space Fence will be an integrated system of two ground-based, S-band (2 to 4 GHz) phased-array radars located in Kwajalein and perhaps Western Australia [1]. Space Fence will cooperate with other Space Surveillance Network sensors to provide space object tracking and radar characterization data to support U.S. Strategic Command space object catalog maintenance and other space situational awareness needs. We present a rigorous statistical test design intended to test Space Fence to the letter of the program requirements as well as to characterize the system performance across the entire operational envelope. The design uses altitude, size, and inclination as independent factors in statistical tests of dependent variables (e.g., observation accuracy) linked to requirements. The analysis derives the type and number of necessary test targets. Comparing the resulting sample sizes with the number of currently known targets, we identify those areas where modelling and simulation methods are needed. Assuming hypothetical Kwajalein radar coverage and a conservative number of radar passes per object per day, we conclude that tests involving real-world space objects should take no more than 25 days to evaluate all operational requirements; almost 60 percent of the requirements can be tested in a single day and nearly 90 percent can be tested in one week or less. Reference: [1] L. Haines and P. Phu, Space Fence PDR Concept Development Phase, 2011 AMOS Conference Technical Papers.

  8. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms

    PubMed Central

    Hernández-Ocaña, Betania; Pozos-Parra, Ma. Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators looks to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, with respect to those mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem. PMID:27057156

  9. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms.

    PubMed

    Hernández-Ocaña, Betania; Pozos-Parra, Ma Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two-swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators looks to increase the production of better solutions during the search. As a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms other BFOA-based approaches and finds highly competitive mechanisms, with a single set of parameter values and with fewer evaluations in the first synthesis problem, with respect to those mechanisms obtained by the differential evolution algorithm, which needed a parameter fine-tuning process for each optimization problem.

  10. Unit operation optimization for the manufacturing of botanical injections using a design space approach: a case study of water precipitation.

    PubMed

    Gong, Xingchu; Chen, Huali; Chen, Teng; Qu, Haibin

    2014-01-01

    The quality by design (QbD) concept is a paradigm for the improvement of botanical injection quality control. In this work, the water precipitation process for the manufacturing of Xueshuantong injection, a botanical injection made from Notoginseng Radix et Rhizoma, was optimized using a design space approach as an example. Saponin recovery and total saponin purity (TSP) in the supernatant were identified as the critical quality attributes (CQAs) of water precipitation using a risk assessment covering all the processes of Xueshuantong injection. An Ishikawa diagram and fractional factorial design experiments were applied to determine the critical process parameters (CPPs). Dry matter content of concentrated extract (DMCC), amount of water added (AWA), and stirring speed (SS) were identified as CPPs. Box-Behnken designed experiments were carried out to develop models between the CPPs and the process CQAs. Determination coefficients were higher than 0.86 for all the models. High TSP in the supernatant can be obtained when DMCC is low and SS is high. Saponin recoveries decreased as DMCC increased. Incomplete collection of the supernatant was the main reason for the loss of saponins. The design space was calculated using a Monte-Carlo simulation method with an acceptable probability of 0.90. The recommended normal operation region is located at a DMCC of 0.38-0.41 g/g, an AWA of 3.7-4.9 g/g, and an SS of 280-350 rpm, with a probability of more than 0.919 of attaining the CQA criteria. Verification experiment results showed that operating with DMCC, SS, and AWA within the design space can attain the CQA criteria with high probability.
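
    A minimal sketch of the Monte-Carlo design-space calculation described above: for a candidate operating point (DMCC, AWA, SS), sample the model uncertainty, predict the CQAs, and estimate the probability that all CQA criteria are met. The model forms, noise levels and acceptance criteria below are illustrative assumptions, not the study's fitted models.

```python
# Sketch: estimate the probability of meeting all CQA criteria at an operating point.
import random

def probability_of_meeting_cqas(predict_cqas, operating_point, criteria, n_samples=5000):
    """predict_cqas(point) -> dict of CQA values (model uncertainty sampled inside);
    criteria: dict of CQA name -> (lower, upper) acceptance limits."""
    hits = 0
    for _ in range(n_samples):
        cqas = predict_cqas(operating_point)
        if all(lo <= cqas[name] <= hi for name, (lo, hi) in criteria.items()):
            hits += 1
    return hits / n_samples

# toy usage with placeholder response models plus Gaussian model error
def toy_models(point):
    dmcc, awa, ss = point
    return {"saponin_recovery": 0.97 - 0.2 * dmcc + random.gauss(0, 0.01),
            "tsp": 0.55 + 0.0003 * ss - 0.1 * dmcc + random.gauss(0, 0.01)}

p = probability_of_meeting_cqas(toy_models, (0.40, 4.0, 300),
                                {"saponin_recovery": (0.85, 1.0), "tsp": (0.50, 1.0)})
```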

  11. A fast algorithm for identifying friends-of-friends halos

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Modi, C.

    2017-07-01

    We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary expensive neighbor queries, uses minimal memory overhead, and avoids slowdown in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast two-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure and use a splay operation to reduce the average cost of identifying the root of a feature from O(log L) to O(1) (L is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from O(L log L) to O(L), reducing the number of operations per splay by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high-density peaks from O(δ²) to O(δ). We show that for a cosmological data set the algorithm eliminates more than half of the merge operations for typically used linking lengths b ≈ 0.2 (relative to the mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
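
    The key trick in this record is keeping root lookups in the merge tree cheap. The sketch below uses the closely related disjoint-set structure with path compression (standing in for the paper's splay operation) on top of SciPy's KD-tree pair enumeration; it is a minimal illustration, not the authors' dual KD-tree implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(points, linking_length):
    """Label friends-of-friends features using a disjoint-set structure.

    Path compression plays the role of the paper's splay step: it keeps root lookups cheap.
    Pair enumeration is delegated to a single KD-tree, not the paper's dual-tree code.
    """
    n = len(points)
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path compression: flatten the tree on the way up
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    tree = cKDTree(points)
    for i, j in tree.query_pairs(r=linking_length):   # all pairs closer than the linking length
        union(i, j)
    return np.array([find(i) for i in range(n)])      # feature label (root index) per point

# Toy usage: two well-separated clumps should come out as two features.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0.0, 0.1, (50, 3)), rng.normal(5.0, 0.1, (50, 3))])
labels = friends_of_friends(pts, linking_length=0.5)
print(len(set(labels)), "features found")
```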

  12. Optimizing operating parameters of a honeycomb zeolite rotor concentrator for processing TFT-LCD volatile organic compounds with competitive adsorption characteristics.

    PubMed

    Lin, Yu-Chih; Chang, Feng-Tang

    2009-05-30

    In this study, we attempted to enhance the removal efficiency of a honeycomb zeolite rotor concentrator (HZRC), operated at optimal parameters, for processing TFT-LCD volatile organic compounds (VOCs) with competitive adsorption characteristics. The results indicated that when the HZRC processed a VOC stream of mixed compounds, compounds with a high boiling point took precedence in the adsorption process. In addition, low-boiling-point compounds already adsorbed onto the HZRC were displaced by the high-boiling-point compounds. To achieve optimal operating parameters for high VOC removal efficiency, the results suggested controlling the inlet velocity to <1.5 m/s, reducing the concentration ratio to 8 times, increasing the desorption temperature to 200-225 degrees C, and setting the rotation speed to 6.5 rpm.

  13. Implementation of Canny and Isotropic Operator with Power Law Transformation to Identify Cervical Cancer

    NASA Astrophysics Data System (ADS)

    Amalia, A.; Rachmawati, D.; Lestari, I. A.; Mourisa, C.

    2018-03-01

    Colposcopy has been used primarily to diagnose pre-cancerous and cancerous lesions because this procedure gives a magnified view of the tissues of the vagina and the cervix. However, the poor quality of colposcopy images sometimes makes it challenging for physicians to recognize and analyze them. Generally, image-processing approaches to identifying cervical cancer implement a complex classification or clustering method. In this study, we wanted to show that cervical cancer can be identified by applying edge detection alone to the colposcopy image. We implemented and compared two edge detection operators: the isotropic operator and the Canny operator. The research methodology comprises image processing, training, and testing stages. In the image processing stage, the colposcopy image is transformed by an nth-root power-law transformation to obtain a better detection result, followed by the edge detection process. Training is the process of labelling all dataset images with the cervical cancer stage; this process involved a pathologist as an expert in diagnosing the colposcopy images as a reference. Testing is the process of deciding the cancer stage classification by comparing the similarity of colposcopy images in the testing stage with the images from the training process. We used 30 images as a dataset. Both the Canny and isotropic operators achieved the same accuracy of 80%. The average running time for the Canny operator implementation is 0.3619206 ms, while the isotropic operator takes 1.49136262 ms. The results show that the Canny operator is better than the isotropic operator because it generates a more precise edge in less time.
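
    A minimal sketch of the processing pipeline described above, assuming OpenCV is available: a power-law (gamma) transform followed by Canny and an isotropic gradient operator. The file name, gamma value, kernel weights, and thresholds are illustrative assumptions, not the values used in the study.

```python
import cv2
import numpy as np

def power_law(img, gamma=0.5):
    """Power-law (gamma, nth-root style) transform of an 8-bit grayscale image."""
    normalized = img.astype(np.float32) / 255.0
    return np.uint8(np.clip((normalized ** gamma) * 255.0, 0, 255))

def isotropic_edges(img, threshold=60.0):
    """Isotropic gradient operator (sqrt(2) center weights), thresholded to a binary edge map."""
    k = np.sqrt(2.0)
    kx = np.array([[-1.0, 0.0, 1.0], [-k, 0.0, k], [-1.0, 0.0, 1.0]], dtype=np.float32)
    gx = cv2.filter2D(img.astype(np.float32), -1, kx)
    gy = cv2.filter2D(img.astype(np.float32), -1, kx.T)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return np.uint8((magnitude > threshold) * 255)

# Hypothetical file name; gamma and thresholds would need tuning on real colposcopy images.
gray = cv2.imread("colposcopy.png", cv2.IMREAD_GRAYSCALE)
enhanced = power_law(gray, gamma=0.5)
edges_canny = cv2.Canny(enhanced, 50, 150)
edges_isotropic = isotropic_edges(enhanced)
```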

  14. Premium cost optimization of operational and maintenance of green building in Indonesia using life cycle assessment method

    NASA Astrophysics Data System (ADS)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Budiman, Rachmat; Riswanto

    2017-06-01

    Buildings have a large impact on environmental development. There are three general motives in building, namely economy, society, and environment. Total completed building construction in Indonesia increased by 116% from 2009 to 2011, raising energy consumption by 11% within the last three years. In fact, 70% of energy consumption is used for electricity in commercial buildings, which leads to an increase in greenhouse gas emissions of 25%. Green building life cycle cost in Indonesia is known for its high upfront cost. The optimization in this research aims to improve building performance with several green concept alternatives. The research methodology is a mixed method of qualitative and quantitative approaches through questionnaire surveys and a case study. The success of the optimization functions in the existing green building is assessed over the operational and maintenance phase with the life cycle assessment method. Optimization results were chosen based on the largest building life cycle efficiency and the most cost-effective payback.

  15. Mixed integer simulation optimization for optimal hydraulic fracturing and production of shale gas fields

    NASA Astrophysics Data System (ADS)

    Li, J. C.; Gong, B.; Wang, H. G.

    2016-08-01

    Optimal development of shale gas fields involves designing a most productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design-determining well placement, number of fracturing stages, and fracture lengths-is defined by specifying a set of integer ordered blocks to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables such as bottom hole pressures or production rates for well operations are real valued. Shale gas development problems, therefore, can be mathematically formulated with mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance for a hydraulic fracturing and well control plan. To find the optimal fracturing design and well operation is challenging because the problem is a mixed integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied for mixed integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.

  16. Online total organic carbon (TOC) monitoring for water and wastewater treatment plants processes and operations optimization

    NASA Astrophysics Data System (ADS)

    Assmann, Céline; Scott, Amanda; Biller, Dondra

    2017-08-01

    Organic measurements, such as biological oxygen demand (BOD) and chemical oxygen demand (COD), were developed decades ago in order to measure organics in water. Today, these time-consuming measurements are still used as parameters to check the water treatment quality; however, the time required to generate a result, ranging from hours to days, does not allow COD or BOD to be useful process control parameters - see (1) Standard Method 5210 B; 5-day BOD Test, 1997, and (2) ASTM D1252; COD Test, 2012. Online organic carbon monitoring allows for effective process control because results are generated every few minutes. Though it does not replace the BOD or COD measurements still required for compliance reporting, it allows for smart, data-driven and rapid decision-making to improve process control and optimization or meet compliance requirements. Thanks to the smart interpretation of generated data and the capability to take real-time actions, municipal drinking water and wastewater treatment facility operators can positively impact their OPEX (operational expenditure) efficiencies and their capabilities to meet regulatory requirements. This paper describes how three municipal wastewater and drinking water plants gained process insights and identified optimization opportunities through the implementation of online total organic carbon (TOC) monitoring.

  17. Optimization of entanglement witnesses

    NASA Astrophysics Data System (ADS)

    Lewenstein, M.; Kraus, B.; Cirac, J. I.; Horodecki, P.

    2000-11-01

    An entanglement witness (EW) is an operator that allows the detection of entangled states. We give necessary and sufficient conditions for such operators to be optimal, i.e., to detect entangled states in an optimal way. We show how to optimize general EWs, and then we particularize our results to the nondecomposable ones; the latter are those that can detect positive partial transpose entangled states (PPTES's). We also present a method to systematically construct and optimize this last class of operators based on the existence of "edge" PPTES's, i.e., states that violate the range separability criterion [Phys. Lett. A 232, 333 (1997)] in an extreme manner. This method also permits a systematic construction of nondecomposable positive maps (PM's). Our results lead to a sufficient condition for entanglement in terms of nondecomposable EW's and PM's. Finally, we illustrate our results by constructing optimal EWs acting on H = C^2 ⊗ C^4. The corresponding PM's constitute examples of PM's with minimal "qubit" domains, or, equivalently, minimal Hermitian conjugate codomains.

  18. Operation management of daily economic dispatch using novel hybrid particle swarm optimization and gravitational search algorithm with hybrid mutation strategy

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Huang, Song; Ji, Zhicheng

    2017-07-01

    This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs) considering market-based energy pricing. A daily ED model was formulated and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy includes two mutation operators: chaotic mutation and Gaussian mutation. The proposed algorithm was tested on the IEEE 33-bus system, and results show that the approach is effective for this problem.
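
    The hybrid mutation strategy is only named in the abstract, so the sketch below shows one plausible reading: a logistic-map chaotic mutation and a Gaussian mutation applied to particle positions. The map parameters, mutation probability, and bounds are assumptions for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def chaotic_mutation(position, chaos_state, lower, upper):
    """Perturb one randomly chosen dimension using a logistic-map chaotic sequence."""
    chaos_state = 4.0 * chaos_state * (1.0 - chaos_state)       # logistic map with r = 4
    mutated = lower + chaos_state * (upper - lower)             # map the chaos values into the bounds
    position = position.copy()
    dim = rng.integers(position.size)
    position[dim] = mutated[dim]
    return position, chaos_state

def gaussian_mutation(position, sigma, lower, upper):
    """Perturb the whole particle with zero-mean Gaussian noise, clipped to the bounds."""
    return np.clip(position + rng.normal(0.0, sigma, position.shape), lower, upper)

# Toy usage on a 5-dimensional particle; the hybrid strategy picks one operator per mutation event.
lower, upper = np.full(5, -1.0), np.full(5, 1.0)
particle = rng.uniform(lower, upper)
chaos = rng.uniform(0.01, 0.99, size=5)
if rng.random() < 0.5:
    particle, chaos = chaotic_mutation(particle, chaos, lower, upper)
else:
    particle = gaussian_mutation(particle, sigma=0.1, lower=lower, upper=upper)
```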

  19. Optimized tomography of continuous variable systems using excitation counting

    NASA Astrophysics Data System (ADS)

    Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang

    2016-11-01

    We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.

  20. High speed civil transport aerodynamic optimization

    NASA Technical Reports Server (NTRS)

    Ryan, James S.

    1994-01-01

    This is a report of work in support of the Computational Aerosciences (CAS) element of the Federal HPCC program. Specifically, CFD and aerodynamic optimization are being performed on parallel computers. The long-range goal of this work is to facilitate teraflops-rate multidisciplinary optimization of aerospace vehicles. This year's work is targeted for application to the High Speed Civil Transport (HSCT), one of four CAS grand challenges identified in the HPCC FY 1995 Blue Book. This vehicle is to be a passenger aircraft, with the promise of cutting overseas flight time by more than half. To meet fuel economy, operational costs, environmental impact, noise production, and range requirements, improved design tools are required, and these tools must eventually integrate optimization, external aerodynamics, propulsion, structures, heat transfer, controls, and perhaps other disciplines. The fundamental goal of this project is to contribute to improved design tools for U.S. industry, and thus to the nation's economic competitiveness.

  1. DTI measures identify mild and moderate TBI cases among patients with complex health problems: A receiver operating characteristic analysis of U.S. veterans.

    PubMed

    Main, Keith L; Soman, Salil; Pestilli, Franco; Furst, Ansgar; Noda, Art; Hernandez, Beatriz; Kong, Jennifer; Cheng, Jauhtai; Fairchild, Jennifer K; Taylor, Joy; Yesavage, Jerome; Wesson Ashford, J; Kraemer, Helena; Adamson, Maheen M

    2017-01-01

    Standard MRI methods are often inadequate for identifying mild traumatic brain injury (TBI). Advances in diffusion tensor imaging now provide potential biomarkers of TBI among white matter fascicles (tracts). However, it is still unclear which tracts are most pertinent to TBI diagnosis. This study ranked fiber tracts on their ability to discriminate patients with and without TBI. We acquired diffusion tensor imaging data from military veterans admitted to a polytrauma clinic (Overall n = 109; Age: M = 47.2, SD = 11.3; Male: 88%; TBI: 67%). TBI diagnosis was based on self-report and neurological examination. Fiber tractography analysis produced 20 fiber tracts per patient. Each tract yielded four clinically relevant measures (fractional anisotropy, mean diffusivity, radial diffusivity, and axial diffusivity). We applied receiver operating characteristic (ROC) analyses to identify the most diagnostic tract for each measure. The analyses produced an optimal cutpoint for each tract. We then used kappa coefficients to rate the agreement of each cutpoint with the neurologist's diagnosis. The tract with the highest kappa was most diagnostic. As a check on the ROC results, we performed a stepwise logistic regression on each measure using all 20 tracts as predictors. We also bootstrapped the ROC analyses to compute the 95% confidence intervals for sensitivity, specificity, and the highest kappa coefficients. The ROC analyses identified two fiber tracts as most diagnostic of TBI: the left cingulum (LCG) and the left inferior fronto-occipital fasciculus (LIF). Like ROC, logistic regression identified LCG as most predictive for the FA measure but identified the right anterior thalamic tract (RAT) for the MD, RD, and AD measures. These findings are potentially relevant to the development of TBI biomarkers. Our methods also demonstrate how ROC analysis may be used to identify clinically relevant variables in the TBI population.
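
    The analysis pipeline (ROC curve, optimal cutpoint, agreement via kappa) can be reproduced generically with scikit-learn. The sketch below ranks synthetic stand-in tract measures by the Cohen's kappa of their Youden-optimal cutpoints; the data and the assumption that higher values indicate TBI are purely illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve, cohen_kappa_score

def best_cutpoint_kappa(diagnosis, measure):
    """ROC-optimal (Youden J) cutpoint for one tract measure, scored with Cohen's kappa.

    Assumes higher measure values indicate TBI; negate the measure if the opposite holds.
    """
    fpr, tpr, thresholds = roc_curve(diagnosis, measure)
    cutpoint = thresholds[np.argmax(tpr - fpr)]              # maximize Youden's J = TPR - FPR
    predicted = (measure >= cutpoint).astype(int)
    return cutpoint, cohen_kappa_score(diagnosis, predicted)

# Synthetic stand-in for 20 tract measures in 109 patients (not the study's data);
# every third tract is made mildly sensitive to the TBI label.
rng = np.random.default_rng(4)
tbi = rng.integers(0, 2, size=109)
tracts = {f"tract_{t:02d}": rng.normal(0.45 + 0.03 * tbi * (t % 3 == 0), 0.05)
          for t in range(20)}

ranking = sorted(((name, *best_cutpoint_kappa(tbi, values)) for name, values in tracts.items()),
                 key=lambda row: row[2], reverse=True)
print("most diagnostic tract:", ranking[0])
```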

  2. Optimized autonomous operations of a 20 K space hydrogen sorption cryocooler

    NASA Astrophysics Data System (ADS)

    Borders, J.; Morgante, G.; Prina, M.; Pearson, D.; Bhandari, P.

    2004-06-01

    activation of the system, particularly useful in case of restarts after inadvertent shutdowns arising from malfunctions in the spacecraft. The capacity of the system to detect J-T plugs was increased to the point that the cooler is able to autonomously distinguish actual contaminant clogging from gas flow reductions due to off-nominal operating conditions. Once a plug is confirmed, the software autonomously energizes, and subsequently turns off, a J-T defrost heater until the clog is removed, bringing the system back to normal operating conditions. In this paper, all the cooler Operational Modes are presented, together with a description of the logic structure of the procedures and the advantages they produce for operations.

  3. Genetic programming assisted stochastic optimization strategies for optimization of glucose to gluconic acid fermentation.

    PubMed

    Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D

    2002-01-01

    This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally, resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.

  4. Simulating Operations at a Spaceport

    NASA Technical Reports Server (NTRS)

    Nevins, Michael R.

    2007-01-01

    SPACESIM is a computer program for detailed simulation of operations at a spaceport. SPACESIM is being developed to greatly improve existing spaceports and to aid in designing, building, and operating future spaceports, given that there is a worldwide trend in spaceport operations from very expensive, research-oriented launches to more frequent commercial launches. From an operational perspective, future spaceports are expected to resemble current airports and seaports, for which it is necessary to resolve issues of safety, security, efficient movement of machinery and people, cost effectiveness, timeliness, and maximizing effectiveness in utilization of resources. Simulations can be performed, for example, to (1) simultaneously analyze launches of reusable and expendable rockets and identify bottlenecks arising from competition for limited resources or (2) perform what-if scenario analyses to identify optimal scenarios prior to making large capital investments. SPACESIM includes an object-oriented discrete-event-simulation engine. (Discrete-event simulation has been used to assess processes at modern seaports.) The simulation engine is built upon the Java programming language for maximum portability. Extensible Markup Language (XML) is used for storage of data to enable industry-standard interchange of data with other software. A graphical user interface facilitates creation of scenarios and analysis of data.
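
    SPACESIM itself is a Java/XML discrete-event engine, but the core idea of queueing launches against scarce resources can be sketched in a few lines. The toy below uses a standard-library event queue to expose the waiting caused by contention for a single launch pad; mission times and resource counts are made up for illustration.

```python
import heapq

def simulate_launch_campaign(missions, n_pads=1):
    """Toy discrete-event simulation: missions compete for a limited number of launch pads.

    `missions` is a list of (arrival_time, pad_hours) tuples; returns the wait time of each mission.
    A stand-in illustration of the event-queue idea, not the SPACESIM engine itself.
    """
    pad_free_at = [0.0] * n_pads                 # min-heap of times at which each pad becomes available
    heapq.heapify(pad_free_at)
    waits = []
    for arrival, duration in sorted(missions):   # process arrivals in time order
        earliest = heapq.heappop(pad_free_at)    # pad that frees up first
        start = max(arrival, earliest)
        waits.append(start - arrival)
        heapq.heappush(pad_free_at, start + duration)
    return waits

# Four missions arriving at hours 0, 1, 2 and 10, each occupying the single pad for 5 hours.
print(simulate_launch_campaign([(0, 5), (1, 5), (2, 5), (10, 5)]))   # -> [0.0, 4.0, 8.0, 5.0]
```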

  5. Overseas Contingency Operations: OMB and DOD Should Revise the Criteria for Determining Eligible Costs and Identify the Costs Likely to Endure Long Term

    DTIC Science & Technology

    2017-01-01

    Why GAO...billion in funding for OCO. While DOD's OCO budget request has included amounts for contingency operations primarily in Iraq and Afghanistan, more

  6. Definitive screening design enables optimization of LC-ESI-MS/MS parameters in proteomics.

    PubMed

    Aburaya, Shunsuke; Aoki, Wataru; Minakuchi, Hiroyoshi; Ueda, Mitsuyoshi

    2017-12-01

    In proteomics, more than 100,000 peptides are generated from the digestion of human cell lysates. Proteome samples have a broad dynamic range in protein abundance; therefore, it is critical to optimize various parameters of LC-ESI-MS/MS to comprehensively identify these peptides. However, there are many parameters for LC-ESI-MS/MS analysis. In this study, we applied definitive screening design to simultaneously optimize 14 parameters in the operation of monolithic capillary LC-ESI-MS/MS to increase the number of identified proteins and/or the average peak area of MS1. The simultaneous optimization enabled the determination of two-factor interactions between LC and MS. Finally, we found two parameter sets of monolithic capillary LC-ESI-MS/MS that increased the number of identified proteins by 8.1% or the average peak area of MS1 by 67%. The definitive screening design would be highly useful for high-throughput analysis of the best parameter set in LC-ESI-MS/MS systems.

  7. Deriving adaptive operating rules of hydropower reservoirs using time-varying parameters generated by the EnKF

    NASA Astrophysics Data System (ADS)

    Feng, Maoyuan; Liu, Pan; Guo, Shenglian; Shi, Liangsheng; Deng, Chao; Ming, Bo

    2017-08-01

    Operating rules have been used widely to decide reservoir operations because of their capacity for coping with uncertain inflow. However, stationary operating rules lack adaptability; thus, under changing environmental conditions, they cause inefficient reservoir operation. This paper derives adaptive operating rules based on time-varying parameters generated using the ensemble Kalman filter (EnKF). A deterministic optimization model is established to obtain optimal water releases, which are further taken as observations of the reservoir simulation model. The EnKF is formulated to update the operating rules sequentially, providing a series of time-varying parameters. To identify the index that dominates the variations of the operating rules, three hydrologic factors are selected: the reservoir inflow, ratio of future inflow to current available water, and available water. Finally, adaptive operating rules are derived by fitting the time-varying parameters with the identified dominant hydrologic factor. China's Three Gorges Reservoir was selected as a case study. Results show that (1) the EnKF has the capability of capturing the variations of the operating rules, (2) reservoir inflow is the factor that dominates the variations of the operating rules, and (3) the derived adaptive operating rules are effective in improving hydropower benefits compared with stationary operating rules. The insightful findings of this study could be used to help adapt reservoir operations to mitigate the effects of changing environmental conditions.
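
    A generic stochastic EnKF update step is sketched below for an ensemble of operating-rule parameters, with the deterministic model's optimal release acting as the observation, as in the record above. The linear release rule, ensemble size, and noise level are illustrative assumptions rather than the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_update(param_ensemble, simulated_release, observed_release, obs_noise_std):
    """One stochastic EnKF update of an ensemble of operating-rule parameters.

    param_ensemble:    (n_ens, n_params) rule parameters, e.g. slope and intercept of a release rule
    simulated_release: (n_ens,) releases produced by each ensemble member's rule
    observed_release:  scalar 'observation', here the optimal release from the deterministic model
    """
    n_ens = param_ensemble.shape[0]
    dp = param_ensemble - param_ensemble.mean(axis=0)      # parameter anomalies
    dy = simulated_release - simulated_release.mean()      # predicted-observation anomalies

    cov_py = dp.T @ dy / (n_ens - 1)                       # cross-covariance, shape (n_params,)
    var_y = dy @ dy / (n_ens - 1) + obs_noise_std ** 2
    gain = cov_py / var_y                                  # Kalman gain, shape (n_params,)

    perturbed_obs = observed_release + rng.normal(0.0, obs_noise_std, n_ens)
    innovation = perturbed_obs - simulated_release
    return param_ensemble + np.outer(innovation, gain)     # updated (time-varying) parameters

# Toy usage: a linear rule release = a * inflow + b, nudged toward the 'optimal' release.
ensemble = np.column_stack([rng.normal(0.5, 0.1, 50), rng.normal(10.0, 2.0, 50)])
inflow, optimal_release = 120.0, 75.0
simulated = ensemble[:, 0] * inflow + ensemble[:, 1]
ensemble = enkf_update(ensemble, simulated, optimal_release, obs_noise_std=2.0)
```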

  8. Structural optimization for joined-wing synthesis

    NASA Technical Reports Server (NTRS)

    Gallman, John W.; Kroo, Ilan M.

    1992-01-01

    The differences between fully stressed and minimum-weight joined-wing structures are identified, and these differences are quantified in terms of weight, stress, and direct operating cost. A numerical optimization method and a fully stressed design method are used to design joined-wing structures. Both methods determine the sizes of 204 structural members, satisfying 1020 stress constraints and five buckling constraints. Monotonic splines are shown to be a very effective way of linking spanwise distributions of material to a few design variables. Both linear and nonlinear analyses are employed to formulate the buckling constraints. With a constraint on buckling, the fully stressed design is shown to be very similar to the minimum-weight structure. It is suggested that a fully stressed design method based on nonlinear analysis is adequate for an aircraft optimization study.

  9. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
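
    The building block of the approach is discrete Monge-Kantorovich transport with quadratic cost. The sketch below solves that problem directly as a small linear program with SciPy; the paper instead approximates the transport map by a pseudo-time flow on a graph, so this is only an illustration of the underlying optimal transport step.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_transport_plan(source_pts, target_pts, mu, nu):
    """Discrete Monge-Kantorovich transport with quadratic cost, solved as a linear program.

    A small dense illustration of the underlying building block; the paper approximates the
    transport map by a pseudo-time flow on a graph rather than solving this LP directly.
    """
    n, m = len(source_pts), len(target_pts)
    cost = np.sum((source_pts[:, None, :] - target_pts[None, :, :]) ** 2, axis=2)  # squared distances

    # Equality constraints: row sums of the plan equal mu, column sums equal nu.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([mu, nu])

    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun

# Transport a uniform measure on three source points to a uniform measure on three target points.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[2.0, 2.0], [3.0, 2.0], [2.0, 3.0]])
plan, total_cost = optimal_transport_plan(src, dst, np.full(3, 1 / 3), np.full(3, 1 / 3))
print(np.round(plan, 3), total_cost)
```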

  10. Identifying the optimal depth for mussel suspended culture in shallow and turbid environments

    NASA Astrophysics Data System (ADS)

    Filgueira, Ramón; Grant, Jon; Petersen, Jens Kjerulf

    2018-02-01

    Bivalve aquaculture is commonly carried out in shallow water systems, which are susceptible to resuspension of benthic particulate matter by natural processes such as tidal currents, winds and wave action, as well as human activity. The resuspended material can alter the availability of food particles for cultured bivalves. The effect of resuspended material on bivalve bioenergetics and growth is a function of the quality and concentration of resuspended particles and background diet in the water column. Given the potential for positive or negative impacts on bivalve growth and consequently on farm productivity, farmers must position the cultured biomass at the appropriate depth to benefit from or mitigate the impact of this resuspended material. A combination of field measurements, a 1-D vertical resuspension model and a bioenergetic model for mussels based on Dynamic Energy Budget (DEB) theory has been carried out for a mussel farm in Skive Fjord, a shallow Danish fjord, with the aim of identifying the optimal depth for culture. Observations at the farm location revealed that horizontal advection is more important than vertical resuspension during periods with predominant Eastern winds. In addition, high background seston in the water column reduces the impact of resuspension on the available food for mussels. The simulation of different scenarios in terms of food availability suggested minimal effects of resuspension on mussel growth. Based on this finding and the fact that phytoplankton concentration, the main food source for mussels, is usually higher in the upper part of the water column, suspended culture in the top 3 m of the water column seems to be the optimal practice to produce mussels in Skive Fjord.

  11. Nonlinear bioheat transfer models and multi-objective numerical optimization of the cryosurgery operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudryashov, Nikolay A.; Shilnikov, Kirill E.

    Numerical computation of the three-dimensional problem of freezing interface propagation during cryosurgery, coupled with multi-objective optimization methods, is used in order to improve the efficiency and safety of cryosurgery operations. Prostate cancer treatment and cutaneous cryosurgery are considered. The heat transfer in soft tissue during thermal exposure to low temperature is described by the Pennes bioheat model and is coupled with an enthalpy method for blurred phase change computations. The finite volume method combined with the control volume approximation of the heat fluxes is applied for the cryosurgery numerical modeling on tumor tissue of a quite arbitrary shape. The flux relaxation approach is used to improve the stability of the explicit finite difference schemes. The method of mounting additional heating elements is studied as an approach to control the propagation of the cellular necrosis front. Whereas the undestructed tumor tissue and destructed healthy tissue volumes are considered as objective functions, the locations of additional heating elements in cutaneous cryosurgery and of cryotips in prostate cancer cryotreatment are considered as objective variables in the multi-objective problem. A quasi-gradient method is proposed for searching for the Pareto front segments as solutions of the multi-objective optimization problem.

  12. Quantitative Assessment of In-solution Digestion Efficiency Identifies Optimal Protocols for Unbiased Protein Analysis*

    PubMed Central

    León, Ileana R.; Schwämmle, Veit; Jensen, Ole N.; Sprenger, Richard R.

    2013-01-01

    The majority of mass spectrometry-based protein quantification studies uses peptide-centric analytical methods and thus strongly relies on efficient and unbiased protein digestion protocols for sample preparation. We present a novel objective approach to assess protein digestion efficiency using a combination of qualitative and quantitative liquid chromatography-tandem MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein fractions. We evaluated nine trypsin-based digestion protocols, based on standard in-solution or on spin filter-aided digestion, including new optimized protocols. We investigated various reagents for protein solubilization and denaturation (dodecyl sulfate, deoxycholate, urea), several trypsin digestion conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents before analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative liquid chromatography-tandem MS workflow quantified over 3700 distinct peptides with 96% completeness between all protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows for efficient, unbiased generation and recovery of peptides from all protein classes, including membrane proteins. This deoxycholate-assisted protocol was also optimal for spin filter-aided digestions as compared with existing methods. PMID:23792921

  13. Optimizing Wellfield Operation in a Variable Power Price Regime.

    PubMed

    Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus

    2016-01-01

    Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m³). However, power systems in most countries are moving in the direction of deregulated markets, and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells rather than minimizing energy use itself. We estimated the EFP of pumped water as a function of wellfield pumping rate (the EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a stochastic dynamic programming (SDP) framework to minimize the total cost of operating the combined wellfield-storage-demand system over the course of a 2-year planning period, based on a time series of observed prices on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between each scenario and a constant-rate pumping benchmark. Minor savings up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings up to 40%. Key factors determining the potential cost savings obtained by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate and the capacity of available storage facilities.
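
    The sketch below is a deterministic, heavily simplified version of the dynamic program described above: discretized storage states, hourly prices, and an assumed EFP-Q curve, with pumping rates chosen by backward recursion to meet demand at minimum energy cost. All numerical inputs are illustrative and it omits the stochastic elements of the study's SDP.

```python
import numpy as np

def optimize_pumping(prices, demand, efp_of_q, q_levels, storage_levels):
    """Backward dynamic program: choose hourly pumping rates to meet demand at minimum energy cost.

    prices: hourly power price (currency/MWh); demand: hourly water demand (m3);
    efp_of_q: energy footprint (MWh/m3) as a function of pumping rate, i.e. the EFP-Q relationship.
    A deterministic toy version of the SDP described in the record; all inputs are illustrative.
    """
    T, n_s = len(prices), len(storage_levels)
    value = np.zeros(n_s)                                 # terminal value: no cost beyond the horizon
    policy = np.zeros((T, n_s), dtype=int)                # index of the chosen pumping rate per state

    for t in range(T - 1, -1, -1):
        new_value = np.full(n_s, np.inf)
        for si, s in enumerate(storage_levels):
            for qi, q in enumerate(q_levels):
                s_next = s + q - demand[t]
                if s_next < storage_levels[0] or s_next > storage_levels[-1]:
                    continue                              # infeasible: storage would leave its bounds
                sj = int(np.argmin(np.abs(storage_levels - s_next)))   # nearest discrete storage state
                cost = prices[t] * efp_of_q(q) * q + value[sj]
                if cost < new_value[si]:
                    new_value[si], policy[t, si] = cost, qi
        value = new_value
    return policy, value

# Toy run: 24 hourly prices, constant demand, and a mildly increasing EFP-Q curve.
rng = np.random.default_rng(6)
prices = 40.0 + 20.0 * rng.random(24)
demand = np.full(24, 100.0)
policy, value = optimize_pumping(prices, demand,
                                 efp_of_q=lambda q: 5e-4 * (1.0 + q / 1000.0),
                                 q_levels=np.array([0.0, 100.0, 200.0]),
                                 storage_levels=np.linspace(0.0, 1000.0, 11))
```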

  14. REopt Improves the Operations of Alcatraz's Solar PV-Battery-Diesel Hybrid System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olis, Daniel R; Walker, H. A; Van Geet, Otto D

    This poster identifies operations improvement strategies for a photovoltaic (PV)-battery-diesel hybrid system at the National Park Service's Alcatraz Island using NREL's REopt analysis tool. The current 'cycle charging' strategy results in significant curtailing of energy production from the PV array, requiring excessive diesel use, while also incurring high wear on batteries without benefit of improved efficiency. A simple 'load following' strategy results in near optimal operating cost reduction.

  15. New knowledge-based genetic algorithm for excavator boom structural optimization

    NASA Astrophysics Data System (ADS)

    Hua, Haiyan; Lin, Shuwen

    2014-03-01

    Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to effectively solve the excavator boom structural optimization problem. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize the shallow and deep implicit constraint knowledge to guide the optimal search of the genetic algorithm cyclically. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. Then, new knowledge-based selection, crossover and mutation operators are proposed to integrate the optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. Comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and search ability more markedly than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, combining multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.

  16. Estimates of Optimal Operating Conditions for Hydrogen-Oxygen Cesium-Seeded Magnetohydrodynamic Power Generator

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Nichols, L. D.

    1977-01-01

    The values of percent seed, oxygen-to-fuel ratio, combustion pressure, Mach number, and magnetic field strength that maximize either the electrical conductivity or the power density at the entrance of an MHD power generator were obtained. The working fluid is the combustion product of H2 and O2 seeded with CsOH. The ideal theoretical segmented Faraday generator, along with an empirical form found from correlating the data of many experimenters working with generators of different sizes, electrode configurations, and working fluids, is investigated. The conductivity and power densities are optimized at a seed fraction of 3.5 mole percent and an oxygen-to-hydrogen weight ratio of 7.5. The optimum values of combustion pressure and Mach number depend on the operating magnetic field strength.

  17. Multi-Objective Optimization of Moving-magnet Linear Oscillatory Motor Using Response Surface Methodology with Quantum-Behaved PSO Operator

    NASA Astrophysics Data System (ADS)

    Lei, Meizhen; Wang, Liqiang

    2018-01-01

    To reduce manufacturing difficulty and increase the magnetic thrust density, a moving-magnet linear oscillatory motor (MMLOM) without inner stators was proposed. To obtain the optimal design of maximum electromagnetic thrust with minimal permanent magnet material, the 3D finite element analysis (FEA) model of the MMLOM was first built and verified by comparison with prototype experiment results. The influence of the permanent magnet (PM) design parameters on the electromagnetic thrust was then systematically analyzed by the 3D FEA. Secondly, response surface methodology (RSM) was employed to build a response surface model of the new MMLOM, which provides an analytical model of the PM volume and thrust. A multi-objective optimization method for the PM design parameters, using RSM with a quantum-behaved PSO (QPSO) operator, was then proposed, together with a way to choose the best PM design parameters from the multi-objective solution sets. Finally, the 3D FEA results of the optimal design candidates were compared. The comparison showed that the proposed method can obtain the best combination of geometric parameters, reducing the PM volume while increasing the thrust.

  18. Sootblowing optimization for improved boiler performance

    DOEpatents

    James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J.

    2012-12-25

    A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.

  19. Sootblowing optimization for improved boiler performance

    DOEpatents

    James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J

    2013-07-30

    A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.

  20. Flight Test of an Adaptive Configuration Optimization System for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Georgie, Jennifer; Barnicki, Joseph S.

    1999-01-01

    A NASA Dryden Flight Research Center program explores the practical application of real-time adaptive configuration optimization for enhanced transport performance on an L-1011 aircraft. This approach is based on calculation of incremental drag from forced-response, symmetric, outboard aileron maneuvers. In real-time operation, the symmetric outboard aileron deflection is directly optimized, and the horizontal stabilator and angle of attack are indirectly optimized. A flight experiment has been conducted from an onboard research engineering test station, and flight research results are presented herein. The optimization system has demonstrated the capability of determining the minimum drag configuration of the aircraft in real time. The drag-minimization algorithm is capable of identifying drag to approximately a one-drag-count level. Optimizing the symmetric outboard aileron position realizes a drag reduction of 2-3 drag counts (approximately 1 percent). Algorithm analysis of maneuvers indicate that two-sided raised-cosine maneuvers improve definition of the symmetric outboard aileron drag effect, thereby improving analysis results and consistency. Ramp maneuvers provide a more even distribution of data collection as a function of excitation deflection than raised-cosine maneuvers provide. A commercial operational system would require airdata calculations and normal output of current inertial navigation systems; engine pressure ratio measurements would be optional.

  1. State-of-the-Art Diagnosis and Treatment of Melanoma: Optimal Multidetector Computed Tomographic Practice to Identify Metastatic Disease and Review of Innovative Therapeutic Agents.

    PubMed

    Jones, Blake C; Lipson, Evan J; Childers, Brandon; Fishman, Elliot K; Johnson, Pamela T

    The incidence of melanoma has risen dramatically over the past several decades. Oncologists rely on the ability of radiologists to identify subtle radiographic changes representing metastatic and recurrent melanoma in uncommon locations on multidetector computed tomography (MDCT) as the front-line imaging surveillance tool. To accomplish this goal, MDCT acquisition and display must be optimized and radiologist interpretation and search patterns must be tailored to identify the unique and often subtle metastatic lesions of melanoma. This article describes MDCT acquisition and display techniques that optimize the visibility of melanoma lesions, such as high-contrast display windows and multiplanar reconstructions. In addition, innovative therapies for melanoma, such as immunotherapy and small-molecule therapy, have altered clinical management and outcomes and have also changed the spectrum of therapeutic complications that can be detected on MDCT. Recent advances in melanoma therapy and potential complications that the radiologist can identify on MDCT are reviewed.

  2. Evolution of Query Optimization Methods

    NASA Astrophysics Data System (ADS)

    Hameurlain, Abdelkader; Morvan, Franck

    Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).

  3. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios

  4. SPECT System Optimization Against A Discrete Parameter Space

    PubMed Central

    Meng, L. J.; Li, N.

    2013-01-01

    In this paper, we present an analytical approach for optimizing the design of a static SPECT system or optimizing the sampling strategy with variable/adaptive SPECT imaging hardware against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced an artificial concept of virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of the virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) could be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD could provide a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and help to identify the system configuration or sampling strategy that leads to an optimum imaging performance. Although we are using SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609

  5. Operational and Strategic Implementation of Dynamic Line Rating for Optimized Wind Energy Generation Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentle, Jake Paul

    2016-12-01

    One primary goal of rendering today’s transmission grid “smarter” is to optimize and better manage its power transfer capacity in real time. Power transfer capacity is affected by three main elements: stability, voltage limits, and thermal ratings. All three are critical, but thermal ratings represent the greatest opportunity to quickly, reliably and economically utilize the grid’s true capacity. With the “Smarter Grid”, new solutions have been sought to give operators a better grasp on real time conditions, allowing them to manage and extend the usefulness of existing transmission infrastructure in a safe and reliable manner. The objective of the INL Wind Program is to provide industry a Dynamic Line Rating (DLR) solution that is state of the art as measured by cost, accuracy and dependability, to enable human operators to make informed decisions and take appropriate actions without human or system overloading and impacting the reliability of the grid. In addition to mitigating transmission line congestion to better integrate wind, DLR also offers the opportunity to improve the grid with optimized utilization of transmission lines to relieve congestion in general. As wind-generated energy has become a bigger part of the nation’s energy portfolio, researchers have learned that wind not only turns turbine blades to generate electricity, but can cool transmission lines and increase transfer capabilities significantly, sometimes up to 60 percent. INL’s DLR development supports EERE and The Wind Energy Technology Office’s goals by informing system planners and grid operators of available transmission capacity, beyond typical Static Line Ratings (SLR). SLRs are based on a fixed set of conservative environmental conditions to establish a limit on the amount of current lines can safely carry without overheating. Using commercially available weather monitors mounted on industry informed custom brackets developed by INL in combination with

  6. Increase of Gas-Turbine Plant Efficiency by Optimizing Operation of Compressors

    NASA Astrophysics Data System (ADS)

    Matveev, V.; Goriachkin, E.; Volkov, A.

    2018-01-01

    The article presents an optimization method for improving the working process of axial compressors of gas turbine engines. The developed method automatically searches for the best compressor blade geometry using the optimization software IOSO and the CFD software NUMECA Fine/Turbo. The compressor parameters were calculated for the working point and the stall point of its performance map at each optimization step. The study was carried out for a seven-stage high-pressure compressor and three-stage low-pressure compressors. As a result of the optimization, an efficiency improvement was achieved for all investigated compressors.

  7. Optimal architectures for long distance quantum communication.

    PubMed

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D; Jiang, Liang

    2016-02-15

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥ 1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speedup the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances.

  8. Optimal architectures for long distance quantum communication

    PubMed Central

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D.; Jiang, Liang

    2016-01-01

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speedup the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances. PMID:26876670

  9. Optimal architectures for long distance quantum communication

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Li, Linshu; Kim, Jungsang; Lütkenhaus, Norbert; Lukin, Mikhail D.; Jiang, Liang

    2016-02-01

    Despite the tremendous progress of quantum cryptography, efficient quantum communication over long distances (≥1000 km) remains an outstanding challenge due to fiber attenuation and operation errors accumulated over the entire communication distance. Quantum repeaters (QRs), as a promising approach, can overcome both photon loss and operation errors, and hence significantly speedup the communication rate. Depending on the methods used to correct loss and operation errors, all the proposed QR schemes can be classified into three categories (generations). Here we present the first systematic comparison of three generations of quantum repeaters by evaluating the cost of both temporal and physical resources, and identify the optimized quantum repeater architecture for a given set of experimental parameters for use in quantum key distribution. Our work provides a roadmap for the experimental realizations of highly efficient quantum networks over transcontinental distances.

  10. An Optimization Approach to Coexistence of Bluetooth and Wi-Fi Networks Operating in ISM Environment

    NASA Astrophysics Data System (ADS)

    Klajbor, Tomasz; Rak, Jacek; Wozniak, Jozef

    The unlicensed ISM band is used by various wireless technologies. Therefore, issues related to ensuring the required efficiency and quality of operation of coexisting networks become essential. The paper addresses the problem of mutual interference between IEEE 802.11b transmitters (commercially named Wi-Fi) and Bluetooth (BT) devices. An optimization approach to modeling the topology of BT scatternets is introduced, resulting in more efficient utilization of an ISM environment consisting of BT and Wi-Fi networks. To achieve this, an integer linear programming approach is proposed. Example results presented in the paper illustrate significant benefits of the proposed modeling strategy.

  11. Why don't Practitioners use Reservoir Optimization Methods? Results from a Survey of UK Water Managers

    NASA Astrophysics Data System (ADS)

    Dobson, B.; Pianosi, F.; Wagener, T.

    2016-12-01

    Extensive scientific literature exists on the study of how operation decisions in water resource systems can be made more effectively through the use of optimization methods. However, to the best of the authors' knowledge, there is little in the literature on the implementation of these optimization methods by practitioners. We have performed a survey among UK reservoir operators to assess the current state of method implementation in practice. We also ask questions to assess the potential for implementation of operation optimization. This will help academics to target industry in their current research, identify any misconceptions in industry about the area and open new branches of research for which there is an unsatisfied demand. The UK is a good case study because the regulatory framework is changing to impose "no build" solutions for supply issues, as well as planning across entire water resource systems rather than individual components. Additionally there is a high appetite for efficiency due to the water industry's privatization and most operators are part of companies that control multiple water resources, increasing the potential for cooperation and coordination.

  12. Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?

    ERIC Educational Resources Information Center

    Ravinder, Handanhal V.

    2013-01-01

    A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
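
    For readers who want to see the mechanics, the sketch below reproduces the idea in Python rather than Excel: a bounded scalar optimizer from scipy stands in for Solver and searches for the smoothing constant that minimizes one-step-ahead mean squared error on a small made-up demand series (the article's spreadsheet data are not reproduced here).

        # Illustrative sketch only: scipy's bounded optimizer in place of Excel Solver,
        # applied to invented demand data.
        import numpy as np
        from scipy.optimize import minimize_scalar

        demand = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118],
                          dtype=float)

        def ses_mse(alpha, y):
            """Mean squared error of one-step-ahead simple exponential smoothing."""
            forecast = y[0]                    # initialize with the first observation
            sq_errors = []
            for obs in y[1:]:
                sq_errors.append((obs - forecast) ** 2)
                forecast = alpha * obs + (1.0 - alpha) * forecast
            return np.mean(sq_errors)

        # Search 0 <= alpha <= 1, analogous to what a non-linear spreadsheet optimizer does.
        result = minimize_scalar(ses_mse, bounds=(0.0, 1.0), args=(demand,), method="bounded")
        print(f"optimal alpha = {result.x:.3f}, MSE = {result.fun:.2f}")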

  13. Identifying optimal postmarket surveillance strategies for medical and surgical devices: implications for policy, practice and research.

    PubMed

    Gagliardi, Anna R; Umoquit, Muriah; Lehoux, Pascale; Ross, Sue; Ducey, Ariel; Urbach, David R

    2013-03-01

    Non-drug technologies offer many benefits, but have been associated with adverse events, prompting calls for improved postmarket surveillance. There is little empirical research to guide the development of such a system. The purpose of this study was to identify optimal postmarket surveillance strategies for medical and surgical devices. Qualitative methods were used for sampling, data collection and analysis. Stakeholders from Canada and the USA representing different roles and perspectives were first interviewed to identify examples and characteristics of different surveillance strategies. These stakeholders and others they recommended were then assembled at a 1-day nominal group meeting to discuss and prioritise the components of a postmarket device surveillance system, and research needed to achieve such a system. Consultations were held with 37 participants, and 47 participants attended the 1-day meeting. They recommended a multicomponent system including reporting by facilities, clinicians and patients, supported with some external surveillance for validation and real-time trials for high-risk devices. Many considerations were identified that constitute desirable characteristics of, and means by which to implement, such a system. An overarching network was envisioned to broker linkages, establish a shared minimum dataset, and support communication and decision making. Numerous research questions were identified, which could be pursued in tandem with phased implementation of the system. These findings provide unique guidance for establishing a device safety network that is based on existing initiatives, and could be expanded and evaluated in a prospective, phased fashion as it was developed.

  14. Optimal design of reverse osmosis module networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maskan, F.; Wiley, D.E.; Johnston, L.P.M.

    2000-05-01

    The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, capital cost for the process units, and operating costs associated with energy consumption and maintenance. Optimization of several dual-stage reverse osmosis systems was investigated and compared. It was found that optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.

  15. Identifying lubricant options for compressor bearing designs

    NASA Astrophysics Data System (ADS)

    Karnaz, J.; Seeton, C.; Dixon, L.

    2017-08-01

    Today’s refrigeration and air conditioning market is not only driven by the environmental aspects of the refrigerants, but also by the energy efficiency and reliability of system operation. Numerous types of compressor designs are used in refrigeration and air conditioning applications, which means that different bearings are used and, in some cases, multiple bearing types within a single compressor. Since only one lubricant is used, it is important to try to optimize the lubricant to meet the various demands and requirements for operation. This optimization entails investigating different types of lubricant chemistries, viscosities, and various formulation options. What makes evaluating these options more challenging is the refrigerant, which changes the properties of the lubricant delivered to the bearing. Once the lubricant and refrigerant interaction is understood through various test methods, work can start on collaborating with compressor engineers on identifying the lubricant chemistry and formulation options. These interaction properties are important to the design engineer to make decisions on the adequacy of the lubricant before compressor tests are started. This paper will discuss the process to evaluate lubricants for various types of compressors and bearing designs, with a focus on what’s needed for current refrigerant trends. In addition, the paper will show how the lubricant chemistry choice can be manipulated through understanding of the bearing design and knowledge of interaction with the refrigerant to maximize performance. Emphasis will be placed on evaluation of synthetic lubricants for both natural and synthetic low GWP refrigerants.

  16. A pilot study for determining the optimal operation condition for simultaneously controlling the emissions of PCDD/Fs and PAHs from the iron ore sintering process.

    PubMed

    Chen, Yu-Cheng; Tsai, Perng-Jy; Mou, Jin-Luh; Kuo, Yu-Chieh; Wang, Shih-Min; Young, Li-Hao; Wang, Ya-Fen

    2012-09-01

    In this study, the cost-benefit analysis technique was developed and incorporated into the Taguchi experimental design to determine the optimal operation combination, with the aim of providing a technical solution for controlling both emissions of PCDD/Fs and PAHs and increasing both the sinter productivity (SP) and sinter strength (SS) simultaneously. Four operating parameters, including the water content, suction pressure, bed height, and type of hearth layer, were selected, and all experimental campaigns were conducted on a pilot-scale sinter pot to simulate various sintering operating conditions of a real-scale sinter plant. The resultant optimal combination could reduce the total carcinogenic emissions arising from both emissions of PCDD/Fs and PAHs by 49.8%, and increase the sinter benefit associated with the increase in both SP and SS by 10.1%, in comparison with the operation condition currently used in the real plant. The ANOVA results indicate that the suction pressure was the most dominant parameter in determining the optimal operation combination. The above result was theoretically plausible since the higher suction pressure provided more oxygen content, leading to the decrease in both PCDD/F and PAH emissions. It should be noted, however, that the results obtained from the present study were based on pilot-scale experiments; conducting confirmation tests in a real-scale plant is still necessary in the future. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Optimizing Perioperative Decision Making: Improved Information for Clinical Workflow Planning

    PubMed Central

    Doebbeling, Bradley N.; Burton, Matthew M.; Wiebke, Eric A.; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph

    2012-01-01

    Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40–70% of hospital revenues and 30–40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have potential for improving access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders, in order to examine the feasibility of applying combinatorial optimization software to solve some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our modeled solutions generated feasible solutions that varied as expected, based on resource and policy assumptions, and found better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction. PMID:23304284

  18. Optimizing perioperative decision making: improved information for clinical workflow planning.

    PubMed

    Doebbeling, Bradley N; Burton, Matthew M; Wiebke, Eric A; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph

    2012-01-01

    Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40-70% of hospital revenues and 30-40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have potential for improving access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders, in order to examine the feasibility of applying combinatorial optimization software to solve some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our modeled solutions generated feasible solutions that varied as expected, based on resource and policy assumptions, and found better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction.

  19. Optimal reproducibility of gated sestamibi and thallium myocardial perfusion study left ventricular ejection fractions obtained on a solid-state CZT cardiac camera requires operator input.

    PubMed

    Cherk, Martin H; Ky, Jason; Yap, Kenneth S K; Campbell, Patrina; McGrath, Catherine; Bailey, Michael; Kalff, Victor

    2012-08-01

    To evaluate the reproducibility of serial re-acquisitions of gated Tl-201 and Tc-99m sestamibi left ventricular ejection fraction (LVEF) measurements obtained on a new generation solid-state cardiac camera system during myocardial perfusion imaging and the importance of manual operator optimization of left ventricular wall tracking. Resting blinded automated (auto) and manual operator optimized (opt) LVEF measurements were measured using ECT toolbox (ECT) and Cedars-Sinai QGS software in two separate cohorts of 55 Tc-99m sestamibi (MIBI) and 50 thallium (Tl-201) myocardial perfusion studies (MPS) acquired in both supine and prone positions on a cadmium zinc telluride (CZT) solid-state camera system. Resting supine and prone automated LVEF measurements were similarly obtained in a further separate cohort of 52 gated cardiac blood pool scans (GCBPS) for validation of methodology and comparison. Bland-Altman, chi-squared, and Levene's equality of variance tests were used to analyse the resultant data comparisons. For all radiotracer and software combinations, manual checking and optimization of valve planes (+/- centre radius with ECT software) resulted in significant improvement in MPS LVEF reproducibility that approached that of planar GCBPS. No difference was demonstrated between optimized MIBI/Tl-201 QGS and planar GCBPS LVEF reproducibility (P = .17 and P = .48, respectively). ECT required significantly more manual optimization compared to QGS software in both supine and prone positions independent of radiotracer used (P < .02). Reproducibility of gated sestamibi and Tl-201 LVEF measurements obtained during myocardial perfusion imaging with ECT toolbox or QGS software packages using a new generation solid-state cardiac camera with improved image quality approaches that of planar GCBPS; however, it requires visual quality control and operator optimization of left ventricular wall tracking for best results. Using this superior cardiac technology, Tl-201

  20. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  1. Identifying optimal remotely-sensed variables for ecosystem monitoring in Colorado Plateau drylands

    USGS Publications Warehouse

    Poitras, Travis; Villarreal, Miguel; Waller, Eric K.; Nauman, Travis; Miller, Mark E.; Duniway, Michael C.

    2018-01-01

    Water-limited ecosystems often recover slowly following anthropogenic or natural disturbance. Multitemporal remote sensing can be used to monitor ecosystem recovery after disturbance; however, dryland vegetation cover can be challenging to accurately measure due to sparse cover and spectral confusion between soils and non-photosynthetic vegetation. With the goal of optimizing a monitoring approach for identifying both abrupt and gradual vegetation changes, we evaluated the ability of Landsat-derived spectral variables to characterize surface variability of vegetation cover and bare ground across a range of vegetation community types. Using three-year composites of Landsat data, we modeled relationships between spectral information and field data collected at monitoring sites near Canyonlands National Park, UT. We also developed multiple regression models to assess improvement over single variables. We found that for all vegetation types, percent cover of bare ground could be accurately modeled with single indices that included a combination of red and shortwave infrared bands, while near infrared-based vegetation indices like NDVI worked best for quantifying tree cover and total live vegetation cover in woodlands. We applied four models to characterize the spatial distribution of putative grassland ecological states across our study area, illustrating how this approach can be implemented to guide dryland ecosystem management.
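
    As a rough illustration of the modeling step described above (not the USGS workflow or data), the sketch below regresses synthetic field-measured cover on an NDVI-style index; the reflectance values and the cover relationship are invented.

        # Illustration only: synthetic reflectances and cover values stand in for the
        # Landsat composites and field plots used in the study.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 60
        red = rng.uniform(0.05, 0.30, n)           # red-band reflectance
        nir = red + rng.uniform(0.02, 0.35, n)     # near-infrared reflectance
        ndvi = (nir - red) / (nir + red)           # normalized difference vegetation index

        # Pretend field-measured live vegetation cover (%) scales with NDVI plus noise.
        cover = 120 * ndvi + rng.normal(0, 5, n)

        model = LinearRegression().fit(ndvi.reshape(-1, 1), cover)
        r2 = model.score(ndvi.reshape(-1, 1), cover)
        print(f"cover ~ {model.intercept_:.1f} + {model.coef_[0]:.1f} * NDVI  (R^2 = {r2:.2f})")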

  2. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.

  3. Ergonomics perspective for identifying and reducing internal operative flow disruption for laparoscopic urological surgery.

    PubMed

    Al-Hakim, Latif; Xiao, Jiaquan; Sengupta, Shomik

    2017-12-01

    The aim of this study is to examine operative flow disruption that occurs inside the surgical field (internal operative flow disruption (OFD)) during urological laparoscopies, and to relate those events to the external ergonomics environment in terms of monitor location, level of the instruments' handles, and location of surgical team members. To the best of our knowledge, this is the first study of its kind. A combination of real and video-aided observational study was conducted in the operating rooms at hospitals in Australia and China. Brainstorming sessions were first conducted to identify the main internal OFD events, and the observable reasons, potential external, and latent ergonomic factors were listed. A prospective observational study was then conducted. The observer's records and the related video records of internal surgical fields were analysed. Procedures were categorised into groups based on similarity in ergonomics environment. The mapping process revealed 39 types of internal OFD events resulting from six causes. A total of 24 procedures were selected and arranged into two groups, each with twelve procedures. Group A was carried out under a satisfactory ergonomics environment, while Group B was conducted under an unsatisfactory ergonomics environment. A total of 1178 OFD events were detected, delaying the total observed operative time (2966 min) by 220 min (7.43%). The average OFD/h in group A was less than 15, while in group B it was about 29. There are two main latent ergonomics factors affecting the surgeon's performance: non-physiological posture and long-period static posture. The delays and number of internal OFD events were nearly doubled where procedures were conducted under an unsatisfactory external ergonomics environment. Some events, such as stopping the operation and irrelevant conversations during long procedures, may have a positive influence on the surgeon's performance.

  4. Process optimization of helium cryo plant operation for SST-1 superconducting magnet system

    NASA Astrophysics Data System (ADS)

    Panchal, P.; Panchal, R.; Patel, R.; Mahesuriya, G.; Sonara, D.; Srikanth G, L. N.; Garg, A.; Christian, D.; Bairagi, N.; Sharma, R.; Patel, K.; Shah, P.; Nimavat, H.; Purwar, G.; Patel, J.; Tanna, V.; Pradhan, S.

    2017-02-01

    Several plasma discharge campaigns have been carried out in the steady state superconducting tokamak (SST-1). SST-1 has a toroidal field (TF) and poloidal field (PF) superconducting magnet system (SCMS). The TF coil system is cooled to 4.5 - 4.8 K at 1.5 - 1.7 bar(a) under two-phase flow conditions using a 1.3 kW helium cryo plant. Experience revealed that the PF coils demand higher pressure heads even at lower temperatures in comparison to the TF coils because of their longer hydraulic path lengths. Thermal runaway was observed within the PF coils because of a single common control valve for all PF coils in a distribution system having non-uniform lengths. Thus it became routine practice to stop the cooling of the PF path and continue only TF cooling at an SCMS inlet temperature of ˜ 14 K. In order to achieve uniform cooldown, a different control logic was adopted to make the cryo system stable. In the adopted control logic, the SCMS are cooled down to 80 K at a constant inlet pressure of 9 bar(a). After authorization of turbine A/B, the SCMS inlet pressure is gradually controlled by the refrigeration J-T valve to achieve a stable operation window for the cryo system. This paper presents the process optimization of cryo plant operation for the SST-1 SCMS.

  5. Satellite image collection optimization

    NASA Astrophysics Data System (ADS)

    Martin, William

    2002-09-01

    Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical for both satisfying customer orders and building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of "what if" adjustments to an image collection plan. Used for both long range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite.

  6. Diagnostic performance of BMI percentiles to identify adolescents with metabolic syndrome.

    PubMed

    Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C

    2014-02-01

    To compare the diagnostic performance of the Centers for Disease Control and Prevention (CDC) and FITNESSGRAM (FGram) BMI standards for quantifying metabolic risk in youth. Adolescents in the NHANES (n = 3385) were measured for anthropometric variables and metabolic risk factors. BMI percentiles were calculated, and youth were categorized by weight status (using CDC and FGram thresholds). Participants were also categorized by presence or absence of metabolic syndrome. The CDC and FGram standards were compared by prevalence of metabolic abnormalities, various diagnostic criteria, and odds of metabolic syndrome. Receiver operating characteristic curves were also created to identify optimal BMI percentiles to detect metabolic syndrome. The prevalence of metabolic syndrome in obese youth was 19% to 35%, compared with <2% in the normal-weight groups. The odds of metabolic syndrome for obese boys and girls were 46 to 67 and 19 to 22 times greater, respectively, than for normal-weight youth. The receiver operating characteristic analyses identified optimal thresholds similar to the CDC standards for boys and the FGram standards for girls. Overall, BMI thresholds were more strongly associated with metabolic syndrome in boys than in girls. Both the CDC and FGram standards are predictive of metabolic syndrome. The diagnostic utility of the CDC thresholds outperformed the FGram values for boys, whereas FGram standards were slightly better thresholds for girls. The use of a common set of thresholds for school and clinical applications would provide advantages for public health and clinical research and practice.
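
    The ROC step can be sketched as follows; the BMI percentiles and outcome labels are simulated stand-ins for the NHANES data, and Youden's J is used here as one common rule for picking the "optimal" threshold.

        # Illustration only: simulated BMI percentiles, with cases skewed higher.
        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(1)
        bmi_controls = rng.normal(65, 20, 500).clip(1, 99)    # no metabolic syndrome
        bmi_cases = rng.normal(92, 8, 60).clip(1, 99)         # metabolic syndrome
        scores = np.concatenate([bmi_controls, bmi_cases])
        labels = np.concatenate([np.zeros(500), np.ones(60)])

        fpr, tpr, thresholds = roc_curve(labels, scores)
        youden_j = tpr - fpr                                  # Youden's J per threshold
        best = np.argmax(youden_j)
        print(f"AUC = {roc_auc_score(labels, scores):.2f}")
        print(f"optimal BMI percentile threshold ~ {thresholds[best]:.1f} "
              f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")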

  7. Optimizing transformations of stencil operations for parallel cache-based architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassetti, F.; Davis, K.

    This paper describes a new technique for optimizing serial and parallel stencil- and stencil-like operations for cache-based architectures. This technique takes advantage of the semantic knowledge implicit in stencil-like computations. The technique is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor of two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using 1-D data partition. For the parallel case, both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case. However, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
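
    The transformation itself targets compiled codes, where the cache behavior matters; the NumPy sketch below only illustrates the shape of a 1-D row tiling of the 5-point Jacobi sweep, with an arbitrary tile size and a made-up right-hand side.

        # Structural illustration of 1-D tiling for a 5-point Jacobi sweep (the cache
        # benefit only materializes in a compiled implementation such as the paper's C codes).
        import numpy as np

        def jacobi_tiled(u, f, h, sweeps, tile_rows=64):
            """Jacobi sweeps for the 2-D Poisson problem (laplacian(u) = f), row-tiled."""
            n = u.shape[0]
            for _ in range(sweeps):
                new = u.copy()
                # 1-D tiling over interior rows: each tile is meant to stay cache-resident
                # while the 5-point stencil is applied to it.
                for start in range(1, n - 1, tile_rows):
                    stop = min(start + tile_rows, n - 1)
                    new[start:stop, 1:-1] = 0.25 * (
                        u[start - 1:stop - 1, 1:-1] + u[start + 1:stop + 1, 1:-1]
                        + u[start:stop, :-2] + u[start:stop, 2:]
                        - h * h * f[start:stop, 1:-1])
                u = new
            return u

        n = 256
        u = jacobi_tiled(np.zeros((n, n)), np.ones((n, n)), h=1.0 / (n - 1), sweeps=10)
        print("center value after 10 sweeps:", float(u[n // 2, n // 2]))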

  8. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
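
    The controls/targets/cost structure can be illustrated with a toy targeting problem; the sketch below uses scipy's SLSQP solver as a stand-in for NPSOL and a simple ballistic-range model with made-up numbers, not an IPOST trajectory model.

        # Toy analogue of controls (launch speed, flight-path angle), a target (downrange
        # distance) and a cost function (speed), solved with SLSQP in place of NPSOL.
        import numpy as np
        from scipy.optimize import minimize

        G = 9.81                 # m/s^2
        TARGET_RANGE = 1.0e4     # m, the dependent "target" quantity

        def downrange(controls):
            """Vacuum ballistic range for controls = [speed (m/s), angle (rad)]."""
            v, gamma = controls
            return v ** 2 * np.sin(2.0 * gamma) / G

        constraints = [{"type": "eq", "fun": lambda c: downrange(c) - TARGET_RANGE}]
        bounds = [(10.0, 2000.0), (0.01, np.pi / 2 - 0.01)]

        # Cost function: minimize launch speed (a crude proxy for propellant).
        sol = minimize(lambda c: c[0], x0=np.array([500.0, 0.5]), method="SLSQP",
                       bounds=bounds, constraints=constraints)
        v_opt, gamma_opt = sol.x
        print(f"speed = {v_opt:.1f} m/s, angle = {np.degrees(gamma_opt):.1f} deg, "
              f"range = {downrange(sol.x):.0f} m")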

  9. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  10. NCC-AUC: an AUC optimization method to identify multi-biomarker panel for cancer prognosis from genomic and clinical data.

    PubMed

    Zou, Meng; Liu, Zhaoqi; Zhang, Xiang-Sun; Wang, Yong

    2015-10-15

    In prognosis and survival studies, an important goal is to identify multi-biomarker panels with predictive power using molecular characteristics or clinical observations. Such analysis is often challenged by censored, small-sample-size, but high-dimensional genomic profiles or clinical data. Therefore, sophisticated models and algorithms are in pressing need. In this study, we propose a novel Area Under Curve (AUC) optimization method for multi-biomarker panel identification named Nearest Centroid Classifier for AUC optimization (NCC-AUC). Our method is motivated by the connection between the AUC score for classification accuracy evaluation and Harrell's concordance index in survival analysis. This connection allows us to convert the survival time regression problem to a binary classification problem. Then an optimization model is formulated to directly maximize AUC and meanwhile minimize the number of selected features to construct a predictor in the nearest centroid classifier framework. NCC-AUC demonstrates its performance in validations on both genomic data of breast cancer and clinical data of stage IB Non-Small-Cell Lung Cancer (NSCLC). For the genomic data, NCC-AUC outperforms Support Vector Machine (SVM) and Support Vector Machine-based Recursive Feature Elimination (SVM-RFE) in classification accuracy. It tends to select a multi-biomarker panel with low average redundancy and enriched biological meanings. Also, NCC-AUC is more significant in separation of low and high risk cohorts than the widely used Cox model (Cox proportional-hazards regression model) and L1-Cox model (L1 penalized in Cox model). These performance gains of NCC-AUC are quite robust across 5 subtypes of breast cancer. Further, on an independent clinical dataset, NCC-AUC outperforms SVM and SVM-RFE in predictive accuracy and is consistently better than the Cox model and L1-Cox model in grouping patients into high and low risk categories. In summary, NCC-AUC provides a rigorous optimization framework to
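
    The sketch below illustrates only the underlying idea of nearest-centroid scoring evaluated by AUC, on synthetic data; it does not implement the NCC-AUC optimization model, its feature-sparsity term, or the survival-to-classification conversion.

        # Illustration only: synthetic "low risk" vs "high risk" profiles, a nearest-centroid
        # score over a candidate biomarker panel, and training AUC as the evaluation metric.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        n_low, n_high, n_genes = 80, 80, 200
        X = np.vstack([rng.normal(0.0, 1.0, (n_low, n_genes)),     # low-risk profiles
                       rng.normal(0.4, 1.0, (n_high, n_genes))])   # high-risk (shifted mean)
        y = np.concatenate([np.zeros(n_low), np.ones(n_high)])

        def centroid_score(X, y, features):
            """Distance-to-centroid difference over a feature panel (higher = riskier)."""
            Xf = X[:, features]
            c0, c1 = Xf[y == 0].mean(axis=0), Xf[y == 1].mean(axis=0)
            return np.linalg.norm(Xf - c0, axis=1) - np.linalg.norm(Xf - c1, axis=1)

        panel = np.arange(10)            # a small candidate panel vs. all features
        print("panel AUC    :", round(roc_auc_score(y, centroid_score(X, y, panel)), 3))
        print("all-gene AUC :", round(roc_auc_score(y, centroid_score(X, y, np.arange(n_genes))), 3))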

  11. An Incentive-based Online Optimization Framework for Distribution Grids

    DOE PAGES

    Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun; ...

    2017-10-09

    This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.
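
    A toy version of the measurement-to-incentive feedback loop is sketched below, assuming an invented three-node linearized feeder, quadratic DER utilities, and a plain dual-ascent price update; the paper's algorithm, network model, and constraints are considerably richer.

        # Toy loop: operator turns voltage violations into incentive signals, DERs respond.
        # All numbers (sensitivities, utilities, limits, step size) are made up.
        import numpy as np

        R = np.array([[0.02, 0.01, 0.01],      # linearized voltage sensitivity matrix
                      [0.01, 0.03, 0.02],
                      [0.01, 0.02, 0.04]])
        v_nom = np.array([1.00, 1.01, 1.02])   # voltages with zero DER injection (p.u.)
        v_max = 1.05                           # upper voltage limit (p.u.)
        a = np.array([3.0, 2.5, 4.0])          # DER utility U_i(p) = a_i p - 0.5 b_i p^2
        b = np.array([1.0, 1.0, 1.5])
        p_cap = 5.0                            # per-DER injection limit
        mu = np.zeros(3)                       # dual variables on the voltage limit
        alpha = 100.0                          # dual (incentive) step size

        for _ in range(300):
            incentive = R.T @ mu                               # price signal per DER
            p = np.clip((a - incentive) / b, 0.0, p_cap)       # each DER's best response
            v = v_nom + R @ p                                  # "measured" voltages
            mu = np.maximum(0.0, mu + alpha * (v - v_max))     # operator dual update

        print("injections:", np.round(p, 2))
        print("voltages  :", np.round(v, 4), "(limit", v_max, ")")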

  12. An Incentive-based Online Optimization Framework for Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun

    This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.

  13. Memory-optimized shift operator alternating direction implicit finite difference time domain method for plasma

    NASA Astrophysics Data System (ADS)

    Song, Wanjun; Zhang, Hou

    2017-11-01

    By introducing the alternating direction implicit (ADI) technique and a memory-optimized algorithm into the shift operator (SO) finite difference time domain (FDTD) method, a memory-optimized SO-ADI FDTD method for nonmagnetized collisional plasma is proposed, and the corresponding formulae of the proposed method for programming are deduced. To further improve computational efficiency, the iteration method rather than the Gauss elimination method is employed to solve the equation set in the derivation of the formulae. Complicated transformations and convolutions are avoided in the proposed method compared with the Z transforms (ZT) ADI FDTD method and the piecewise linear JE recursive convolution (PLJERC) ADI FDTD method. The numerical dispersion of the SO-ADI FDTD method with different plasma frequencies and electron collision frequencies is analyzed and the appropriate ratio of grid size to the minimum wavelength is given. The accuracy of the proposed method is validated by the reflection coefficient test on a nonmagnetized collisional plasma sheet. The testing results show that the proposed method is advantageous for improving computational efficiency and saving computer memory. The reflection coefficient of a perfect electric conductor (PEC) sheet covered by multilayer plasma and the RCS of objects coated by plasma are calculated by the proposed method and the simulation results are analyzed.

  14. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    PubMed

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
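
    A generic sketch of the PSO-over-SVM-hyperparameters idea on synthetic data is given below; the paper additionally selects input features and uses TDR/PRBS-derived signals, neither of which is reproduced here.

        # Minimal PSO searching (log10 C, log10 gamma) of an RBF SVM; data are synthetic.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                                   n_classes=3, n_clusters_per_class=1, random_state=0)

        def fitness(pos):
            """Cross-validated accuracy at hyperparameters (log10 C, log10 gamma)."""
            return cross_val_score(SVC(C=10.0 ** pos[0], gamma=10.0 ** pos[1]), X, y, cv=3).mean()

        n_particles, n_iter = 12, 20
        lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
        pos = rng.uniform(lo, hi, (n_particles, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmax(pbest_val)].copy()

        w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive and social weights
        for _ in range(n_iter):
            r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([fitness(p) for p in pos])
            improved = vals > pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmax(pbest_val)].copy()

        print(f"best C = {10 ** gbest[0]:.3g}, gamma = {10 ** gbest[1]:.3g}, "
              f"CV accuracy = {pbest_val.max():.3f}")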

  15. Optimization of a chemical identification algorithm

    NASA Astrophysics Data System (ADS)

    Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren

    2010-04-01

    A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
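
    As a small illustration of using the Matthews Correlation Coefficient as a figure of merit when trading off detections against false alarms, the sketch below sweeps a decision threshold over simulated detection scores (not JCSD Raman data).

        # Illustration only: simulated background and target-present scores.
        import numpy as np
        from sklearn.metrics import matthews_corrcoef

        rng = np.random.default_rng(5)
        scores_bg = rng.normal(0.0, 1.0, 1000)    # background / interferent measurements
        scores_tgt = rng.normal(2.0, 1.0, 100)    # target-present measurements
        scores = np.concatenate([scores_bg, scores_tgt])
        truth = np.concatenate([np.zeros(1000, int), np.ones(100, int)])

        best_t, best_mcc = None, -1.0
        for t in np.linspace(scores.min(), scores.max(), 200):
            mcc = matthews_corrcoef(truth, (scores >= t).astype(int))
            if mcc > best_mcc:
                best_t, best_mcc = t, mcc

        pd = (scores_tgt >= best_t).mean()        # detection probability at that threshold
        pfa = (scores_bg >= best_t).mean()        # false-alarm probability
        print(f"best threshold = {best_t:.2f}, MCC = {best_mcc:.2f}, Pd = {pd:.2f}, Pfa = {pfa:.3f}")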

  16. A multi-factor GIS method to identify optimal geographic locations for electric vehicle (EV) charging stations

    NASA Astrophysics Data System (ADS)

    Zhang, Yongqin; Iman, Kory

    2018-05-01

    Fuel-based transportation is one of the major contributors to poor air quality in the United States. Electric vehicles (EVs) are potentially the cleanest transportation technology for the environment. This research developed a spatial suitability model to identify optimal geographic locations for installing EV charging stations for the travelling public. The model takes into account a variety of positive and negative factors to identify prime locations for installing EV charging stations in the Wasatch Front, Utah, where automobile emissions cause severe air pollution due to atmospheric inversion conditions near the valley floor. A walkable factor grid was created to store index scores from input factor layers to determine prime locations. Twenty-seven input factors, including land use, demographics, and employment centers, were analyzed. Each factor layer was analyzed to produce a summary statistic table to determine site suitability. Potential locations that exhibit high EV charging usage were identified and scored. A hot spot map was created to demonstrate high, moderate, and low suitability areas for installing EV charging stations. A spatially well-distributed EV charging system was then developed, aiming to reduce "range anxiety" among the traveling public. This spatial methodology addresses the complex problem of locating and establishing a robust EV charging station infrastructure, helping decision makers build a clean transportation infrastructure and eventually reduce environmental pollution.
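
    A toy weighted-overlay sketch of the suitability-scoring idea follows; the factor grids, weights, and tercile classification are invented and far simpler than the 27-layer GIS analysis described above.

        # Illustration only: normalize invented factor grids to 0-1, weight, sum, classify.
        import numpy as np

        rng = np.random.default_rng(6)
        shape = (50, 50)                               # a small "walkable factor grid"
        factors = {
            "traffic_volume": rng.random(shape),       # positive factor (higher is better)
            "employment_density": rng.random(shape),   # positive factor
            "distance_to_grid": rng.random(shape),     # negative factor (higher is worse)
        }
        weights = {"traffic_volume": 0.5, "employment_density": 0.3, "distance_to_grid": -0.2}

        score = np.zeros(shape)
        for name, grid in factors.items():
            norm = (grid - grid.min()) / (grid.max() - grid.min())   # rescale to 0-1
            score += weights[name] * norm

        low, high = np.quantile(score, [1 / 3, 2 / 3])   # terciles of the composite score
        classes = np.digitize(score, [low, high])        # 0 = low, 1 = moderate, 2 = high
        print("high-suitability cells:", int((classes == 2).sum()), "of", score.size)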

  17. Optimizing the Operation of Windfarms, Energy Storage and Flexible Loads in Modern Power Systems and Deregulated Electricity Markets

    NASA Astrophysics Data System (ADS)

    Dar, Zamiyad

    The amount of wind energy in power systems is increasing at a significant rate. With this increased penetration, there are certain problems associated with the operation of windfarms which need careful attention. On the operations side, the wake effects of upstream wind turbines on downstream wind turbines can cause a reduction in the total generated power of a windfarm. On the market side, the fluctuation of real-time prices can make the operation of windfarms less profitable. Similarly, the intermittent nature of wind power prevents the windfarms from participating in the day-ahead and forward markets. On the system side, the volatile nature of wind speeds is also an obstacle for windfarms to provide frequency regulation to the system. In this thesis, we address these issues and optimize the operation of windfarms in power systems and deregulated electricity markets. First, the total power generation in a windfarm is maximized by using the yaw angle of wind turbines as a control variable. We extend the existing wake models to include the effects of yaw misalignment and wake deflection of wind turbines. A numerical study is performed to find the optimal values of induction factor and yaw misalignment angle of wind turbines in a single row of a windfarm for achieving the maximum total power with wake effects. The numerical study shows that the maximum power is achieved by keeping the induction factor close to 1/3 and only changing the yaw angle to deflect the wake. We then propose a Dynamic Programming Framework (DPF) to maximize the total power production of a windfarm using yaw angle as the control variable. We compare the windfarm efficiency achieved with our DPF with the efficiency values obtained through a greedy control strategy and induction factor optimization. We also extend our expressions to a windfarm with multiple rows and columns of turbines and perform simulations on the 3x3 and 4x4 grid topologies. Our results show that the optimal induction factor for
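
    The yaw trade-off can be illustrated with a deliberately crude two-turbine toy (a Jensen-style deficit plus an invented deflection term), which is far simpler than the wake models and dynamic programming framework in the thesis; all constants below are made up.

        # Toy model: yawing the upstream turbine loses some of its power but deflects its
        # wake away from the downstream rotor, which can raise total farm power.
        import numpy as np

        def farm_power(yaw_deg, spacing_d=4.0, k_wake=0.05, a=1.0 / 3.0):
            """Normalized two-turbine power as a function of upstream yaw angle (degrees)."""
            yaw = np.radians(yaw_deg)
            p_up = np.cos(yaw) ** 3                                    # upstream power loss
            deficit = 2.0 * a / (1.0 + 2.0 * k_wake * spacing_d) ** 2  # Jensen-style deficit
            offset = 0.5 * np.sin(yaw) * spacing_d                     # deflection (rotor diameters)
            overlap = max(0.0, 1.0 - abs(offset))                      # rotor fraction still waked
            p_down = (1.0 - deficit * overlap) ** 3
            return p_up + p_down

        yaws = np.arange(0.0, 41.0, 1.0)
        powers = [farm_power(y) for y in yaws]
        best = int(np.argmax(powers))
        print(f"greedy (0 deg yaw): {powers[0]:.3f}   best yaw = {yaws[best]:.0f} deg: {powers[best]:.3f}")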

  18. Structure-Guided Lead Optimization of Triazolopyrimidine-Ring Substituents Identifies Potent Plasmodium falciparum Dihydroorotate Dehydrogenase Inhibitors with Clinical Candidate Potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coteron, Jose M.; Marco, Maria; Esquivias, Jorge

    2012-02-27

    Drug therapy is the mainstay of antimalarial therapy, yet current drugs are threatened by the development of resistance. In an effort to identify new potential antimalarials, we have undertaken a lead optimization program around our previously identified triazolopyrimidine-based series of Plasmodium falciparum dihydroorotate dehydrogenase (PfDHODH) inhibitors. The X-ray structure of PfDHODH was used to inform the medicinal chemistry program allowing the identification of a potent and selective inhibitor (DSM265) that acts through DHODH inhibition to kill both sensitive and drug resistant strains of the parasite. This compound has similar potency to chloroquine in the humanized SCID mouse P. falciparum model, can be synthesized by a simple route, and rodent pharmacokinetic studies demonstrated it has excellent oral bioavailability, a long half-life and low clearance. These studies have identified the first candidate in the triazolopyrimidine series to meet previously established progression criteria for efficacy and ADME properties, justifying further development of this compound toward clinical candidate status.

  19. Whole Genome Re-Sequencing Identifies a Quantitative Trait Locus Repressing Carbon Reserve Accumulation during Optimal Growth in Chlamydomonas reinhardtii

    PubMed Central

    Goold, Hugh Douglas; Nguyen, Hoa Mai; Kong, Fantao; Beyly-Adriano, Audrey; Légeret, Bertrand; Billon, Emmanuelle; Cuiné, Stéphan; Beisson, Fred; Peltier, Gilles; Li-Beisson, Yonghua

    2016-01-01

    Microalgae have emerged as a promising source for biofuel production. Massive oil and starch accumulation in microalgae is possible, but occurs mostly when biomass growth is impaired. The molecular networks underlying the negative correlation between growth and reserve formation are not known. Thus isolation of strains capable of accumulating carbon reserves during optimal growth would be highly desirable. To this end, we screened an insertional mutant library of Chlamydomonas reinhardtii for alterations in oil content. A mutant accumulating five times more oil and twice as much starch as the wild type during optimal growth was isolated and named constitutive oil accumulator 1 (coa1). Growth in photobioreactors under highly controlled conditions revealed that the increase in oil and starch content in coa1 was dependent on light intensity. Genetic analysis and DNA hybridization pointed to a single insertional event responsible for the phenotype. Whole genome re-sequencing identified in coa1 a >200 kb deletion on chromosome 14 containing 41 genes. This study demonstrates that (1) the generation of algal strains accumulating higher reserve amounts without compromising biomass accumulation is feasible; (2) light is an important parameter in phenotypic analysis; and (3) a chromosomal region (quantitative trait locus) acts as a suppressor of carbon reserve accumulation during optimal growth. PMID:27141848

  20. Machining Parameters Optimization using Hybrid Firefly Algorithm and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Farahlina Johari, Nur; Zain, Azlan Mohd; Haszlinna Mustaffa, Noorfa; Udin, Amirmudin

    2017-09-01

    The Firefly Algorithm (FA) is a metaheuristic algorithm inspired by the flashing behavior of fireflies and the phenomenon of bioluminescent communication; in this research it is used to optimize the machining parameters (feed rate, depth of cut, and spindle speed). The algorithm is hybridized with Particle Swarm Optimization (PSO) to discover better solutions when exploring the search space. The objective function from previous research is used to optimize the machining parameters in a turning operation. The optimal machining cutting parameters estimated by FA that lead to a minimum surface roughness are validated using an ANOVA test.
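
    A minimal firefly-algorithm sketch on an invented surface-roughness model is shown below; the study itself hybridizes FA with PSO and reuses the objective function of earlier work, neither of which is reproduced here.

        # Minimal FA minimizing a toy empirical Ra(feed, depth, speed) model.
        import numpy as np

        rng = np.random.default_rng(7)

        def roughness(x):
            """Toy Ra model: rises with feed and depth of cut, falls with cutting speed."""
            feed, depth, speed = x
            return 1.8 * feed ** 1.2 * depth ** 0.3 * speed ** -0.4

        lo = np.array([0.05, 0.5, 100.0])    # feed (mm/rev), depth (mm), speed (m/min)
        hi = np.array([0.40, 2.5, 400.0])

        n_fireflies, n_iter = 15, 60
        beta0, gamma, alpha_move = 1.0, 1.0, 0.05
        X = rng.uniform(lo, hi, (n_fireflies, 3))
        brightness = np.array([roughness(x) for x in X])   # lower Ra = brighter firefly

        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if brightness[j] < brightness[i]:      # j is brighter, so i moves toward j
                        r2 = np.sum(((X[i] - X[j]) / (hi - lo)) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        X[i] += beta * (X[j] - X[i]) + alpha_move * (hi - lo) * (rng.random(3) - 0.5)
                        X[i] = np.clip(X[i], lo, hi)
                        brightness[i] = roughness(X[i])

        best = np.argmin(brightness)
        print("best (feed, depth, speed):", np.round(X[best], 3), "Ra =", round(brightness[best], 4))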

  1. Optimizing clinical operations as part of a global emergency medicine initiative in Kumasi, Ghana: application of Lean manufacturing principals to low-resource health systems.

    PubMed

    Carter, Patrick M; Desmond, Jeffery S; Akanbobnaab, Christopher; Oteng, Rockefeller A; Rominski, Sarah D; Barsan, William G; Cunningham, Rebecca M

    2012-03-01

    Although many global health programs focus on providing clinical care or medical education, improving clinical operations can have a significant effect on patient care delivery, especially in developing health systems without high-level operations management. Lean manufacturing techniques have been effective in decreasing emergency department (ED) length of stay, patient waiting times, numbers of patients leaving without being seen, and door-to-balloon times for ST-elevation myocardial infarction in developed health systems, but use of Lean in low to middle income countries with developing emergency medicine (EM) systems has not been well characterized. To describe the application of Lean manufacturing techniques to improve clinical operations at Komfo Anokye Teaching Hospital (KATH) in Ghana and to identify key lessons learned to aid future global EM initiatives. A 3-week Lean improvement program focused on the hospital admissions process at KATH was completed by a 14-person team in six stages: problem definition, scope of project planning, value stream mapping, root cause analysis, future state planning, and implementation planning. The authors identified eight lessons learned during our use of Lean to optimize the operations of an ED in a global health setting: 1) the Lean process aided in building a partnership with Ghanaian colleagues; 2) obtaining and maintaining senior institutional support is necessary and challenging; 3) addressing power differences among the team to obtain feedback from all team members is critical to successful Lean analysis; 4) choosing a manageable initial project is critical to influence long-term Lean use in a new environment; 5) data intensive Lean tools can be adapted and are effective in a less resourced health system; 6) several Lean tools focused on team problem-solving techniques worked well in a low-resource system without modification; 7) using Lean highlighted that important changes do not require an influx of resources; and

  2. Optimizing Clinical Operations as part of a Global Emergency Medicine Initiative in Kumasi, Ghana: Application of Lean Manufacturing Principals to Low Resource Health Systems

    PubMed Central

    Carter, Patrick M.; Desmond, Jeffery S.; Akanbobnaab, Christopher; Oteng, Rockefeller A.; Rominski, Sarah; Barsan, William G.; Cunningham, Rebecca

    2012-01-01

    Background Although many global health programs focus on providing clinical care or medical education, improving clinical operations can have a significant effect on patient care delivery, especially in developing health systems without high-level operations management. Lean manufacturing techniques have been effective in decreasing emergency department (ED) length of stay, patient waiting times, numbers of patients leaving without being seen, and door-to-balloon times for ST-elevation myocardial infarction in developed health systems; but use of Lean in low to middle income countries with developing emergency medicine systems has not been well characterized. Objectives To describe the application of Lean manufacturing techniques to improve clinical operations at Komfo Anokye Teaching Hospital in Ghana and to identify key lessons learned to aid future global EM initiatives. Methods A three-week Lean improvement program focused on the hospital admissions process at Komfo Anokye Teaching Hospital was completed by a 14-person team in six stages: problem definition, scope of project planning, value stream mapping, root cause analysis, future state planning, and implementation planning. Results The authors identified eight lessons learned during our use of Lean to optimize the operations of an ED in a global health setting: 1) the Lean process aided in building a partnership with Ghanaian colleagues; 2) obtaining and maintaining senior institutional support is necessary and challenging; 3) addressing power differences among the team to obtain feedback from all team members is critical to successful Lean analysis; 4) choosing a manageable initial project is critical to influence long-term Lean use in a new environment; 5) data intensive Lean tools can be adapted and are effective in a less resourced health system; 6) several Lean tools focused on team problem solving techniques worked well in a low resource system without modification; 7) using Lean highlighted that

  3. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    IPOST is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  4. Optimization Review: Ogallala Ground Water Contamination Superfund Site, Operable Unit 2 (Tip Top Cleaners), Ogallala, Nebraska

    EPA Pesticide Factsheets

    The Ogallala Ground Water Contamination Superfund site was identified in 1989 through municipal well sampling. Tetrachloroethene (PCE), a solvent commonly used in dry cleaner operations, was the primary ground water target chemical of concern (COC) that..

  5. Real Time Optimal Control of Supercapacitor Operation for Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yusheng; Panwar, Mayank; Mohanpurkar, Manish

    2016-07-01

    Supercapacitors are gaining wider application in power systems due to their fast dynamic response. Utilizing supercapacitors by means of power electronics interfaces for power compensation is a proven, effective technique for applications such as frequency restoration, provided that the cost of supercapacitor maintenance as well as the energy loss in the power electronics interfaces are addressed. It is infeasible to use traditional optimization control methods to mitigate the impacts of frequent cycling. This paper proposes a Front End Controller (FEC) using Generalized Predictive Control featuring real-time receding optimization. The optimization constraints are based on cost and thermal management to enhance the utilization efficiency of supercapacitors. A rigorous mathematical derivation is conducted and test results acquired from a Digital Real Time Simulator are provided to demonstrate effectiveness.

  6. Determining effective forecast horizons for multi-purpose reservoirs with short- and long-term operating objectives

    NASA Astrophysics Data System (ADS)

    Luchner, Jakob; Anghileri, Daniela; Castelletti, Andrea

    2017-04-01

    Real-time control of multi-purpose reservoirs can benefit significantly from hydro-meteorological forecast products. Because of their reliability, the most used forecasts range on time scales from hours to a few days and are suitable for short-term operation targets such as flood control. In recent years, hydro-meteorological forecasts have become more accurate and reliable on longer time scales, which are more relevant to long-term reservoir operation targets such as water supply. While the forecast quality of such products has been studied extensively, the forecast value, i.e. the operational effectiveness of using forecasts to support water management, has been explored only to a limited extent. It is comparatively easy to identify the most effective forecasting information needed to design reservoir operation rules for flood control, but it is not straightforward to identify which forecast variable and lead time are needed to define effective hedging rules for operational targets with slow dynamics such as water supply. The task is even more complex when multiple targets, with diverse slow and fast dynamics, are considered at the same time. In these cases, the relative importance of different pieces of information, e.g. magnitude and timing of peak flow rate and accumulated inflow on different time lags, may vary depending on the season or the hydrological conditions. In this work, we analyze the relationship between operational forecast value and streamflow forecast horizon for different multi-purpose reservoir trade-offs. We use the Information Selection and Assessment (ISA) framework to identify the most effective forecast variables and horizons for informing multi-objective reservoir operation over short- and long-term temporal scales. The ISA framework is an automatic iterative procedure to discriminate the information with the highest potential to improve multi-objective reservoir operating performance. Forecast variables and horizons are selected using a feature

  7. Operational modal analysis applied to the concert harp

    NASA Astrophysics Data System (ADS)

    Chomette, B.; Le Carrou, J.-L.

    2015-05-01

    Operational modal analysis (OMA) methods are useful to extract modal parameters of operating systems. These methods are particularly interesting for investigating the modal basis of string instruments during operation, avoiding certain disadvantages of conventional methods. However, the excitation in the case of string instruments is not optimal for OMA due to the presence of damped harmonic components and low noise in the disturbance signal. Therefore, the present study investigates the least-square complex exponential (LSCE) and the modified least-square complex exponential methods in the case of a string instrument to identify modal parameters of the instrument when it is played. The efficiency of the approach is experimentally demonstrated on a concert harp excited by some of its strings, and the two methods are compared to a conventional modal analysis. The results show that OMA allows us to identify modes particularly present in the instrument's response with good estimation, especially if they are close to the excitation frequency, with the modified LSCE method.
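
    The core LSCE (Prony-type) step can be sketched on a synthetic two-mode free decay: fit a linear-prediction model to the response, take the roots of its characteristic polynomial, and convert the poles to frequencies and damping ratios. The harmonic and noise issues that motivate the modified LSCE are not handled in this sketch.

        # Illustration only: synthetic two-mode decay, linear prediction, pole extraction.
        import numpy as np

        fs = 2000.0                                 # sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        modes = [(220.0, 0.010, 1.0), (440.0, 0.005, 0.6)]   # (freq Hz, damping ratio, amplitude)
        h = sum(A * np.exp(-z * 2 * np.pi * f * t) * np.cos(2 * np.pi * f * np.sqrt(1 - z ** 2) * t)
                for f, z, A in modes)
        h = h + 0.01 * np.random.default_rng(8).normal(size=t.size)

        p = 2 * len(modes)                          # model order: two poles per mode
        # Linear prediction h[n] = -(a1 h[n-1] + ... + ap h[n-p]), solved in least squares.
        N = len(h)
        A_mat = np.column_stack([h[p - k - 1:N - k - 1] for k in range(p)])
        a = np.linalg.lstsq(A_mat, -h[p:N], rcond=None)[0]

        poles = np.roots(np.concatenate(([1.0], a)))        # z-domain poles
        s = np.log(poles) * fs                              # continuous-time poles
        freqs, zetas = np.abs(s.imag) / (2 * np.pi), -s.real / np.abs(s)
        for f, z in sorted(set(zip(np.round(freqs, 1), np.round(zetas, 4)))):
            print(f"mode: {f:7.1f} Hz, damping ratio {z:.4f}")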

  8. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in the local optimum when handling functions that have a narrow curving valley or a high eccentric ellipse, or complex multimodal functions. As a result, we proposed an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.

  9. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    PubMed Central

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions with a narrow curving valley or a highly eccentric ellipse, as well as complex multimodal functions. To address this, we propose an enhanced ABC algorithm, called EABC, that introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. Simulation results on a suite of unimodal and multimodal benchmark functions show that EABC outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  10. Impact of resident training on operative time and safety in hemithyroidectomy.

    PubMed

    Folsom, Craig; Serbousek, Kimberly; Lydiatt, William; Rieke, Katherine; Sayles, Harlan; Smith, Russell; Panwar, Aru

    2017-06-01

    The purpose of this study was to present our assessment of the impact of resident participation on operative duration and outcomes after hemithyroidectomy, which may identify opportunities for optimizing educational programs, reducing the cost of healthcare delivery, and maximizing patient safety, while continuing to train a competent physician workforce for the future. The American College of Surgeons' National Surgical Quality Improvement Program (ACS NSQIP) dataset from 2006 to 2012 identified 13,151 adult patients who underwent hemithyroidectomy. Differences in operative duration, postoperative complications, reoperation, and readmission rates were assessed based on stratification by resident participation in surgery. Compared with operations performed by attending surgeons alone, resident participation with attending supervision prolonged the operative duration by 10.5% (82.5 minutes vs 91.2 minutes; p < .0001). The incidence of readmission and wound complications was higher for patients who underwent surgery with resident participation. Resident participation in hemithyroidectomy may be associated with increased operative duration, higher incidence of wound complications, and readmission. © 2017 Wiley Periodicals, Inc. Head Neck 39: 1212-1217, 2017.

  11. Energy and water quality management systems for water utility's operations: a review.

    PubMed

    Cherchi, Carla; Badruzzaman, Mohammad; Oppenheimer, Joan; Bros, Christopher M; Jacangelo, Joseph G

    2015-04-15

    Holistic management of water and energy resources is critical for water utilities facing increasing energy prices, water supply shortages and stringent regulatory requirements. In the early 1990s, the concept of an integrated Energy and Water Quality Management System (EWQMS) was developed as an operational optimization framework for solving water quality, water supply and energy management problems simultaneously. Approximately twenty water utilities have implemented an EWQMS by interfacing commercial or in-house software optimization programs with existing control systems. For utilities with an installed EWQMS, operating cost savings of 8-15% have been reported due to greater use of cheaper tariff periods and better operating efficiencies, resulting in a reduction in energy consumption of ∼6-9%. This review provides the current state of knowledge on typical EWQMS structural features and operational strategies, and analyzes their benefits and drawbacks. It also highlights the challenges encountered during installation and implementation of EWQMS and identifies the knowledge gaps that should motivate new research efforts. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. 41 CFR 102-85.175 - Are the standard level services for cleaning, mechanical operation, and maintenance identified in...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Are the standard level services for cleaning, mechanical operation, and maintenance identified in an OA? 102-85.175 Section 102-85.175 Public Contracts and Property Management Federal Property Management Regulations System (Continued...

  13. Throughput Optimization of Continuous Biopharmaceutical Manufacturing Facilities.

    PubMed

    Garcia, Fernando A; Vandiver, Michael W

    2017-01-01

    In order to operate profitably under different product demand scenarios, biopharmaceutical companies must design their facilities with mass output flexibility in mind. Traditional biologics manufacturing technologies pose operational challenges in this regard due to their high costs and slow equipment turnaround times, restricting the types of products and mass quantities that can be processed. Modern plant design, however, has facilitated the development of lean and efficient bioprocessing facilities through footprint reduction and adoption of disposable and continuous manufacturing technologies. These development efforts have proven to be crucial in seeking to drastically reduce the high costs typically associated with the manufacturing of recombinant proteins. In this work, mathematical modeling is used to optimize annual production schedules for a single-product commercial facility operating with a continuous upstream and discrete batch downstream platform. Utilizing cell culture duration and volumetric productivity as process variables in the model, and annual plant throughput as the optimization objective, 3-D surface plots are created to understand the effect of process and facility design on expected mass output. The model shows that once a plant has been fully debottlenecked it is capable of processing well over a metric ton of product per year. Moreover, the analysis helped to uncover a major limiting constraint on plant performance, the stability of the neutralized viral inactivated pool, which may indicate that this should be a focus of attention during future process development efforts. LAY ABSTRACT: Biopharmaceutical process modeling can be used to design and optimize manufacturing facilities and help companies achieve a predetermined set of goals. One way to perform optimization is by making the most efficient use of process equipment in order to minimize the expenditure of capital, labor and plant resources. To that end, this paper introduces a
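
    The 3-D throughput surface described above can be reproduced with a toy mass balance: annual mass output as a function of culture duration and volumetric productivity, scanned over a grid. Every number below (bioreactor volume, downstream yield, turnaround time, operating days) is a hypothetical placeholder, not a figure from the paper.

```python
import numpy as np

# Hypothetical facility parameters (placeholders, not values from the study).
reactor_volume_l = 2000.0      # perfusion bioreactor working volume (L)
downstream_yield = 0.7         # overall downstream recovery (fraction)
turnaround_days = 14.0         # clean/prep time between culture runs
operating_days = 330.0         # available production days per year

durations = np.linspace(20, 60, 41)          # culture duration (days)
productivities = np.linspace(0.5, 2.0, 31)   # volumetric productivity (g/L/day)
D, P = np.meshgrid(durations, productivities)

runs_per_year = operating_days / (D + turnaround_days)
mass_per_run_kg = P * reactor_volume_l * D * downstream_yield / 1000.0
annual_throughput_kg = runs_per_year * mass_per_run_kg   # surface to plot
```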

  14. GIS based location optimization for mobile produced water treatment facilities in shale gas operations

    NASA Astrophysics Data System (ADS)

    Kitwadkar, Amol Hanmant

    Over 60% of the nation's total energy is supplied by oil and natural gas together, and this demand for energy will continue to grow in the future (Radler et al. 2012). The growing demand is pushing the exploration and exploitation of onshore oil and natural gas reservoirs. Hydraulic fracturing has proven not only to create jobs and achieve economic growth, but also to exert considerable stress on natural resources such as water. As water is one of the most important factors in the world of hydraulic fracturing, proper fluids management during the development of a field of operation is perhaps the key element to address these issues. Almost 30% of the water used during hydraulic fracturing comes out of the well in the form of flowback water during the first month after the well is fractured (Bai et al. 2012). Handling this large amount of water coming out of newly fractured wells is a major issue, since the volume of water after this period drops off and remains constant for a long time (Bai et al. 2012), so permanent facilities can be constructed to handle the water over a longer period. This paper illustrates the development of a GIS-based tool for optimizing the location of a mobile produced water treatment facility while development is still occurring. A methodology based on multi-criteria decision analysis (MCDA) was developed to optimize the location of the mobile treatment facilities. The MCDA criteria include well density, ease of access (road access for truck hauls), piping minimization if piping is used, and the volume of water produced. The area of study is 72 square miles east of Greeley, CO in the Wattenberg Field in northeastern Colorado that will be developed for oil and gas production starting in the year 2014. A quarterly analysis is done so that the effect of future development plans and current circumstances on the location can be observed from quarter to quarter. This will help the operators to
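
    A minimal weighted-sum MCDA scoring step of the kind the thesis describes: each candidate grid cell receives normalized criterion scores (well density, road access, piping distance, produced water volume) combined with weights, and the best-scoring cell is selected. The criterion names mirror the abstract; the weights and data values are illustrative assumptions.

```python
import numpy as np

def normalize(x, benefit=True):
    """Scale a criterion to [0, 1]; flip it when smaller values are better."""
    x = (x - x.min()) / (x.max() - x.min())
    return x if benefit else 1.0 - x

# Hypothetical rasters flattened to one value per candidate cell.
well_density  = np.array([5, 12, 9, 3, 15], dtype=float)    # wells per section
road_distance = np.array([2.0, 0.5, 1.2, 3.5, 0.8])         # km to nearest road
pipe_distance = np.array([4.0, 1.5, 2.0, 5.0, 1.0])         # km of new pipe
water_volume  = np.array([30, 80, 60, 20, 95], dtype=float) # bbl/day produced

weights = {"wells": 0.3, "road": 0.2, "pipe": 0.2, "water": 0.3}

score = (weights["wells"] * normalize(well_density)
         + weights["road"] * normalize(road_distance, benefit=False)
         + weights["pipe"] * normalize(pipe_distance, benefit=False)
         + weights["water"] * normalize(water_volume))

best_cell = int(np.argmax(score))   # candidate site for the mobile facility
```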

  15. Matching relations for optimal entanglement concentration and purification

    PubMed Central

    Kong, Fan-Zhen; Xia, Hui-Zhi; Yang, Ming; Yang, Qing; Cao, Zhuo-Liang

    2016-01-01

    The bilateral controlled NOT (CNOT) operation plays a key role in the standard entanglement purification process, but the CNOT operation may not be the optimal joint operation in the sense that the output entanglement is maximized. In this paper, the CNOT operations in both the Schmidt-projection based entanglement concentration and the entanglement purification schemes are replaced with a general joint unitary operation, and the optimal matching relations between the entangling power of the joint unitary operation and the non-maximally entangled channel are found for optimizing the entanglement increment or the output entanglement. The result is somewhat counter-intuitive for entanglement concentration. The output entanglement is maximized when the entangling power of the joint unitary operation and the quantum channel satisfy a certain relation. There exists a variety of joint operations with non-maximal entangling power that can induce a maximal output entanglement, which greatly broadens the set of potential joint operations in entanglement concentration. In addition, the entanglement increment in the purification process is maximized only by joint unitary operations (including CNOT) with maximal entangling power. PMID:27189800

  16. Anesthesiologists' and surgeons' perceptions about routine pre-operative testing in low-risk patients: application of the Theoretical Domains Framework (TDF) to identify factors that influence physicians' decisions to order pre-operative tests.

    PubMed

    Patey, Andrea M; Islam, Rafat; Francis, Jill J; Bryson, Gregory L; Grimshaw, Jeremy M

    2012-06-09

    Routine pre-operative tests for anesthesia management are often ordered by both anesthesiologists and surgeons for healthy patients undergoing low-risk surgery. The Theoretical Domains Framework (TDF) was developed to investigate determinants of behaviour and identify potential behaviour change interventions. In this study, the TDF is used to explore anaesthesiologists' and surgeons' perceptions of ordering routine tests for healthy patients undergoing low-risk surgery. Sixteen clinicians (eleven anesthesiologists and five surgeons) throughout Ontario were recruited. An interview guide based on the TDF was developed to identify beliefs about pre-operative testing practices. Content analysis of physicians' statements into the relevant theoretical domains was performed. Specific beliefs were identified by grouping similar utterances of the interview participants. Relevant domains were identified by noting the frequencies of the beliefs reported, presence of conflicting beliefs, and perceived influence on the performance of the behaviour under investigation. Seven of the twelve domains were identified as likely relevant to changing clinicians' behaviour about pre-operative test ordering for anesthesia management. Key beliefs were identified within these domains including: conflicting comments about who was responsible for the test-ordering (Social/professional role and identity); inability to cancel tests ordered by fellow physicians (Beliefs about capabilities and social influences); and the problem with tests being completed before the anesthesiologists see the patient (Beliefs about capabilities and Environmental context and resources). Often, tests were ordered by an anesthesiologist based on who may be the attending anesthesiologist on the day of surgery while surgeons ordered tests they thought anesthesiologists may need (Social influences). There were also conflicting comments about the potential consequences associated with reducing testing, from negative

  17. Application of Multi-Objective Human Learning Optimization Method to Solve AC/DC Multi-Objective Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Cao, Jia; Yan, Zheng; He, Guangyu

    2016-06-01

    This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). First, the AC/DC MOPF model including wind farms is constructed; it comprises three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived, involving an individual learning operator, a social learning operator, a random exploration learning operator and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGA-II) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a strong ability to locate optimal solutions. Above all, the MOHLO method obtains a more complete Pareto front than NSGA-II. However, the choice of a final solution from the Pareto front rests with the decision makers, who may favor the economic point of view or the energy saving and emission reduction point of view.
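
    Both MOHLO and NSGA-II rank candidates with non-dominated sorting plus a crowding-distance index; the snippet below is a minimal, generic implementation of the crowding-distance step for one non-dominated front. Array shapes and names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def crowding_distance(objectives):
    """Crowding distance for a set of mutually non-dominated solutions.

    objectives: array of shape (n_solutions, n_objectives).
    Returns one distance per solution; boundary solutions get infinity so
    they are always preferred when thinning a crowded front.
    """
    n, m = objectives.shape
    distance = np.zeros(n)
    for j in range(m):
        order = np.argsort(objectives[:, j])
        distance[order[0]] = distance[order[-1]] = np.inf
        span = objectives[order[-1], j] - objectives[order[0], j]
        if span == 0:
            continue
        for i in range(1, n - 1):
            distance[order[i]] += (
                objectives[order[i + 1], j] - objectives[order[i - 1], j]
            ) / span
    return distance
```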

  18. Vehicle System Integration, Optimization, and Robustness

    Science.gov Websites

    Thrust Area 5: Vehicle System Integration, Optimization, and Robustness — addresses not only optimal design of the vehicle components, but also optimization of the interactions between them.

  19. SU-E-T-266: Development of Evaluation System of Optimal Synchrotron Controlling Parameter for Spot Scanning Proton Therapy with Multiple Gate Irradiations in One Operation Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, T; Fujii, Y; Hitachi Ltd., Hitachi-shi, Ibaraki

    2015-06-15

    Purpose: We have developed a gated spot scanning proton beam therapy system with real-time tumor-tracking. This system has the ability of multiple-gated irradiation in a single synchrotron operation cycle, controlling the wait-time for consecutive gate signals during a flat-top phase so that the decrease in irradiation efficiency induced by irregular variation of gate signals is reduced. Our previous studies have shown that a 200 ms wait-time is appropriate to increase the average irradiation efficiency, but the optimal wait-time can vary patient by patient and day by day. In this research, we have developed an evaluation system of the optimal wait-time in each irradiation based on the log data of the real-time-image gated proton beam therapy (RGPT) system. Methods: The developed system consists of a logger for operation of the RGPT system and software for evaluation of the optimal wait-time. The logger records the timing of gate on/off, the timing and dose of delivered beam spots, the beam energy and the timing of X-ray irradiation. The evaluation software calculates the irradiation time for different wait-times by simulating the multiple-gated irradiation operation using several pieces of timing information. Actual data preserved in the log are used for the gate on and off times, spot irradiation times, and the time moving to the next spot. Design values are used for the acceleration and deceleration times. We applied this system to a patient treated with the RGPT system. Results: The evaluation system found an optimal wait-time of 390 ms that reduced the irradiation time by about 10%. The irradiation time with the actual wait-time used in treatment was reproduced with an accuracy of 0.2 ms. Conclusion: For a spot scanning proton therapy system with multiple-gated irradiation in one synchrotron operation cycle, an evaluation system of the optimal wait-time in each irradiation based on log data has been developed. Funding Support: Japan Society for the Promotion of Science (JSPS) through the

  20. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operations research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  1. Toward a new spacecraft optimal design lifetime? Impact of marginal cost of durability and reduced launch price

    NASA Astrophysics Data System (ADS)

    Snelgrove, Kailah B.; Saleh, Joseph Homer

    2016-10-01

    The average design lifetime of satellites continues to increase, in part due to the expectation that the satellite cost per operational day decreases monotonically with increased design lifetime. In this work, we challenge this expectation by revisiting the durability choice problem for spacecraft in the face of reduced launch prices and under various cost of durability models. We first provide a brief overview of the economic thought on durability and highlight its limitations as they pertain to our problem (e.g., the assumption of zero marginal cost of durability). We then investigate the combined influence of spacecraft cost of durability and launch price, and we identify conditions that give rise to cost-optimal design lifetimes that are shorter than the longest lifetime technically achievable. For example, we find that high costs of durability favor short design lifetimes, and that under these conditions the optimal choice is relatively robust to reductions in launch prices. By contrast, lower costs of durability favor longer design lifetimes, and the optimal choice is highly sensitive to reductions in launch price. In both cases, reductions in launch prices translate into reductions of the optimal design lifetime. Our results identify a number of situations for which satellite operators would be better served by spacecraft with shorter design lifetimes. Beyond cost issues and repeat purchases, other implications of long design lifetimes include the increased risk of technological slowdown given the lower frequency of purchases and technology refresh, and the increased risk for satellite operators that the spacecraft will be technologically obsolete before the end of its life (with the corollary of loss of value and competitive advantage). We conclude with the recommendation that, should pressure to extend spacecraft design lifetime continue, satellite manufacturers should explore opportunities to lease their spacecraft to operators, or to take a stake in the ownership

  2. Towards Robust Designs Via Multiple-Objective Optimization Methods

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2006-01-01

    Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may differ from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The

  3. Integrating a Typhoon Event Database with an Optimal Flood Operation Model on the Real-Time Flood Control of the Tseng-Wen Reservoir

    NASA Astrophysics Data System (ADS)

    Chen, Y. W.; Chang, L. C.

    2012-04-01

    Typhoons, which normally bring a great amount of precipitation, are the primary natural hazard in Taiwan during the flooding season. Because the plentiful rainfall brought by typhoons is normally stored for use in the subsequent drought period, determining release strategies for reservoir flood operation is important: such strategies must simultaneously consider reservoir safety, flood damage in the downstream plain, and the water resource stored in the reservoir after the typhoon. This study proposes a two-step process. First, it develops an optimal flood operation model (OFOM) for flood control planning and applies the OFOM to the Tseng-wun reservoir and its downstream plain. Second, integrating a typhoon event database with the OFOM gives the planning model the ability to deal with a real-time flood control problem; this extension is named the real-time flood operation model (RTFOM). Three conditions are considered in the proposed models, OFOM and RTFOM: the safety of the reservoir itself, the reservoir storage after typhoons and the impact of flooding in the plain area. In addition, the flood operation guideline announced by the government is also considered. These conditions and the guideline are formulated as an optimization problem, which is solved by a genetic algorithm (GA) in this study. Furthermore, a distributed runoff model, the kinematic-wave geomorphic instantaneous unit hydrograph (KW-GIUH), and a river flow simulation model, HEC-RAS, are used to simulate the river water level of the Tseng-wun basin in the plain area, and the simulated level serves as an index of the impact of flooding. Because the simulated levels must be re-calculated iteratively in the optimization model, applying a recursive artificial neural network (recursive ANN) instead of the HEC-RAS model can significantly reduce the computational burden of

  4. Optimizing noise control strategy in a forging workshop.

    PubMed

    Razavi, Hamideh; Ramazanifar, Ehsan; Bagherzadeh, Jalal

    2014-01-01

    In this paper, a computer program based on a genetic algorithm is developed to find an economic solution for noise control in a forging workshop. Initially, input data, including characteristics of sound sources, human exposure, abatement techniques, and production plans, are inserted into the model. Using sound pressure levels at working locations, the operators who are at higher risk are identified and selected for the next step. The program is devised in MATLAB such that the parameters can be easily defined and changed for comparison. The final results are structured into four sections that specify an appropriate abatement method for each operator and machine, the minimum allowance time for high-risk operators, the damping material required for enclosures, and the minimum total cost of these treatments. The validity of the input data, together with proper settings in the optimization model, ensures that the final solution is practical and economically reasonable.

  5. Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization lies primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential for efficient computation with very large numbers of concurrently operating processors.
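
    The velocity-and-position update that defines particle swarm optimization can be written in a few lines. The sketch below is a generic, minimization-oriented loop under assumed parameter values (inertia w, acceleration coefficients c1/c2, swarm size), not the authors' improved algorithm.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for a continuous objective (minimize)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))       # positions
    v = np.zeros_like(x)                                    # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = pso(lambda z: np.sum(z**2), np.array([[-5.0, 5.0]] * 5))
```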

  6. Resource Costs Give Optimization the Edge

    Treesearch

    C.M. Eddins

    1996-01-01

    To optimize or not to optimize - that is the question practically every sawmill has considered at some time or another. Edger and trimmer optimization is a particularly hot topic, as these are among the most wasteful areas of the sawmill because trimmer and edger operators traditionally tend to over edge or trim. By its very definition, optimizing equipment seeks to...

  7. Remediation System Design Optimization: Field Demonstration at the Umatilla Army Depot

    NASA Astrophysics Data System (ADS)

    Zheng, C.; Wang, P. P.

    2002-05-01

    Since the early 1980s, many researchers have shown that the simulation-optimization (S/O) approach is superior to the traditional trial-and-error method for designing cost-effective groundwater pump-and-treat systems. However, the application of the S/O approach to real field problems has remained limited. This paper describes the application of a new general simulation-optimization code to optimize an existing pump-and-treat system at the Umatilla Army Depot in Oregon, as part of a field demonstration project supported by the Environmental Security Technology Certification Program (ESTCP). Two optimization formulations were developed to minimize the total capital and operational costs under the current and possibly expanded treatment plant capacities. A third formulation was developed to minimize the total contaminant mass of RDX and TNT remaining in the shallow aquifer by the end of the project duration. For the first two formulations, this study produced an optimal pumping strategy that would achieve the cleanup goal in 4 years with a total cost of 1.66 million US dollars in net present value. For comparison, the existing design in operation was calculated to require 17 years for cleanup with a total cost of 3.83 million US dollars in net present value. Thus, the optimal pumping strategy represents a reduction of 13 years in cleanup time and a reduction of 56.6 percent in the expected total expenditure. For the third formulation, this study identified an optimal dynamic pumping strategy that would reduce the total mass remaining in the shallow aquifer by 89.5 percent compared with that calculated for the existing design. This study shows that, in spite of their intensive computational requirements, global optimization techniques, including tabu search and genetic algorithms, can be applied successfully to large-scale field problems involving multiple contaminants and complex hydrogeological conditions.

  8. Switching and optimizing control for coal flotation process based on a hybrid model

    PubMed Central

    Dong, Zhiyong; Wang, Ranfeng; Fan, Minqiang; Fu, Xiang

    2017-01-01

    Flotation is an important part of coal preparation, and the flotation column is widely applied as efficient flotation equipment. This process is complex and affected by many factors, with the froth depth and reagent dosage being two of the most important and frequently manipulated variables. This paper proposes a new method of switching and optimizing control for the coal flotation process. A hybrid model is built and evaluated using industrial data. First, wavelet analysis and principal component analysis (PCA) are applied for signal pre-processing. Second, a control model for optimizing the set point of the froth depth is constructed based on fuzzy control, and a control model is designed to optimize the reagent dosages based on an expert system. Finally, the least squares-support vector machine (LS-SVM) is used to identify the operating conditions of the flotation process and to select one of the two models (froth depth or reagent dosage) for subsequent operation according to the condition parameters. The hybrid model is developed and evaluated on an industrial coal flotation column and exhibits satisfactory performance. PMID:29040305

  9. Model-based identification of optimal operating conditions for amino acid simulated moving bed enantioseparation using a macrocyclic glycopeptide stationary phase.

    PubMed

    Fuereder, Markus; Majeed, Imthiyas N; Panke, Sven; Bechtold, Matthias

    2014-06-13

    Teicoplanin aglycone columns allow efficient separation of amino acid enantiomers in aqueous mobile phases and enable robust and predictable simulated moving bed (SMB) separation of racemic methionine despite a dependency of the adsorption behavior on the column history (memory effect). In this work we systematically investigated the influence of the mobile phase (methanol content) and temperature on SMB performance using a model-based optimization approach that accounts for methionine solubility, adsorption behavior and back pressure. Adsorption isotherms became more favorable with increasing methanol content but methionine solubility was decreased and back pressure increased. Numerical optimization suggested a moderate methanol content (25-35%) for most efficient operation. Higher temperature had a positive effect on specific productivity and desorbent requirement due to higher methionine solubility, lower back pressure and virtually invariant selectivity at high loadings of racemic methionine. However, process robustness (defined as a difference in flow rate ratios) decreased strongly with increasing temperature to the extent that any significant increase in temperature over 32°C will likely result in operating points that cannot be realized technically even with the lab-scale piston pump SMB system employed in this study. Copyright © 2014. Published by Elsevier B.V.

  10. Optimal Cut-Offs of Homeostasis Model Assessment of Insulin Resistance (HOMA-IR) to Identify Dysglycemia and Type 2 Diabetes Mellitus: A 15-Year Prospective Study in Chinese.

    PubMed

    Lee, C H; Shih, A Z L; Woo, Y C; Fong, C H Y; Leung, O Y; Janus, E; Cheung, B M Y; Lam, K S L

    The optimal reference range of the homeostasis model assessment of insulin resistance (HOMA-IR) in a normal Chinese population has not been clearly defined. Here we address this issue using the Hong Kong Cardiovascular Risk Factor Prevalence Study (CRISPS), a prospective population-based cohort study with long-term follow-up. In this study, normal glucose tolerance (NGT), impaired fasting glucose (IFG), impaired glucose tolerance (IGT) and type 2 diabetes mellitus (T2DM) were defined according to the 1998 World Health Organization criteria. Dysglycemia referred to IFG, IGT or T2DM. This study comprised two parts. Part one was a cross-sectional study involving 2,649 Hong Kong Chinese subjects, aged 25-74 years, at baseline CRISPS-1 (1995-1996). The optimal HOMA-IR cut-offs for dysglycemia and T2DM were determined by the receiver-operating characteristic (ROC) curve. Part two was a prospective study involving 872 subjects who had persistent NGT at CRISPS-4 (2010-2012) after 15 years of follow-up. At baseline, the optimal HOMA-IR cut-offs to identify dysglycemia and T2DM were 1.37 (AUC = 0.735; 95% confidence interval [CI] = 0.713-0.758; sensitivity [Se] = 65.6%, specificity [Sp] = 71.3%) and 1.97 (AUC = 0.807; 95% CI = 0.777-0.886; Se = 65.5%, Sp = 82.9%), respectively. These cut-offs, derived from the cross-sectional study at baseline, corresponded closely to the 75th (1.44) and 90th (2.03) percentiles, respectively, of the HOMA-IR reference range derived from the prospective study of subjects with persistent NGT. HOMA-IR cut-offs of 1.4 and 2.0, which discriminated dysglycemia and T2DM respectively from NGT in Southern Chinese, can be usefully employed as references in clinical research involving the assessment of insulin resistance.
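
    The abstract does not state which criterion was used to pick the ROC-derived cut-offs; a common choice is to maximize Youden's J (sensitivity + specificity − 1), sketched below with scikit-learn. Variable names such as homa_ir and dysglycemia are hypothetical placeholders, not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(y_true, score):
    """Cut-off that maximizes Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, score)
    j = tpr - fpr
    best = np.argmax(j)
    return thresholds[best], tpr[best], 1 - fpr[best], roc_auc_score(y_true, score)

# Hypothetical usage with HOMA-IR scores and 0/1 dysglycemia labels:
# cutoff, sensitivity, specificity, auc = optimal_cutoff(dysglycemia, homa_ir)
```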

  11. Increasing time to postoperative stereotactic radiation therapy for patients with resected brain metastases: investigating clinical outcomes and identifying predictors associated with time to initiation.

    PubMed

    Yusuf, Mehran B; Amsbaugh, Mark J; Burton, Eric; Nelson, Megan; Williams, Brian; Koutourousiou, Maria; Nauta, Haring; Woo, Shiao

    2018-02-01

    We sought to determine the impact of time to initiation (TTI) of post-operative radiosurgery on clinical outcomes for patients with resected brain metastases and to identify predictors associated with TTI. All patients with resected brain metastases treated with postoperative SRS or fractionated stereotactic radiation therapy (fSRT) from 2012 to 2016 at a single institution were reviewed. TTI was defined as the interval from resection to first day of radiosurgery. Receiver operating characteristic (ROC) curves were used to identify an optimal threshold for TTI with respect to local failure (LF). Survival outcomes were estimated using the Kaplan-Meier method and analyzed using the log-rank test and Cox proportional hazards models. Logistic regression models were used to identify factors associated with ROC-determined TTI covariates. A total of 79 resected lesions from 73 patients were evaluated. An ROC curve of LF and TTI identified an optimal threshold for TTI of 30.5 days, with an area under the curve of 0.637. TTI > 30 days was associated with an increased hazard of LF (HR 4.525, CI 1.239-16.527) but was not significantly associated with survival (HR 1.002, CI 0.547-1.823) or distant brain failure (DBF, HR 1.943, CI 0.989-3.816). Fifteen patients (20.5%) required post-operative inpatient rehabilitation. Post-operative rehabilitation was associated with TTI > 30 days (OR 1.48, CI 1.142-1.922). In our study of resected brain metastases, longer time to initiation of post-operative radiosurgery was associated with increased local failure. Ideally, post-op SRS should be initiated within 30 days of resection if feasible.

  12. Computer package for the design and optimization of absorption air conditioning system operated by solar energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofrata, H.; Khoshaim, B.; Megahed, M.

    1980-12-01

    In this paper a computer package for the design and optimization of the simple Li-Br absorption air conditioning system, operated by solar energy, is developed in order to study its performance. This was necessary, as a first step, before carrying out any computations regarding the dual system (1-3). The computer package can examine any parameter that controls the system, namely the generator, evaporator, condenser and absorber temperatures and the pumping factor. The output may be tabulated and also fed to the graph plotter. The flow chart of the programme is explained in an easy way and a typical example is included.

  13. Optimization of a near-field thermophotovoltaic system operating at low temperature and large vacuum gap

    NASA Astrophysics Data System (ADS)

    Lim, Mikyung; Song, Jaeman; Kim, Jihoon; Lee, Seung S.; Lee, Ikjin; Lee, Bong Jae

    2018-05-01

    The present work achieves a strong enhancement in the performance of a near-field thermophotovoltaic (TPV) system operating at low temperature and large vacuum-gap width by introducing a hyperbolic-metamaterial (HMM) emitter, multilayered graphene, and an Au backside reflector. Design variables for the HMM emitter and the multilayered-graphene-covered TPV cell are optimized for maximizing the power output of the near-field TPV system with the genetic algorithm. The near-field TPV system with the optimized configuration yields a 24.2-fold enhancement in power output compared with the system with a bulk emitter and a bare TPV cell. Through the analysis of the radiative heat transfer together with surface-plasmon-polariton (SPP) dispersion curves, it is found that coupling of SPPs generated from both the HMM emitter and the multilayered-graphene-covered TPV cell plays a key role in the substantial increase in heat transfer even at a 200-nm vacuum gap. Further, the backside reflector at the bottom of the TPV cell significantly increases not only the conversion efficiency but also the power output by generating additional polariton modes that can be readily coupled with the existing SPPs of the HMM emitter and the multilayered-graphene-covered TPV cell.

  14. Discriminative motif optimization based on perceptron training

    PubMed Central

    Patel, Ronak Y.; Stormo, Gary D.

    2014-01-01

    Motivation: Generating accurate transcription factor (TF) binding site motifs from data generated using next-generation sequencing, especially ChIP-seq, is challenging. The challenge arises because a typical experiment reports a large number of sequences bound by a TF, and the length of each sequence is relatively long. Most traditional motif finders are slow in handling such enormous amounts of data. To overcome this limitation, tools have been developed that trade accuracy for speed by using heuristic discrete search strategies or limited optimization of identified seed motifs. However, such strategies may not fully use the information in the input sequences to generate motifs. Such motifs often form good seeds and can be further improved with appropriate scoring functions and rapid optimization. Results: We report a tool named discriminative motif optimizer (DiMO). DiMO takes a seed motif along with a positive and a negative database and improves the motif based on a discriminative strategy. We use the area under the receiver-operating characteristic curve (AUC) as a measure of the discriminating power of motifs and a perceptron-training-based strategy that maximizes AUC rapidly in a discriminative manner. Using DiMO, on a large test set of 87 TFs from human, drosophila and yeast, we show that it is possible to significantly improve motifs identified by nine motif finders. The motifs are generated/optimized using training sets and evaluated on test sets. The AUC is improved for almost 90% of the TFs on test sets and the magnitude of increase is up to 39%. Availability and implementation: DiMO is available at http://stormo.wustl.edu/DiMO Contact: rpatel@genetics.wustl.edu, ronakypatel@gmail.com PMID:24369152

  15. Development of a ginkgo biloba fingerprint chromatogram with UV and evaporative light scattering detection and optimization of the evaporative light scattering detector operating conditions.

    PubMed

    van Nederkassel, A M; Vijverman, V; Massart, D L; Vander Heyden, Y

    2005-09-02

    A fingerprint chromatogram of a standardized Ginkgo biloba extract is developed on a monolithic silica column using a ternary gradient containing water, iso-propanol and tetrahydrofuran. For detection, UV and evaporative light scattering (ELS) detectors are used, the latter allowing detection of poorly UV-absorbing compounds such as the ginkgolides (A-C and J) and bilobalide in the extract. The complementary information between the UV and ELS fingerprints is evaluated. The ELS detector used in this study can operate in an impactor 'on' or 'off' mode. For each mode, the operating conditions, such as the nebulizing gas flow rate, the drift tube temperature and the gain, are optimized by means of three-level screening designs to obtain the best signal-to-noise (S/N) ratio in the final ELS fingerprint chromatogram. In both impactor modes, very similar S/N ratios are obtained at the nominal levels of the design. However, optimization of the operating conditions resulted, for both impactor modes, in a significant increase in S/N ratios compared to the initially evaluated conditions, obtained from the detector software.

  16. Optimal Power Flow Pursuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea

    This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable on low-cost microcontrollers that accompany the power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method makes it possible to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.

  17. Optimization of European call options considering physical delivery network and reservoir operation rules

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.

    2011-10-01

    This paper develops alternative strategies for European call options for water purchase under hydrological uncertainties that can be used by water resources managers for decision making. Each alternative strategy maximizes its own objective over a selected sequence of future hydrology that is characterized by exceedance probability. Water trade provides flexibility and enhances water distribution system reliability. However, water trade between two parties in a regional water distribution system involves many issues, such as the delivery network, reservoir operation rules, storage space, demand, water availability, uncertainty, and any existing contracts. An option is a security giving the right to buy or sell an asset; in our case, the asset is water. We extend a flow path-based water distribution model to include reservoir operation rules. The model simultaneously considers both the physical distribution network and the relationships between water sellers and buyers. We first test the model extension. Then we apply the proposed optimization model for European call options to the Tainan water distribution system in southern Taiwan. The formulation lends itself to a mixed integer linear programming model. We use the weighting method to formulate a composite function for the multiobjective problem. The proposed methodology provides water resources managers with an overall picture of water trade strategies and the consequence of each strategy. The results from the case study indicate that the strategy associated with a streamflow exceedance probability of 50% or smaller should be adopted as the reference strategy for the Tainan water distribution system.

  18. Fuel management optimization using genetic algorithms and code independence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1994-12-31

    Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
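
    The three operators named above can be illustrated for a permutation-coded loading pattern. The sketch below shows one common set (tournament selection, an order-preserving crossover, and a swap mutation); it is a generic illustration under assumed encodings and rates, not the implementation from this report.

```python
import random

def tournament_select(population, fitness, k=3):
    """Survival-of-the-fittest: keep the best of k randomly drawn candidates."""
    picks = random.sample(range(len(population)), k)
    return population[max(picks, key=lambda i: fitness[i])]

def order_crossover(parent_a, parent_b):
    """Order-preserving crossover for permutation-coded loading patterns:
    copy a slice from parent_a, then fill the gaps in parent_b's order."""
    n = len(parent_a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = parent_a[i:j]
    fill = [g for g in parent_b if g not in child[i:j]]
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill.pop(0)
    return child

def swap_mutation(pattern, rate=0.05):
    """Small random change: swap two positions with probability `rate`."""
    pattern = list(pattern)
    if random.random() < rate:
        i, j = random.sample(range(len(pattern)), 2)
        pattern[i], pattern[j] = pattern[j], pattern[i]
    return pattern
```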

  19. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  20. Principles of a clean operating room environment.

    PubMed

    Howard, James L; Hanssen, Arlen D

    2007-10-01

    Optimizing the operating room environment is necessary to minimize the prevalence of arthroplasty infection. Reduction of bacterial contamination in the operating room should be a primary focus of all members of the operating room team. However, in recent years, there has been a decline in the emphasis of the basic principles of antisepsis in many operating rooms. The purpose of this review is to highlight important considerations for optimizing the operating room environment. These principles should be actively promoted by orthopedic surgeons in their operating rooms as part of a comprehensive approach to minimizing arthroplasty infection.

  1. Optimization of operating parameters of hybrid vertical down-flow constructed wetland systems for domestic sewerage treatment.

    PubMed

    Huang, Zhujian; Zhang, Xianning; Cui, Lihua; Yu, Guangwei

    2016-09-15

    In this work, three hybrid vertical down-flow constructed wetland (HVDF-CW) systems with different compound substrates were fed with domestic sewage, and their pollutant removal performance under different hydraulic loadings and step-feeding ratios was investigated. The results showed that the hydraulic loading and step-feeding ratio were two crucial factors determining the removal efficiency of most pollutants, while substrate type only significantly affected the removal of COD and NH4(+)-N. Generally, the lower the hydraulic loading, the better the removal efficiency for all contaminants except TN. By contrast, increasing the step-feeding ratio slightly reduced the removal of ammonium and TP but clearly promoted TN removal. Therefore, optimal operation of these CWs could be achieved with low hydraulic loading combined with a 50% step-feeding ratio when TN removal is the priority, whereas medium or low hydraulic loading without step-feeding would be suitable when TN removal is not taken into consideration. The results obtained in this study provide a guideline for the design and optimization of hybrid vertical flow constructed wetland systems to improve pollutant removal from domestic sewage. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Anesthesiologists’ and surgeons’ perceptions about routine pre-operative testing in low-risk patients: application of the Theoretical Domains Framework (TDF) to identify factors that influence physicians’ decisions to order pre-operative tests

    PubMed Central

    2012-01-01

    Background Routine pre-operative tests for anesthesia management are often ordered by both anesthesiologists and surgeons for healthy patients undergoing low-risk surgery. The Theoretical Domains Framework (TDF) was developed to investigate determinants of behaviour and identify potential behaviour change interventions. In this study, the TDF is used to explore anaesthesiologists’ and surgeons’ perceptions of ordering routine tests for healthy patients undergoing low-risk surgery. Methods Sixteen clinicians (eleven anesthesiologists and five surgeons) throughout Ontario were recruited. An interview guide based on the TDF was developed to identify beliefs about pre-operative testing practices. Content analysis of physicians’ statements into the relevant theoretical domains was performed. Specific beliefs were identified by grouping similar utterances of the interview participants. Relevant domains were identified by noting the frequencies of the beliefs reported, presence of conflicting beliefs, and perceived influence on the performance of the behaviour under investigation. Results Seven of the twelve domains were identified as likely relevant to changing clinicians’ behaviour about pre-operative test ordering for anesthesia management. Key beliefs were identified within these domains including: conflicting comments about who was responsible for the test-ordering (Social/professional role and identity); inability to cancel tests ordered by fellow physicians (Beliefs about capabilities and social influences); and the problem with tests being completed before the anesthesiologists see the patient (Beliefs about capabilities and Environmental context and resources). Often, tests were ordered by an anesthesiologist based on who may be the attending anesthesiologist on the day of surgery while surgeons ordered tests they thought anesthesiologists may need (Social influences). There were also conflicting comments about the potential consequences associated with

  3. Optimal Design of Wireless Power Transmission Links for Millimeter-Sized Biomedical Implants.

    PubMed

    Ahn, Dukju; Ghovanloo, Maysam

    2016-02-01

    This paper presents a design methodology for RF power transmission to millimeter-sized implantable biomedical devices. The optimal operating frequency and coil geometries are found such that power transfer efficiency (PTE) and tissue-loss-constrained allowed power are maximized. We define receiver power reception susceptibility (Rx-PRS) and transmitter figure of merit (Tx-FoM) such that their multiplication yields the PTE. Rx-PRS and Tx-FoM define the roles of the Rx and Tx in the PTE, respectively. First, the optimal Rx coil geometry and operating frequency range are identified such that the Rx-PRS is maximized for given implant constraints. Since the Rx is very small and has lesser design freedom than the Tx, the overall operating frequency is restricted mainly by the Rx. Rx-PRS identifies such operating frequency constraint imposed by the Rx. Secondly, the Tx coil geometry is selected such that the Tx-FoM is maximized under the frequency constraint at which the Rx-PRS was saturated. This aligns the target frequency range of Tx optimization with the frequency range at which Rx performance is high, resulting in the maximum PTE. Finally, we have found that even in the frequency range at which the PTE is relatively flat, the tissue loss per unit delivered power can be significantly different for each frequency. The Rx-PRS can predict the frequency range at which the tissue loss per unit delivered power is minimized while PTE is maintained high. In this way, frequency adjustment for the PTE and tissue-loss-constrained allowed power is realized by characterizing the Rx-PRS. The design procedure was verified through full-wave electromagnetic field simulations and measurements using de-embedding method. A prototype implant, 1 mm in diameter, achieved PTE of 0.56% ( -22.5 dB) and power delivered to load (PDL) was 224 μW at 200 MHz with 12 mm Tx-to-Rx separation in the tissue environment.

  4. Multiobjective Particle Swarm Optimization for the optimal design of photovoltaic grid-connected systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kornelakis, Aris

    2010-12-15

    Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology suggests the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime are both maximized. The objective function describing the economic benefit of the proposed optimization process is the system's lifetime total net profit, which is calculated according to the Net Present Value (NPV) method. The second objective function, which corresponds to the environmental benefit, equals the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the optimal number of PV modules, their optimal tilt angle, the optimal placement of the PV modules within the available installation area and the optimal distribution of the PV modules among the DC/AC converters. (author)

  5. Optimization of a Tube Hydroforming Process

    NASA Astrophysics Data System (ADS)

    Abedrabbo, Nader; Zafar, Naeem; Averill, Ron; Pourboghrat, Farhang; Sidhu, Ranny

    2004-06-01

    An approach is presented to optimize a tube hydroforming process using a Genetic Algorithm (GA) search method. The goal of the study is to maximize formability by identifying the optimal internal hydraulic pressure and feed rate while satisfying the forming limit diagram (FLD). The optimization software HEEDS is used in combination with the nonlinear structural finite element code LS-DYNA to carry out the investigation. In particular, a sub-region of a circular tube blank is formed into a square die. Compared to the best results of a manual optimization procedure, a 55% increase in expansion was achieved when using the pressure and feed profiles identified by the automated optimization procedure.

  6. Hybrid Multi-Objective Optimization of Folsom Reservoir Operation to Maximize Storage in Whole Watershed

    NASA Astrophysics Data System (ADS)

    Goharian, E.; Gailey, R.; Maples, S.; Azizipour, M.; Sandoval Solis, S.; Fogg, G. E.

    2017-12-01

    Drought incidents and growing water scarcity in California have a profound effect on human, agricultural, and environmental water needs. California has experienced multi-year droughts, which have caused groundwater overdraft, dropping groundwater levels, and dwindling storage in major reservoirs. These concerns call for a stringent evaluation of future water resources sustainability and security in the state. To answer this call, the Sustainable Groundwater Management Act (SGMA) was passed in 2014 to achieve sustainable groundwater management in California by 2042. SGMA identifies managed aquifer recharge (MAR) as a key management option, especially in areas with high intra- and inter-annual variation in water availability, to secure the refill of underground water storage and the return of groundwater quality to a desirable condition. Hybrid optimization of an integrated water resources system provides an opportunity to adapt surface reservoir operations to enhance groundwater recharge. Here, Folsom Reservoir is re-operated with two objectives: maximizing storage in the whole American-Cosumnes watershed and maximizing hydropower generation from the reservoir. While a linear programming (LP) module maximizes total groundwater recharge by distributing and spreading water over suitable lands in the basin, a Non-dominated Sorting Genetic Algorithm II (NSGA-II) layer above it controls releases from the reservoir to secure hydropower generation, carry-over storage in the reservoir, water available for replenishment, and downstream water requirements. The preliminary results show additional releases from the reservoir for groundwater recharge during high-flow seasons. Moreover, trade-offs between the objectives indicate that the new operation performs satisfactorily in increasing storage in the basin, with insignificant effects on the other objectives.
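
    A minimal sketch of the inner LP step under assumed data: given a release volume decided by the outer NSGA-II layer, allocate water across candidate recharge parcels subject to per-parcel infiltration capacities. The parcel capacities, the release volume, and the single coupling constraint are hypothetical placeholders, not values or the exact formulation from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: infiltration capacity of each candidate recharge parcel
# (thousand acre-feet per month) and the total water made available by the
# reservoir release chosen in the outer NSGA-II layer.
capacity = np.array([12.0, 8.0, 20.0, 5.0])
available_release = 30.0

# Maximize total recharge  ->  minimize the negative sum of allocations.
c = -np.ones(len(capacity))
# Single coupling constraint: allocations cannot exceed the available release.
A_ub = np.ones((1, len(capacity)))
b_ub = [available_release]
bounds = [(0.0, cap) for cap in capacity]   # per-parcel capacity limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
allocation = res.x          # recharge volume sent to each parcel
total_recharge = -res.fun   # objective value reported back to the GA layer
```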

  7. Surgical team turnover and operative time: An evaluation of operating room efficiency during pulmonary resection.

    PubMed

    Azzi, Alain Joe; Shah, Karan; Seely, Andrew; Villeneuve, James Patrick; Sundaresan, Sudhir R; Shamji, Farid M; Maziak, Donna E; Gilbert, Sebastien

    2016-05-01

    Health care resources are costly and should be used judiciously and efficiently. Predicting the duration of surgical procedures is key to optimizing operating room resources. Our objective was to identify factors influencing operative time, particularly surgical team turnover. We performed a single-institution, retrospective review of lobectomy operations. Univariate and multivariate analyses were performed to evaluate the impact of different factors on surgical time (skin-to-skin) and total procedure time. Staff turnover within the nursing component of the surgical team was defined as the number of instances any nurse had to leave the operating room over the total number of nurses involved in the operation. A total of 235 lobectomies were performed by 5 surgeons, most commonly for lung cancer (95%). On multivariate analysis, percent forced expiratory volume in 1 second, surgical approach, and lesion size had a significant effect on surgical time. Nursing turnover was associated with a significant increase in surgical time (53.7 minutes; 95% confidence interval, 6.4-101; P = .026) and total procedure time (83.2 minutes; 95% confidence interval, 30.1-136.2; P = .002). Active management of surgical team turnover may be an opportunity to improve operating room efficiency when the surgical team is engaged in a major pulmonary resection. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
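
    A minimal worked example of the turnover metric defined above (instances of any nurse leaving the operating room divided by the number of nurses involved in the operation), using made-up numbers rather than study data:

        # Hypothetical numbers, not study data: the nursing turnover index defined above.
        nurse_exit_events = 5        # times any nurse left the operating room during the case
        nurses_involved = 4          # total nurses who took part in the operation
        turnover = nurse_exit_events / nurses_involved
        print("nursing turnover index: %.2f exits per nurse" % turnover)   # 1.25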

  8. Identifying Balance Measures Most Likely to Identify Recent Falls.

    PubMed

    Criter, Robin E; Honaker, Julie A

    2016-01-01

    Falls sustained by older adults are an increasing health care issue. Early identification of those at risk for falling can lead to successful prevention of falls. Balance complaints are common among individuals who fall or are at risk for falling. The purpose of this study was to evaluate the clinical utility of a multifaceted balance protocol used for fall risk screening, with the hypothesis that this protocol would successfully identify individuals who had a recent fall (within the previous 12 months). This is a retrospective review of 30 individuals who self-referred for a free fall risk screening. Measures included case history, Activities-Specific Balance Confidence Scale, modified Clinical Test of Sensory Interaction on Balance, Timed Up and Go test, and Dynamic Visual Acuity. Statistical analyses were focused on the ability of the test protocol to identify a fall within the past 12 months and included descriptive statistics, clinical utility indices, logistic regression, receiver operating characteristic curve, area under the curve analysis, effect size (Cohen d), and Spearman correlation coefficients. All individuals who self-referred for this free screening had current imbalance complaints, and were typically women (70%), had a mean age of 77.2 years, and had a fear of falling (70%). Almost half (46.7%) reported at least 1 lifetime fall and 40.0% within the past 12 months. Regression analysis suggested that the Timed Up and Go test was the most important indicator of a recent fall. A cutoff score of 12 or more seconds was optimal (sensitivity: 83.3%; specificity: 61.1%). Older adults with current complaints of imbalance have a higher rate of falls, fall-related injury, and fear of falling than the general community-dwelling public. The Timed Up and Go test is useful for determining recent fall history in individuals with imbalance.
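
    The sketch below illustrates, with synthetic records rather than the study's data, how sensitivity and specificity are computed for a Timed Up and Go cut-off such as 12 seconds when screening for a recent fall:

        # Hedged sketch: evaluating a "TUG >= cutoff" rule against recent-fall status.
        # The records are synthetic examples, not the study's data.
        def sens_spec(records, cutoff):
            tp = sum(1 for t, fell in records if fell and t >= cutoff)
            fn = sum(1 for t, fell in records if fell and t < cutoff)
            tn = sum(1 for t, fell in records if not fell and t < cutoff)
            fp = sum(1 for t, fell in records if not fell and t >= cutoff)
            return tp / (tp + fn), tn / (tn + fp)

        # (TUG time in seconds, fell within past 12 months)
        data = [(9.5, False), (11.0, False), (12.4, True), (13.8, True),
                (10.2, True), (15.1, True), (11.8, False), (12.0, False)]

        for cutoff in (10, 11, 12, 13, 14):
            se, sp = sens_spec(data, cutoff)
            print("cutoff %2d s: sensitivity %.2f, specificity %.2f" % (cutoff, se, sp))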

  9. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    NASA Astrophysics Data System (ADS)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is the process of finding the parameter (or parameters) that delivers an optimal value of an objective function. Seeking an optimal generic model for optimization is a computer science problem that has been studied by numerous researchers. A generic model is one that can be applied to solve a variety of optimization problems. The generic model was constructed using an object-oriented method. Two optimization methods, simulated annealing and hill climbing, were implemented within the model and compared. The results show that both methods reached the same objective-function value and that the hill-climbing-based model had the shorter running time.
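
    The comparison described above can be illustrated with a small, self-contained sketch: hill climbing and simulated annealing are run on the same toy multi-parameter objective and their final objective values and running times are reported. The objective function and parameter settings below are illustrative assumptions, not the paper's generic model.

        # Illustrative comparison, not the paper's generic model: hill climbing vs. simulated
        # annealing minimising the same toy multi-parameter objective.
        import math, random, time

        def objective(x):
            # toy multimodal test function of several parameters
            return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

        def neighbour(x, step=0.1):
            return [xi + random.uniform(-step, step) for xi in x]

        def hill_climbing(x, iters=20000):
            fx = objective(x)
            for _ in range(iters):
                y = neighbour(x)
                fy = objective(y)
                if fy < fx:                                  # accept improving moves only
                    x, fx = y, fy
            return fx

        def simulated_annealing(x, iters=20000, t0=5.0):
            fx = objective(x)
            for k in range(iters):
                t = t0 * (1.0 - k / iters) + 1e-9            # linear cooling schedule
                y = neighbour(x)
                fy = objective(y)
                if fy < fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy                            # occasionally accept worse moves
            return fx

        if __name__ == "__main__":
            x0 = [random.uniform(-5, 5) for _ in range(8)]
            for name, algo in (("hill climbing", hill_climbing),
                               ("simulated annealing", simulated_annealing)):
                start = time.perf_counter()
                best = algo(list(x0))
                print("%-20s best=%.3f  time=%.2f s" % (name, best, time.perf_counter() - start))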

  10. Optimizing Storage and Renewable Energy Systems with REopt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elgqvist, Emma M.; Anderson, Katherine H.; Cutler, Dylan S.

    Under the right conditions, behind the meter (BTM) storage combined with renewable energy (RE) technologies can provide both cost savings and resiliency. Storage economics depend not only on technology costs and avoided utility rates, but also on how the technology is operated. REopt, a model developed at NREL, can be used to determine the optimal size and dispatch strategy for BTM or off-grid applications. This poster gives an overview of three applications of REopt: Optimizing BTM Storage and RE to Extend Probability of Surviving Outage, Optimizing Off-Grid Energy System Operation, and Optimizing Residential BTM Solar 'Plus'.

  11. Optimization of optical systems.

    PubMed

    Champagne, E B

    1966-11-01

    The power signal-to-noise ratios for coherent and noncoherent optical detection are presented, with the expression for noncoherent detection being examined in detail. It is found that for the long range optical system to compete with its microwave counterpart it is necessary to optimize the optical system. The optical system may be optimized by using coherent detection, or noncoherent detection if the signal is the dominant noise factor. A design procedure is presented which, in principle, always allows one to obtain signal shot-noise limited operation with noncoherent detection if pulsed operation is used. The technique should make feasible extremely long range, high data rate systems of relatively simple design.

  12. Spontaneous swallowing frequency has potential to identify dysphagia in acute stroke.

    PubMed

    Crary, Michael A; Carnaby, Giselle D; Sia, Isaac; Khanna, Anna; Waters, Michael F

    2013-12-01

    Spontaneous swallowing frequency has been described as an index of dysphagia in various health conditions. This study evaluated the potential of spontaneous swallow frequency analysis as a screening protocol for dysphagia in acute stroke. In a cohort of 63 acute stroke cases, swallow frequency rates (swallows per minute [SPM]) were compared with stroke and swallow severity indices, age, time from stroke to assessment, and consciousness level. Mean differences in SPM were compared between patients with versus without clinically significant dysphagia. Receiver operating characteristic curve analysis was used to identify the optimal threshold in SPM, which was compared with a validated clinical dysphagia examination for identification of dysphagia cases. Time series analysis was used to identify the minimally adequate time period to complete spontaneous swallow frequency analysis. SPM correlated significantly with stroke and swallow severity indices but not with age, time from stroke onset, or consciousness level. Patients with dysphagia demonstrated significantly lower SPM rates. SPM differed by dysphagia severity. Receiver operating characteristic curve analysis yielded a threshold of SPM≤0.40 that identified dysphagia (per the criterion referent) with 0.96 sensitivity, 0.68 specificity, and 0.96 negative predictive value. Time series analysis indicated that a 5- to 10-minute sampling window was sufficient to calculate spontaneous swallow frequency to identify dysphagia cases in acute stroke. Spontaneous swallowing frequency presents high potential to screen for dysphagia in acute stroke without the need for trained, available personnel.

  13. Evaluating and optimizing the operation of the hydropower system in the Upper Yellow River: A general LINGO-based integrated framework.

    PubMed

    Si, Yuan; Li, Xiang; Yin, Dongqin; Liu, Ronghua; Wei, Jiahua; Huang, Yuefei; Li, Tiejian; Liu, Jiahong; Gu, Shenglong; Wang, Guangqian

    2018-01-01

    The hydropower system in the Upper Yellow River (UYR), one of the largest hydropower bases in China, plays a vital role in the energy structure of the Qinghai Power Grid. Due to management difficulties, there is still considerable room for improvement in the joint operation of this system. This paper presents a general LINGO-based integrated framework to study the operation of the UYR hydropower system. The framework is easy to use for operators with little experience in mathematical modeling, takes full advantage of LINGO's capabilities (such as its solving capacity and multi-threading ability), and packs its three layers (the user layer, the coordination layer, and the base layer) together into an integrated solution that is robust and efficient and represents an effective tool for data/scenario management and analysis. The framework is general and can be easily transferred to other hydropower systems with minimal effort, and it can be extended as the base layer is enriched. The multi-objective model that represents the trade-off between power quantity (i.e., maximum energy production) and power reliability (i.e., firm output) of hydropower operation has been formulated. With equivalent transformations, the optimization problem can be solved by the nonlinear programming (NLP) solvers embedded in the LINGO software, such as the General Solver, the Multi-start Solver, and the Global Solver. Both simulation and optimization are performed to verify the model's accuracy and to evaluate the operation of the UYR hydropower system. A total of 13 hydropower plants currently in operation are involved, including two pivotal storage reservoirs on the Yellow River, which are the Longyangxia Reservoir and the Liujiaxia Reservoir. Historical hydrological data from multiple years (2000-2010) are provided as input to the model for analysis. The results are as follows. 1) Assuming that the reservoirs are all in operation (in fact, some reservoirs were not operational or did not

  14. Research on crude oil storage and transportation based on optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Xuhua

    2018-04-01

    At present, optimization theory and methods are widely used in the scheduling and optimal operation of complex production systems. Based on the C++Builder 6 development platform, the theoretical research results are implemented on a computer. A simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes. It can optimize the scheduling scheme of the crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of floating-roof oil storage tanks is developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling costs of the storage and transportation system can be reduced by 9.1%. Therefore, this method can realize safe and stable operation of the crude oil storage and transportation system.

  15. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining.

    PubMed

    Salehi, Mojtaba; Bahreininejad, Ardeshir

    2011-08-01

    Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously.
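
    As a hedged illustration of the two-stage idea (feasible sequences generated under precedence constraints, then a genetic search over the remaining choices), the sketch below uses a hypothetical five-operation part with invented precedence pairs and machine-cost tables; it is not the paper's case study, and for brevity the genetic step mutates only the machine assignment.

        # Hypothetical two-stage sketch: feasible operation sequences are generated under
        # precedence ("order") constraints, then a simple genetic search minimises a toy cost
        # over machine selection. Operations, precedence pairs and costs are invented.
        import random

        OPS = ["drill", "rough_mill", "finish_mill", "tap", "deburr"]
        PRECEDENCE = [("drill", "tap"), ("rough_mill", "finish_mill"), ("finish_mill", "deburr")]
        MACHINE_COST = {"drill": {"M1": 4, "M2": 6}, "rough_mill": {"M1": 9, "M2": 7},
                        "finish_mill": {"M1": 8, "M2": 8}, "tap": {"M1": 3, "M2": 5},
                        "deburr": {"M1": 2, "M2": 2}}
        CHANGEOVER = 1.5   # toy penalty for switching machines between consecutive operations

        def feasible(seq):
            pos = {op: i for i, op in enumerate(seq)}
            return all(pos[a] < pos[b] for a, b in PRECEDENCE)

        def random_plan():
            while True:
                seq = random.sample(OPS, len(OPS))
                if feasible(seq):
                    return seq, [random.choice(list(MACHINE_COST[op])) for op in seq]

        def cost(plan):
            seq, machines = plan
            c = sum(MACHINE_COST[op][m] for op, m in zip(seq, machines))
            return c + CHANGEOVER * sum(1 for a, b in zip(machines, machines[1:]) if a != b)

        def genetic_search(pop_size=30, gens=200):
            pop = [random_plan() for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=cost)
                survivors = pop[: pop_size // 2]
                children = []
                for _ in range(pop_size - len(survivors)):
                    seq, machines = random.choice(survivors)
                    machines = [m if random.random() > 0.2 else random.choice(list(MACHINE_COST[op]))
                                for op, m in zip(seq, machines)]   # mutate machine choice only,
                    children.append((seq, machines))               # so precedence stays satisfied
                pop = survivors + children
            return min(pop, key=cost)

        if __name__ == "__main__":
            best = genetic_search()
            print("sequence:", best[0], "machines:", best[1], "cost:", cost(best))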

  16. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining

    PubMed Central

    Salehi, Mojtaba

    2010-01-01

    Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously. PMID:21845020

  17. Improving the performance of surgery-based clinical pathways: a simulation-optimization approach.

    PubMed

    Ozcan, Yasar A; Tànfani, Elena; Testi, Angela

    2017-03-01

    This paper aims to improve the performance of clinical processes using clinical pathways (CPs). The specific goal of this research is to develop a decision support tool, based on a simulation-optimization approach, which identifies the proper adjustment and alignment of resources to achieve better performance for both the patients and the health-care facility. When multiple perspectives are present in a decision problem, critical issues arise and often require the balancing of goals. In our approach, we assess the level of resources appropriate to meet patients' clinical needs in a timely manner and to avoid worsening of clinical conditions. The simulation-optimization model seeks and evaluates alternative resource configurations aimed at balancing the two main objectives: meeting patient needs and optimal utilization of beds and operating rooms. Using primary data collected at a Department of Surgery of a public hospital located in Genoa, Italy, the simulation-optimization modelling approach was applied to evaluate the thyroid surgical treatment together with the other surgery-based CPs. The low rate of bed utilization and the long elective waiting lists of the specialty under study indicate that the wards were oversized while the operating room capacity was the bottleneck of the system. The model enables hospital managers to determine which objective has to be given priority, as well as the corresponding opportunity costs.

  18. Determination of optimal cutoff value to accurately identify glucose-6-phosphate dehydrogenase-deficient heterozygous female neonates.

    PubMed

    Miao, Jing-Kun; Chen, Qi-Xiong; Bao, Li-Ming; Huang, Yi; Zhang, Juan; Wan, Ke-Xing; Yi, Jing; Wang, Shi-Yi; Zou, Lin; Li, Ting-Yu

    2013-09-23

    Conventional screening tests to assess G6PD deficiency use a low cutoff value of 2.10 U/gHb, which may not be adequate for detecting females with heterozygous deficiency. The aim of the present study was to determine an appropriate cutoff value with increased sensitivity in identifying G6PD-deficient heterozygous females. G6PD activity analysis was performed on 51,747 neonates using a semi-quantitative fluorescent spot test. Neonates suspected of G6PD deficiency were further analyzed using a quantitative enzymatic assay and for common G6PD mutations. The cutoff values of G6PD activity were estimated using the receiver operating characteristic curve. Our results demonstrated that using 2.10 U/g Hb as a cutoff, the sensitivity of the assay to detect female neonates with G6PD heterozygous deficiency was 83.3%, as compared with 97.6% using 2.55 U/g Hb as a cutoff. The higher cutoff identified 21% (8/38) of the female neonates with partial G6PD deficiency which were not detected with 2.10 U/g Hb. Our study found that higher cutoffs, 2.35 and 2.55 U/g Hb, would increase the assay's sensitivity to identify male and female G6PD-deficient neonates, respectively. We established a reliable cutoff value of G6PD activity with increased sensitivity in identifying female newborns with partial G6PD deficiency. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Evaluation of different approaches for identifying optimal sites to predict mean hillslope soil moisture content

    NASA Astrophysics Data System (ADS)

    Liao, Kaihua; Zhou, Zhiwen; Lai, Xiaoming; Zhu, Qing; Feng, Huihui

    2017-04-01

    The identification of representative soil moisture sampling sites is important for the validation of remotely sensed mean soil moisture in a certain area and ground-based soil moisture measurements in catchment or hillslope hydrological studies. Numerous approaches have been developed to identify optimal sites for predicting mean soil moisture. Each method has certain advantages and disadvantages, but they have rarely been evaluated and compared. In our study, surface (0-20 cm) soil moisture data from January 2013 to March 2016 (a total of 43 sampling days) were collected at 77 sampling sites on a mixed land-use (tea and bamboo) hillslope in the hilly area of Taihu Lake Basin, China. A total of 10 methods (temporal stability (TS) analyses based on 2 indices, K-means clustering based on 6 kinds of inputs and 2 random sampling strategies) were evaluated for determining optimal sampling sites for mean soil moisture estimation. They were TS analyses based on the smallest index of temporal stability (ITS, a combination of the mean relative difference and standard deviation of relative difference (SDRD)) and based on the smallest SDRD; K-means clustering based on soil properties and terrain indices (EFs), repeated soil moisture measurements (Theta), EFs plus one-time soil moisture data (EFsTheta), and the principal components derived from EFs (EFs-PCA), Theta (Theta-PCA), and EFsTheta (EFsTheta-PCA); and global and stratified random sampling strategies. Results showed that the TS analysis based on the smallest ITS was better (RMSE = 0.023 m³ m⁻³) than that based on the smallest SDRD (RMSE = 0.034 m³ m⁻³). The K-means clustering based on EFsTheta (-PCA) was better (RMSE <0.020 m³ m⁻³) than those based on EFs (-PCA) and Theta (-PCA). The sampling design stratified by land use was more efficient than the global random method. Forty and 60 sampling sites are needed for stratified sampling and global sampling respectively to make their performances comparable to the best K
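
    The temporal-stability ranking mentioned above can be sketched as follows: for each site, the mean relative difference (MRD) and its standard deviation (SDRD) are computed over repeated surveys, and sites are ranked by a combined index. The combination used here (the root of MRD squared plus SDRD squared) is one common choice and may differ from the exact ITS of the paper; the moisture values are synthetic.

        # Hedged sketch of temporal-stability ranking: MRD and SDRD per site from repeated
        # surveys, combined into a single index. Data are synthetic, not the study's.
        import math

        # rows = sampling days, columns = sites; volumetric soil moisture (m3 m-3), synthetic
        theta = [[0.28, 0.31, 0.22, 0.35],
                 [0.24, 0.27, 0.19, 0.30],
                 [0.33, 0.36, 0.27, 0.40]]

        n_days, n_sites = len(theta), len(theta[0])
        field_mean = [sum(row) / n_sites for row in theta]

        def site_stats(j):
            rel = [(theta[t][j] - field_mean[t]) / field_mean[t] for t in range(n_days)]
            mrd = sum(rel) / n_days
            sdrd = math.sqrt(sum((r - mrd) ** 2 for r in rel) / (n_days - 1))
            return mrd, sdrd, math.sqrt(mrd ** 2 + sdrd ** 2)   # combined index (assumed form)

        ranked = sorted(range(n_sites), key=lambda j: site_stats(j)[2])
        for j in ranked:
            mrd, sdrd, its = site_stats(j)
            print("site %d: MRD=%+.3f SDRD=%.3f index=%.3f" % (j, mrd, sdrd, its))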

  20. Singularities in Optimal Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1992-01-01

    Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.

  1. Singularities in optimal structural design

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1992-01-01

    Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.

  2. Body mass index cut-points to identify cardiometabolic risk in black South Africans.

    PubMed

    Kruger, H Salome; Schutte, Aletta E; Walsh, Corinna M; Kruger, Annamarie; Rennie, Kirsten L

    2017-02-01

    To determine optimal body mass index (BMI) cut-points for the identification of cardiometabolic risk in black South African adults. We performed a cross-sectional study of a weighted sample of healthy black South Africans aged 25-65 years (721 men, 1386 women) from the North West and Free State Provinces. Demographic, lifestyle and anthropometric measures were taken, and blood pressure, fasting serum triglycerides, high-density lipoprotein (HDL) cholesterol and blood glucose were measured. We defined elevated cardiometabolic risk as having three or more risk factors according to international metabolic syndrome criteria. Receiver operating characteristic curves were applied to identify an optimal BMI cut-point for men and women. BMI had good diagnostic performance to identify clustering of three or more risk factors, as well as individual risk factors: low HDL-cholesterol, elevated fasting glucose and triglycerides, with areas under the curve >0.6, but not for high blood pressure. Optimal BMI cut-points averaged 22 kg/m² for men and 28 kg/m² for women, respectively, with better sensitivity in men (44.0-71.9%) and in women (60.6-69.8%) compared to a BMI of 30 kg/m² (17-19.1% and 53-61.4%, respectively). Men and women with a BMI >22 and >28 kg/m², respectively, had significantly increased probability of elevated cardiometabolic risk after adjustment for age, alcohol use and smoking. In black South African men, a BMI cut-point of 22 kg/m² identifies those at cardiometabolic risk, whereas a BMI of 30 kg/m² underestimates risk. In women, a cut-point of 28 kg/m², approaching the WHO obesity cut-point, identifies those at risk.

  3. A review of the peri-operative management of paediatric burns: Identifying adverse events.

    PubMed

    Rode, H; Brink, C; Bester, K; Coleman, M P; Baisey, T; Martinez, R

    2016-11-02

    Burn injuries are common in poverty-stricken countries. The majority of patients with large and complex burns are referred to burn centres. Of the children who qualify for admission, according to burn admission criteria, about half require some kind of surgical procedure to obtain skin cover. These range from massive full-thickness fire burns to skin grafts for small, residual unhealed wounds. Burn anaesthetic procedures are among the most difficult to perform and are known for high complication rates. Reasons include peri-operative sepsis, bleeding, issues around thermoregulation, the hypermetabolic state, nutritional and electrolyte issues, inhalation injuries and the amount of movement during procedures to wash patients, change drapes and access different anatomical sites. The appropriate execution of surgery is therefore of the utmost importance for both minor and major procedures. To review the peri-operative management and standard of surgical care of burnt children. This was a retrospective review and analysis of standard peri-operative care of burnt children at Red Cross War Memorial Children's Hospital, Cape Town, South Africa. A total of 558 children were operated on and supervised by the first author. Factors that could adversely affect surgical and anaesthetic outcomes were identified. There were 257 males and 301 females in this study, with an average age of 50.1 months and average weight of 19.5 kg. The total body surface area involved was 1 - 80%, with an average of 23.5%. Inhalational injury was present in 11.3%, pneumonia in 13.1%, wound sepsis in 20.8%, and septicaemia in 9.7%, and organ dysfunction in more than one organ was seen in 6.1%. The average theatre temperature during surgery was 30.0°C. Core temperatures recorded at the start, halfway through and at completion of surgery were 36.9°C, 36.8°C and 36.5°C, respectively. The average preoperative and postoperative haemoglobin levels were 11.28 g/dL and 9.64 g/dL, respectively. Blood loss was

  4. Identifying Methane Sources with an Airborne Pulsed IPDA Lidar System Operating near 1.65 µm

    NASA Astrophysics Data System (ADS)

    Yerasi, A.; Bartholomew, J.; Tandy, W., Jr.; Emery, W. J.

    2016-12-01

    Methane is a powerful greenhouse gas that is predicted to play an important role in future global climate trends. It would therefore be beneficial to locate areas that produce methane in significant amounts so that these trends can be better understood. In this investigation, some initial performance test results of a lidar system called the Advanced Leak Detector Lidar - Natural Gas (ALDL-NG) are discussed. The feasibility of applying its fundamental principle of operation to methane source identification is also explored. The ALDL-NG was originally created by the Ball Aerospace & Technologies Corp. to reveal leaks emanating from pipelines that transport natural gas, which is primarily composed of methane. It operates in a pulsed integrated path differential absorption (IPDA) configuration and it is carried by a piloted, single-engine aircraft. In order to detect the presence of natural gas leaks, the laser wavelengths of its online and offline channels operate in the 1.65 µm region. The functionality of the ALDL-NG was tested during a recent field campaign in Colorado. It was determined that the ambient concentration of methane in the troposphere (~1.8 ppm) could indeed be retrieved from ALDL-NG data with a lower-than-expected uncertainty (~0.2 ppm). Furthermore, when the ALDL-NG scanned over areas that were presumed to be methane sources (feedlots, landfills, etc.), significantly higher concentrations of methane were retrieved. These results are intriguing because the ALDL-NG was not specifically designed to observe anything beyond natural gas pipelines. Nevertheless, they strongly indicate that utilizing an airborne pulsed IPDA lidar system operating near 1.65 µm may very well be a viable technique for identifying methane sources. Perhaps future lidar systems could build upon the heritage of the ALDL-NG and measure methane concentration with even better precision for a variety of scientific applications.
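
    A hedged sketch of the pulsed IPDA retrieval principle referred to above: the column-averaged methane number density follows from the logarithm of the energy-normalized ratio of offline to online ground returns over the two-way path. The pulse energies, range and differential cross-section below are illustrative placeholders, not ALDL-NG instrument values.

        # Hedged IPDA sketch; all numerical values are illustrative assumptions.
        import math

        E_on_tx, E_off_tx = 1.00, 1.00        # transmitted pulse energies (relative units)
        P_on_rx, P_off_rx = 0.87, 0.95        # received ground-return energies (relative units)
        range_m = 500.0                        # aircraft-to-ground range (one way), m
        delta_sigma = 2.0e-24                  # differential absorption cross-section, m^2 (assumed)

        # differential absorption optical depth, normalised by transmitted energies
        daod = math.log((P_off_rx / E_off_tx) / (P_on_rx / E_on_tx))
        column_density = daod / (2.0 * delta_sigma * range_m)    # molecules per m^3, path-averaged
        air_density = 2.5e25                                     # molecules per m^3 near the surface
        print("path-averaged CH4 mixing ratio: %.2f ppm" % (1e6 * column_density / air_density))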

  5. Estimating irrigation water demand using an improved method and optimizing reservoir operation for water supply and hydropower generation: a case study of the Xinfengjiang reservoir in southern China

    USGS Publications Warehouse

    Wu, Yiping; Chen, Ji

    2013-01-01

    The ever-increasing demand for water due to growth of population and socioeconomic development in the past several decades has posed a worldwide threat to water supply security and to the environmental health of rivers. This study aims to derive reservoir operating rules through establishing a multi-objective optimization model for the Xinfengjiang (XFJ) reservoir in the East River Basin in southern China to minimize water supply deficit and maximize hydropower generation. Additionally, to enhance the estimation of irrigation water demand from the downstream agricultural area of the XFJ reservoir, a conventional method for calculating crop water demand is improved using hydrological model simulation results. Although the optimal reservoir operating rules are derived for the XFJ reservoir with three priority scenarios (water supply only, hydropower generation only, and equal priority), the river environmental health is set as the basic demand no matter which scenario is adopted. The results show that the new rules derived under the three scenarios can improve the reservoir operation for both water supply and hydropower generation when compared to the historical performance. Moreover, these alternative reservoir operating policies provide the flexibility for the reservoir authority to choose the most appropriate one. Although changing the current operating rules may influence its hydropower-oriented functions, the new rules can be significant for coping with the increasingly prominent water shortage and degradation in the aquatic environment. Overall, our results and methods (improved estimation of irrigation water demand and formulation of the reservoir optimization model) can be useful for local watershed managers and valuable for other researchers worldwide.

  6. Distribution path robust optimization of electric vehicle with multiple distribution centers

    PubMed Central

    Hao, Wei; He, Ruichun; Jia, Xiaoyan; Pan, Fuquan; Fan, Jing; Xiong, Ruiqi

    2018-01-01

    To identify electric vehicle (EV) distribution paths with high robustness, insensitivity to uncertainty factors, and detailed road-by-road schemes, optimization of the EV distribution path problem with multiple distribution centers, considering charging facilities, is necessary. With the minimum transport time as the goal, a robust optimization model of the EV distribution path with adjustable robustness is established based on Bertsimas' theory of robust discrete optimization. An enhanced three-segment genetic algorithm is also developed to solve the model, such that the optimal distribution scheme initially contains all road-by-road path data using the three-segment mixed coding and decoding method. During genetic manipulation, different interlacing and mutation operations are carried out on different chromosomes, while, during population evolution, infeasible solutions are naturally avoided. A part of the road network of Xifeng District in Qingyang City is taken as an example to test the model and the algorithm in this study, and the concrete transportation paths are utilized in the final distribution scheme. Therefore, more robust EV distribution paths with multiple distribution centers can be obtained using the robust optimization model. PMID:29518169

  7. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics with system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.

  8. Routing performance analysis and optimization within a massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  9. Optimal sensor placement for spatial lattice structure based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian

    2008-10-01

    Optimal sensor placement plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal testing, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) are taken as fitness functions, so that three placement designs are produced. A decimal two-dimensional array coding method, instead of the binary coding method, is proposed to code the solution. A forced mutation operator is introduced when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plain truss model has been implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those obtained by the existing genetic algorithm using the binary coding method. Furthermore, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan-expanded mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantages of the different fitness functions. The results show that the innovations in the genetic algorithm proposed in this paper enlarge the gene storage and improve the convergence of the algorithm. More importantly, all three optimal sensor placement methods provide reliable results and identify the vibration characteristics of the 12-bay plain truss model accurately.
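
    The MAC-based fitness mentioned above can be sketched as follows: a candidate sensor set is scored by the largest off-diagonal entry of the modal assurance criterion matrix computed from the retained rows of the mode-shape matrix (smaller is better, since the target modes remain distinguishable). The mode shapes below are random stand-in data rather than the 12-bay truss model, and a greedy loop replaces the genetic search for brevity.

        # Hedged sketch: MAC-based scoring of a sensor set, with a greedy placement loop
        # standing in for the genetic search. Mode shapes are synthetic, not the truss model.
        import numpy as np

        rng = np.random.default_rng(0)
        phi = rng.normal(size=(30, 4))          # 30 candidate DOFs x 4 target mode shapes (synthetic)

        def mac_fitness(sensor_dofs):
            p = phi[list(sensor_dofs), :]                        # mode shapes restricted to chosen sensors
            g = p.T @ p
            mac = (g ** 2) / np.outer(np.diag(g), np.diag(g))    # MAC_ij = (phi_i.phi_j)^2 / (|phi_i|^2 |phi_j|^2)
            off = mac - np.eye(mac.shape[0])
            return off.max()                                     # worst-case modal confusion

        chosen = [0, 1]                          # seed with two arbitrary DOFs
        for _ in range(6):                       # grow the set one DOF at a time
            best = min((d for d in range(phi.shape[0]) if d not in chosen),
                       key=lambda d: mac_fitness(chosen + [d]))
            chosen.append(best)
        print("sensor DOFs:", chosen, "max off-diagonal MAC: %.3f" % mac_fitness(chosen))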

  10. Honing process optimization algorithms

    NASA Astrophysics Data System (ADS)

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.

  11. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper develops a proposal of a general optimization model for the flower industry, defined using discrete simulation and nonlinear optimization; the mathematical models are solved using ProModel simulation tools and GAMS optimization. It defines the operations that constitute the production and marketing of the sector, presents statistically validated data taken directly from each operation through field work, and formulates the discrete simulation model of the operations and the optimization model of the entire industry chain. The model is solved with the tools described above, and the results are validated in a case study.

  12. Investigation, development and application of optimal output feedback theory. Volume 2: Development of an optimal, limited state feedback outer-loop digital flight control system for 3-D terminal area operation

    NASA Technical Reports Server (NTRS)

    Broussard, J. R.; Halyo, N.

    1984-01-01

    This report contains the development of a digital outer-loop three dimensional radio navigation (3-D RNAV) flight control system for a small commercial jet transport. The outer-loop control system is designed using optimal stochastic limited state feedback techniques. Options investigated using the optimal limited state feedback approach include integrated versus hierarchical control loop designs, 20 samples per second versus 5 samples per second outer-loop operation and alternative Type 1 integration command errors. Command generator tracking techniques used in the digital control design enable the jet transport to automatically track arbitrary curved flight paths generated by waypoints. The performance of the design is demonstrated using detailed nonlinear aircraft simulations in the terminal area, frequency domain multi-input sigma plots, frequency domain single-input Bode plots and closed-loop poles. The response of the system to a severe wind shear during a landing approach is also presented.

  13. Optimizing conceptual aircraft designs for minimum life cycle cost

    NASA Technical Reports Server (NTRS)

    Johnson, Vicki S.

    1989-01-01

    A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long range aircraft has demonstrated that the system works well. Results from the study show that the choice of optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than when optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. Also, results show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost when considering the number of engines.

  14. Optimal planning and design of a renewable energy based supply system for microgrids

    DOE PAGES

    Hafez, Omar; Bhattacharya, Kankar

    2012-03-03

    This paper presents a technique for optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, the electrical performance of the distribution circuit is evaluated when the DERs selected by the DER-CAM optimization analyses are incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DERs within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.

  15. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Keeping in view the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different natures of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered when formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article relating to multi-objective optimization, it should open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks. PMID:26205271

  16. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    PubMed

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Keeping in view the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different natures of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered when formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article relating to multi-objective optimization, it should open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks.

  17. Optimal integration strategies for a syngas fuelled SOFC and gas turbine hybrid

    NASA Astrophysics Data System (ADS)

    Zhao, Yingru; Sadhukhan, Jhuma; Lanzini, Andrea; Brandon, Nigel; Shah, Nilay

    This article aims to develop a thermodynamic modelling and optimization framework for a thorough understanding of the optimal integration of fuel cell, gas turbine and other components in an ambient pressure SOFC-GT hybrid power plant. This method is based on the coupling of a syngas-fed SOFC model and an associated irreversible GT model, with an optimization algorithm developed using MATLAB to efficiently explore the range of possible operating conditions. Energy and entropy balance analysis has been carried out for the entire system to observe the irreversibility distribution within the plant and the contribution of different components. Based on the methodology developed, a comprehensive parametric analysis has been performed to explore the optimum system behavior, and predict the sensitivity of system performance to the variations in major design and operating parameters. The current density, operating temperature, fuel utilization and temperature gradient of the fuel cell, as well as the isentropic efficiencies and temperature ratio of the gas turbine cycle, together with three parameters related to the heat transfer between subsystems, are all set as controllable variables. Other factors affecting the hybrid efficiency have been further simulated and analysed. The model developed is able to predict the performance characteristics of a wide range of hybrid systems with power densities potentially ranging from 2000 to 2500 W m⁻² and efficiencies varying between 50% and 60%. The analysis enables us to identify the system design tradeoffs, and therefore to determine better integration strategies for advanced SOFC-GT systems.

  18. Identifying excessive vehicle idling and opportunities for off-road fuel tax credits for stationary operations in the Caltrans fleet, phase 1

    DOT National Transportation Integrated Search

    2011-01-01

    This report documents the research project Identifying Excessive Vehicle Idling and Opportunities for Off-Road Fuel Tax Credits for Stationary Operations in the Caltrans Fleet - Phase 1, performed in response to a California Department of Tra...

  19. Queue and stack sorting algorithm optimization and performance analysis

    NASA Astrophysics Data System (ADS)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    Sorting algorithms are among the basic operations in a wide variety of software development, and data structures courses cover many kinds of sorting algorithms. The performance of a sorting algorithm is directly related to the efficiency of the software. Much excellent research continues to optimize sorting algorithms so that they are as efficient as possible. The authors further study sorting algorithms that combine a queue with stacks; the algorithm mainly relies on alternating queue and stack storage operations, thus avoiding the large number of exchange or move operations required in traditional sorts. Building on existing work, the improvement and optimization focus on time complexity, and the time complexity, space complexity and stability of the algorithm are studied accordingly. The experimental results show that the improvement is effective and that the improved and optimized algorithm has better practicality.
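
    As a hedged illustration of sorting with only queue and stack push/pop operations instead of index-based swaps, the snippet below shows a classic auxiliary-structure insertion sort built from a queue and a stack; it is not necessarily the exact scheme proposed in the paper.

        # Hedged illustration: insertion-style sort using only queue/stack operations,
        # not necessarily the paper's exact queue-and-stack scheme.
        from collections import deque

        def queue_stack_sort(values):
            queue = deque(values)      # incoming, unsorted elements
            stack = []                 # kept sorted, smallest at the bottom
            while queue:
                item = queue.popleft()
                # send larger elements back to the queue's tail so 'item' can be placed in order
                while stack and stack[-1] > item:
                    queue.append(stack.pop())
                stack.append(item)
            return stack

        print(queue_stack_sort([5, 1, 4, 2, 8, 3]))   # [1, 2, 3, 4, 5, 8]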

  20. Identifying factors for optimal development of health-related websites: a delphi study among experts and potential future users.

    PubMed

    Schneider, Francine; van Osch, Liesbeth; de Vries, Hein

    2012-02-14

    The Internet has become a popular medium for offering tailored and targeted health promotion programs to the general public. However, suboptimal levels of program use in the target population limit the public health impact of these programs. Optimizing program development is considered one of the main processes to increase usage rates. To distinguish factors potentially related to optimal development of health-related websites by involving both experts and potential users. By considering and incorporating the opinions of experts and potential users in the development process, involvement in the program is expected to increase, consequently resulting in increased appreciation, lower levels of attrition, and higher levels of sustained use. We conducted a systematic three-round Delphi study through the Internet. Both national and international experts (from the fields of health promotion, health psychology, e-communication, and technical Web design) and potential users were invited via email to participate. During this study an extensive list of factors potentially related to optimal development of health-related websites was identified, by focusing on factors related to layout, general and risk information provision, questionnaire use, additional services, and ease of use. Furthermore, we assessed the extent to which experts and potential users agreed on the importance of these factors. Differences as well as similarities among experts and potential users were deduced. In total, 20 of 62 contacted experts participated in the first round (32% response rate); 60 of 200 contacted experts (30% response rate) and 210 potential users (95% response rate) completed the second-round questionnaire, and 32 of 60 contacted experts completed the third round (53% response rate). Results revealed important factors agreed upon by experts and potential users (eg, ease of use, clear structure, and detailed health information provision), as well as differences regarding

  1. Infusion of innovative technologies for mission operations

    NASA Astrophysics Data System (ADS)

    Donati, Alessandro

    2010-11-01

    The Advanced Mission Concepts and Technologies Office (Mission Technologies Office, MTO for short) at the European Space Operations Centre (ESOC) of ESA is entrusted with research and development of innovative mission operations concepts and systems and provides operations support to special projects. Visions of future missions and requests for improvements from currently flying missions are the two major sources of inspiration to conceptualize innovative or improved mission operations processes. They include monitoring and diagnostics, planning and scheduling, resource management and optimization. The newly identified operations concepts are then proved by means of prototypes, built with embedded, enabling technology and deployed as shadow applications in mission operations for an extended validation phase. The technology so far exploited includes branches of informatics, artificial intelligence and operational research. Recent outstanding results include artificial intelligence planning and scheduling applications for Mars Express, an advanced integrated space weather monitoring system for the Integral space telescope and a suite of growing client applications for MUST (Mission Utilities Support Tools). The research, development and validation activities at the Mission Technologies Office are performed together with a network of research institutes across Europe. The objective is narrowing the gap between enabling and innovative technology and space mission operations. The paper first addresses samples of technology infusion cases with their lessons learnt. The second part is focused on the process and the methodology used at the Mission Technologies Office to fulfill its objectives.

  2. Optimal Cut-Off Points on the Health Anxiety Inventory, Illness Attitude Scales and Whiteley Index to Identify Severe Health Anxiety

    PubMed Central

    Hedman, Erik; Lekander, Mats; Ljótsson, Brjánn; Lindefors, Nils; Rück, Christian; Andersson, Gerhard; Andersson, Erik

    2015-01-01

    Background Health anxiety can be viewed as a dimensional phenomenon where severe health anxiety in the form of DSM-IV hypochondriasis represents a cut-off where the health anxiety becomes clinically significant. Three of the most reliable and widely used self-report measures of health anxiety are the Health Anxiety Inventory (HAI), the Illness Attitude Scales (IAS) and the Whiteley Index (WI). Identifying the optimal cut-offs for classifying the presence of a diagnosis of severe health anxiety on these measures has several advantages in clinical and research settings. The aim of this study was therefore to investigate the HAI, IAS and WI as proximal diagnostic instruments for severe health anxiety defined as DSM-IV hypochondriasis. Methods We investigated sensitivity, specificity and predictive value on the HAI, IAS and WI using a total of 347 adult participants of whom 158 had a diagnosis of severe health anxiety, 97 had obsessive-compulsive disorder and 92 were healthy non-clinical controls. Diagnostic assessments were conducted using the Anxiety Disorder Interview Schedule. Results Optimal cut-offs for identifying a diagnosis of severe health anxiety were 67 on the HAI, 47 on the IAS, and 5 on the WI. Sensitivity and specificity were high, ranging from 92.6 to 99.4%. Positive and negative predictive values ranged from 91.6 to 99.4% using unadjusted prevalence rates. Conclusions The HAI, IAS and WI have very good properties as diagnostic indicators of severe health anxiety and can be used as cost-efficient proximal estimates of the diagnosis. PMID:25849477

  3. Evaluating and optimizing the operation of the hydropower system in the Upper Yellow River: A general LINGO-based integrated framework

    PubMed Central

    Si, Yuan; Liu, Ronghua; Wei, Jiahua; Huang, Yuefei; Li, Tiejian; Liu, Jiahong; Gu, Shenglong; Wang, Guangqian

    2018-01-01

    The hydropower system in the Upper Yellow River (UYR), one of the largest hydropower bases in China, plays a vital role in the energy structure of the Qinghai Power Grid. Due to management difficulties, there is still considerable room for improvement in the joint operation of this system. This paper presents a general LINGO-based integrated framework to study the operation of the UYR hydropower system. The framework is easy to use for operators with little experience in mathematical modeling, takes full advantage of LINGO’s capabilities (such as its solving capacity and multi-threading ability), and packs its three layers (the user layer, the coordination layer, and the base layer) together into an integrated solution that is robust and efficient and represents an effective tool for data/scenario management and analysis. The framework is general and can be easily transferred to other hydropower systems with minimal effort, and it can be extended as the base layer is enriched. The multi-objective model that represents the trade-off between power quantity (i.e., maximum energy production) and power reliability (i.e., firm output) of hydropower operation has been formulated. With equivalent transformations, the optimization problem can be solved by the nonlinear programming (NLP) solvers embedded in the LINGO software, such as the General Solver, the Multi-start Solver, and the Global Solver. Both simulation and optimization are performed to verify the model’s accuracy and to evaluate the operation of the UYR hydropower system. A total of 13 hydropower plants currently in operation are involved, including two pivotal storage reservoirs on the Yellow River, which are the Longyangxia Reservoir and the Liujiaxia Reservoir. Historical hydrological data from multiple years (2000–2010) are provided as input to the model for analysis. The results are as follows. 1) Assuming that the reservoirs are all in operation (in fact, some reservoirs were not operational or did

  4. Optimization of control gain by operator adjustment

    NASA Technical Reports Server (NTRS)

    Kruse, W.; Rothbauer, G.

    1973-01-01

    An optimal gain was established by measuring errors at 5 discrete control gain settings in an experimental set-up consisting of a 2-dimensional, first-order pursuit tracking task performed by subjects (S's). No significant experience effect on optimum gain setting was found in the first experiment. During the second experiment, in which control gain was continuously adjustable, highly experienced S's tended to reach the previously determined optimum gain quite accurately and quickly. Less experienced S's tended to select a marginally optimum gain either below or above the experimentally determined optimum, depending on the initial control gain setting, although the mean settings of both groups were equal. This quick and simple method is recommended for selecting control gains for different control systems and forcing functions.

  5. Minimal investment risk of a portfolio optimization problem with budget and investment concentration constraints

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-02-01

    In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach failed to identify accurately the minimal investment risk of the portfolio optimization problem.
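
    In the replica-analysis literature this problem is usually posed as minimizing a quadratic investment risk under the two constraints named above; a sketch of that standard setup (the paper's normalization may differ):

        H(\vec{w})=\frac{1}{2N}\sum_{\mu=1}^{p}\Bigl(\sum_{i=1}^{N} w_{i}\,x_{i\mu}\Bigr)^{2},
        \qquad
        \sum_{i=1}^{N} w_{i}=N \;(\text{budget}),
        \qquad
        \sum_{i=1}^{N} w_{i}^{2}=\tau N \;(\text{investment concentration}),

    with the minimal investment risk per asset obtained as \min_{\vec{w}} H/N in the limit N\to\infty at fixed \alpha=p/N.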

  6. Chaos minimization in DC-DC boost converter using circuit parameter optimization

    NASA Astrophysics Data System (ADS)

    Sudhakar, N.; Natarajan, Rajasekar; Gourav, Kumar; Padmavathi, P.

    2017-11-01

    DC-DC converters are prone to several types of nonlinear phenomena, including bifurcation, quasi-periodicity, intermittency and chaos. These undesirable effects must be controlled to keep the converter in periodic operation and ensure stability. In this paper an effective solution for the control of chaos in a solar-fed DC-DC boost converter is proposed. Control of chaos is achieved using optimal circuit parameters obtained through the Bacterial Foraging Optimization Algorithm. The optimization renders suitable parameters in minimal computational time. The obtained results are compared with the operation of a traditional boost converter. Further, the results obtained with the BFA-optimized parameters ensure that the converter operates within the controllable region. To elaborate the study, bifurcation analyses with optimized and unoptimized parameters are also presented.

  7. Principled negotiation and distributed optimization for advanced air traffic management

    NASA Astrophysics Data System (ADS)

    Wangermann, John Paul

    Today's aircraft/airspace system faces complex challenges. Congestion and delays are widespread as air traffic continues to grow. Airlines want to better optimize their operations, and general aviation wants easier access to the system. Additionally, the accident rate must decline just to keep the number of accidents each year constant. New technology provides an opportunity to rethink the air traffic management process. Faster computers, new sensors, and high-bandwidth communications can be used to create new operating models. The choice is no longer between "inflexible" strategic separation assurance and "flexible" tactical conflict resolution. With suitable operating procedures, it is possible to have strategic, four-dimensional separation assurance that is flexible and allows system users maximum freedom to optimize operations. This thesis describes an operating model based on principled negotiation between agents. Many multi-agent systems have agents that have different, competing interests but have a shared interest in coordinating their actions. Principled negotiation is a method of finding agreement between agents with different interests. By focusing on fundamental interests and searching for options for mutual gain, agents with different interests reach agreements that provide benefits for both sides. Using principled negotiation, distributed optimization by each agent can be coordinated leading to iterative optimization of the system. Principled negotiation is well-suited to aircraft/airspace systems. It allows aircraft and operators to propose changes to air traffic control. Air traffic managers check the proposal maintains required aircraft separation. If it does, the proposal is either accepted or passed to agents whose trajectories change as part of the proposal for approval. Aircraft and operators can use all the data at hand to develop proposals that optimize their operations, while traffic managers can focus on their primary duty of ensuring

  8. Services Acquisition in the Department of Defense: Analysis of Operational and Performance Data to Identify Drivers of Success

    DTIC Science & Technology

    2015-03-24

    improving the disclosure of CPARS program office audit results (Black et al., 2014, pp. 44–49). Recommendations: Based on our conclusions, we identified ... Fitzsimmons, J. A., & Fitzsimmons, M. J. (2006). Service management: Operations, strategy, and information technology (5th ed.). New York, NY: McGraw-Hill.

  9. The optimization of total laboratory automation by simulation of a pull-strategy.

    PubMed

    Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo

    2015-01-01

    Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. Firstly, value stream mapping (VSM) is used to identify the non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised and a pull mechanism that comprises a constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to optimize the design parameters and to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to a reduction in patient waiting times and increased service level.

  10. The GOES-R/JPSS Approach for Identifying Hazardous Low Clouds: Overview and Operational Impacts

    NASA Astrophysics Data System (ADS)

    Calvert, Corey; Pavolonis, Michael; Lindstrom, Scott; Gravelle, Chad; Terborg, Amanda

    2017-04-01

    Low ceiling and visibility is a weather hazard that nearly every forecaster, in nearly every National Weather Service (NWS) Weather Forecast Office (WFO), must regularly address. In addition, national forecast centers such as the Aviation Weather Center (AWC), Alaska Aviation Weather Unit (AAWU) and the Ocean Prediction Center (OPC) are responsible for issuing low ceiling and visibility related products. As such, reliable methods for detecting and characterizing hazardous low clouds are needed. Traditionally, hazardous areas of Fog/Low Stratus (FLS) are identified using a simple stand-alone satellite product that is constructed by subtracting the 3.9 and 11 μm brightness temperatures. However, the 3.9-11 μm brightness temperature difference (BTD) has several major limitations. In an effort to address the limitations of the BTD product, the GOES-R Algorithm Working Group (AWG) developed an approach that fuses satellite, Numerical Weather Prediction (NWP) model, Sea Surface Temperature (SST) analyses, and other data sets (e.g. digital surface elevation maps, surface emissivity maps, and surface type maps) to determine the probability that hazardous low clouds are present using a naïve Bayesian classifier. In addition, recent research has focused on blending geostationary (e.g. GOES-R) and low earth orbit (e.g. JPSS) satellite data to further improve the products. The FLS algorithm has adopted an enterprise approach in that it can utilize satellite data from a variety of current and future operational sensors and NWP data from a variety of models. The FLS products are available in AWIPS/N-AWIPS/AWIPS-II and have been evaluated within NWS operations over the last four years as part of the Satellite Proving Ground. Forecaster feedback has been predominantly positive and references to these products within Area Forecast Discussions (AFD's) indicate that the products are influencing operational forecasts. At the request of the NWS, the FLS products are currently being
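
    The naive Bayesian classifier mentioned above rests on treating the fused satellite, NWP, and SST predictors f_1, ..., f_n as conditionally independent given the class, so the posterior probability of hazardous low cloud factorizes into one-dimensional terms (generic form, not the operational GOES-R implementation):

        P(\mathrm{FLS}\mid f_{1},\dots,f_{n})=
        \frac{P(\mathrm{FLS})\prod_{i=1}^{n}P(f_{i}\mid \mathrm{FLS})}
             {P(\mathrm{FLS})\prod_{i=1}^{n}P(f_{i}\mid \mathrm{FLS})
              +P(\overline{\mathrm{FLS}})\prod_{i=1}^{n}P(f_{i}\mid \overline{\mathrm{FLS}})} .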

  11. Reexamination of optimal quantum state estimation of pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-09-15

    A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVM's) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition for POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.
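
    For reference, the optimal mean fidelity discussed above has the well-known closed form

        \bar{F}_{\mathrm{opt}}=\frac{N+1}{N+d}

    for N copies of an unknown pure state in d dimensions (the result attributed in the abstract to Hayashi and to Bruss and Macchiavello).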

  12. Optimal Cut-Offs of Homeostasis Model Assessment of Insulin Resistance (HOMA-IR) to Identify Dysglycemia and Type 2 Diabetes Mellitus: A 15-Year Prospective Study in Chinese

    PubMed Central

    Lee, C. H.; Shih, A. Z. L.; Woo, Y. C.; Fong, C. H. Y.; Leung, O. Y.; Janus, E.; Cheung, B. M. Y.; Lam, K. S. L.

    2016-01-01

    Background The optimal reference range of homeostasis model assessment of insulin resistance (HOMA-IR) in normal Chinese population has not been clearly defined. Here we address this issue using the Hong Kong Cardiovascular Risk Factor Prevalence Study (CRISPS), a prospective population-based cohort study with long-term follow-up. Material & Methods In this study, normal glucose tolerance (NGT), impaired fasting glucose (IFG), impaired glucose tolerance (IGT) and type 2 diabetes mellitus (T2DM) were defined according to the 1998 World Health Organization criteria. Dysglycemia referred to IFG, IGT or T2DM. This study comprised two parts. Part one was a cross-sectional study involving 2,649 Hong Kong Chinese subjects, aged 25–74 years, at baseline CRISPS-1 (1995–1996). The optimal HOMA-IR cut-offs for dysglycemia and T2DM were determined by the receiver-operating characteristic (ROC) curve. Part two was a prospective study involving 872 subjects who had persistent NGT at CRISPS-4 (2010–2012) after 15 years of follow-up. Results At baseline, the optimal HOMA-IR cut-offs to identify dysglycemia and T2DM were 1.37 (AUC = 0.735; 95% confidence interval [CI] = 0.713–0.758; Sensitivity [Se] = 65.6%, Specificity [Sp] = 71.3%) and 1.97 (AUC = 0.807; 95% CI = 0.777–0.886; Se = 65.5%, Sp = 82.9%) respectively. These cut-offs, derived from the cross-sectional study at baseline, corresponded closely to the 75th (1.44) and 90th (2.03) percentiles, respectively, of the HOMA-IR reference range derived from the prospective study of subjects with persistent NGT. Conclusions HOMA-IR cut-offs of 1.4 and 2.0, which discriminated dysglycemia and T2DM respectively from NGT in Southern Chinese, can be usefully employed as references in clinical research involving the assessment of insulin resistance. PMID:27658115
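
    Two standard definitions make the numbers above concrete: the HOMA-IR index itself and a common criterion for choosing an "optimal" ROC cut-off (the Youden index; the abstract does not state which criterion was used, so this is an assumption):

        \text{HOMA-IR}=\frac{\text{fasting insulin }[\mu\mathrm{U/mL}]\times\text{fasting glucose }[\mathrm{mmol/L}]}{22.5},
        \qquad
        J(c)=\mathrm{Se}(c)+\mathrm{Sp}(c)-1,\qquad c^{*}=\arg\max_{c} J(c).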

  13. An extension of the receiver operating characteristic curve and AUC-optimal classification.

    PubMed

    Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto

    2012-10-01

    While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate to the fixed lower false-positive rate is preferable and thus the partial AUC corresponding to lower false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
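
    The AUC discussed above has a simple rank interpretation: it is the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case. A minimal sketch of that pairwise (Mann-Whitney) estimate in Python, not the boosting procedure proposed in the paper:

        import numpy as np

        def auc(pos_scores, neg_scores):
            """Mann-Whitney estimate of the AUC: P(score_pos > score_neg), with ties counted as 1/2."""
            pos = np.asarray(pos_scores, dtype=float)[:, None]
            neg = np.asarray(neg_scores, dtype=float)[None, :]
            return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

        # Illustrative use: 11 of 12 positive-negative pairs are ranked correctly
        print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1]))  # ~0.917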

  14. Advanced Nuclear Technology. Using Technology for Small Modular Reactor Staff Optimization, Improved Effectiveness, and Cost Containment, 3002007071

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loflin, Leonard

    Through this grant, the U.S. Department of Energy (DOE) will review several functional areas within a nuclear power plant, including fire protection, operations and operations support, refueling, training, procurement, maintenance, site engineering, and others. Several functional areas need to be examined since there appears to be no single staffing area or approach that alone has the potential for significant staff optimization at new nuclear power plants. Several of the functional areas will require a review of technology options such as automation, remote monitoring, fleet-wide monitoring, new and specialized instrumentation, human factors engineering, risk-informed analysis and PRAs, component and system condition monitoring and reporting, just-in-time training, electronic and automated procedures, electronic tools for configuration management and license and design basis information, etc., that may be applied to support optimization. Additionally, the project will require a review of key regulatory issues that affect staffing and could be optimized with additional technology input. Opportunities to further optimize staffing levels and staffing functions by selection of design attributes of physical systems and structures also need to be identified. A goal of this project is to develop a prioritized assessment of the functional areas, and R&D actions needed for those functional areas, to provide the best optimization

  15. LTI system order reduction approach based on asymptotical equivalence and the Co-operation of biology-related algorithms

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.

    2017-02-01

    A novel order-reduction method for linear time-invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving that problem with an efficient optimization algorithm. The proposed model representation allows all parameters of the lower-order model to be identified and, by construction, gives the reduced model the required steady state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results showed that the proposed approach outperforms alternative approaches and that the reduced-order model achieves a high level of accuracy.

  16. Advanced treatment of municipal wastewater by nanofiltration: Operational optimization and membrane fouling analysis.

    PubMed

    Li, Kun; Wang, Jianxing; Liu, Jibao; Wei, Yuansong; Chen, Meixue

    2016-05-01

    Municipal sewage from an oxidation ditch was treated for reuse by nanofiltration (NF) in this study. The NF performance was optimized, and its fouling characteristics after different operational durations (i.e., 48 and 169 hr) were analyzed to investigate the applicability of nanofiltration for water reuse. The optimum performance was achieved when transmembrane pressure = 12 bar, pH = 4 and flow rate = 8 L/min using a GE membrane. The permeate water quality could satisfy the requirements of water reclamation for different uses and local standards for water reuse in Beijing. Flux decline in the fouling experiments could be divided into a rapid flux decline and a quasi-steady state. The boundary flux theory was used to predict the evolution of permeate flux. The expected operational duration based on the 169-hr experiment was 392.6 hr, which is 175% longer than that of the 48-hr one. High molecular weight (MW) protein-like substances were suggested to be the dominant foulants after an extended period, based on the MW distribution and the fluorescence characteristics. The analyses of infrared spectra and extracellular polymeric substances revealed that the contributions of humic- and polysaccharide-like substances to membrane fouling diminished over time, while that of protein-like substances strengthened. Inorganic salts were found to have only a marginal influence on membrane fouling. Additionally, alkali washing was more efficient at removing organic foulants in the long term, and a combination of water flushing and alkali washing was appropriate for NF fouling control in municipal sewage treatment. Copyright © 2015. Published by Elsevier B.V.

  17. Improved high operating temperature MCT MWIR modules

    NASA Astrophysics Data System (ADS)

    Lutz, H.; Breiter, R.; Figgemeier, H.; Schallenberg, T.; Schirmacher, W.; Wollrab, R.

    2014-06-01

    High operating temperature (HOT) IR-detectors are a key factor to size, weight and power (SWaP) reduced IR-systems. Such systems are essential to provide infantrymen with low-weight handheld systems with increased battery lifetimes or most compact clip-on weapon sights in combination with high electro-optical performance offered by cooled IR-technology. AIM's MCT standard n-on-p technology with vacancy doping has been optimized over many years resulting in MWIR-detectors with excellent electro-optical performance up to operating temperatures of ~120K. In the last years the effort has been intensified to improve this standard technology by introducing extrinsic doping with Gold as an acceptor. As a consequence the dark current could considerably be suppressed and allows for operation at ~140K with good e/o performance. More detailed investigations showed that limitation for HOT > 140K is explained by consequences from rising dark current rather than from defective pixel level. Recently, several crucial parameters were identified showing great promise for further optimization of HOT-performance. Among those, p-type concentration could successfully be reduced from the mid-10^16 cm^-3 to the low-10^15 cm^-3 range. Since AIM is one of the leading manufacturers of split linear cryocoolers, an increase in operating temperature will directly lead to IR-modules with improved SWaP characteristics by making use of the miniature members of its SX cooler family with single piston and balancer technology. The paper will present recent progress in the development of HOT MWIR-detector arrays at AIM and show electro-optical performance data in comparison to focal plane arrays produced in the standard technology.

  18. Contrast research of CDMA and GSM network optimization

    NASA Astrophysics Data System (ADS)

    Wu, Yanwen; Liu, Zehong; Zhou, Guangyue

    2004-03-01

    With the development of mobile telecommunication networks, CDMA users have raised their expectations of network service quality, while operators have shifted their network management objective from signal coverage to performance improvement. Reasonable layout and optimization of the mobile telecommunication network, reasonable configuration of network resources, improvement of service quality, and strengthening of the enterprise's core competitiveness have therefore all become concerns of the operator companies. This paper first reviews the workflow of CDMA network optimization. It then discusses some key points in CDMA network optimization, such as PN code assignment and calculation of soft handover. As GSM is a cellular mobile telecommunication system similar to CDMA, the paper also presents a detailed comparative study of CDMA and GSM network optimization, covering both their similarities and their differences. In conclusion, network optimization is a long-term task that runs through the whole process of network construction. By adjusting network hardware (such as BTS equipment, RF systems, etc.) and network software (such as parameter optimization, configuration optimization, capacity optimization, etc.), network optimization can improve the performance and service quality of the network.

  19. Operator-assisted planning and execution of proximity operations subject to operational constraints

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1991-01-01

    Future multi-vehicle operations will involve multiple scenarios that will require a planning tool for the rapid, interactive creation of fuel-efficient trajectories. The planning process must deal with higher-order, non-linear processes involving dynamics that are often counter-intuitive. The optimization of resulting trajectories can be difficult to envision. An interactive proximity operations planning system is being developed to provide the operator with easily interpreted visual feedback of trajectories and constraints. This system is hosted on an IRIS 4D graphics platform and utilizes the Clohessy-Wiltshire equations. An inverse dynamics algorithm is used to remove non-linearities while the trajectory maneuvers are decoupled and separated in a geometric spreadsheet. The operator has direct control of the position and time of trajectory waypoints to achieve the desired end conditions. Graphics provide the operator with visualization of operational constraints such as structural clearance, plume impingement, approach velocity limits, and arrival or departure corridors. Primer vector theory is combined with graphical presentation to improve operator understanding of suggested automated system solutions and to allow the operator to review, edit, or provide corrective action to the trajectory plan.
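
    For context, the Clohessy-Wiltshire (Hill) equations used by the planner describe unforced relative motion in a rotating frame centered on a circular-orbit target, with x radial, y along-track, z cross-track, and n the target's mean motion:

        \ddot{x}-2n\dot{y}-3n^{2}x=0,\qquad
        \ddot{y}+2n\dot{x}=0,\qquad
        \ddot{z}+n^{2}z=0 .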

  20. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use

  1. Operationally Efficient Propulsion System Study (OEPSS) data book. Volume 3: Operations technology

    NASA Technical Reports Server (NTRS)

    Vilja, John O.

    1990-01-01

    The study was initiated to identify operational problems and cost drivers for current propulsion systems and to identify technology and design approaches to increase the operational efficiency and reduce operations costs for future propulsion systems. To provide readily usable data for the Advanced Launch System (ALS) program, the results of the OEPSS study were organized into a series of OEPSS Data Books. This volume describes operations technologies that will enhance operational efficiency of propulsion systems. A total of 15 operations technologies were identified that will eliminate or mitigate operations problems described in Volume 2. A recommended development plan is presented for eight promising technologies that will simplify the propulsion system and reduce operational requirements.

  2. An optimization framework for workplace charging strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yongxi; Zhou, Yan

    2015-03-01

    The workplace charging (WPC) has been recently recognized as the most important secondary charging point next to residential charging for plug-in electric vehicles (PEVs). The current WPC practice is spontaneous and grants every PEV a designated charger, which may not be practical or economic when there are a large number of PEVs present at workplace. This study is the first research undertaken that develops an optimization framework for WPC strategies to satisfy all charging demand while explicitly addressing different eligible levels of charging technology and employees’ demographic distributions. The optimization model is to minimize the lifetime cost of equipment, installations, and operations, and is formulated as an integer program. We demonstrate the applicability of the model using numerical examples based on national average data. The results indicate that the proposed optimization model can reduce the total cost of running a WPC system by up to 70% compared to the current practice. The WPC strategies are sensitive to the time windows and installation costs, and dominated by the PEV population size. The WPC has also been identified as an alternative sustainable transportation program to the public transit subsidy programs for both economic and environmental advantages.
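
    The integer-programming structure described above can be sketched as follows (sets, cost terms, and the single demand constraint are illustrative; the paper's model distinguishes charging levels, time windows, and employee demographics in more detail):

        \min_{n_{l}\in\mathbb{Z}_{\ge 0}}\;
        \sum_{l\in L}\bigl(c^{\mathrm{equip}}_{l}+c^{\mathrm{install}}_{l}\bigr)\,n_{l}
        +\sum_{l\in L} c^{\mathrm{oper}}_{l}\,n_{l}
        \qquad\text{s.t.}\qquad
        \sum_{l\in L} r_{l}\,n_{l}\;\ge\; D,

    where n_l is the number of chargers of eligible level l, r_l their daily charging throughput, and D the total workplace PEV charging demand.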

  3. Water Use Optimization Toolset Project: Development and Demonstration Phase Draft Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasper, John R.; Veselka, Thomas D.; Mahalik, Matthew R.

    2014-05-19

    This report summarizes the results of the development and demonstration phase of the Water Use Optimization Toolset (WUOT) project. It identifies the objective and goals that guided the project, as well as demonstrating potential benefits that could be obtained by applying the WUOT in different geo-hydrologic systems across the United States. A major challenge facing conventional hydropower plants is to operate more efficiently while dealing with an increasingly uncertain water-constrained environment and complex electricity markets. The goal of this 3-year WUOT project, which is funded by the U.S. Department of Energy (DOE), is to improve water management, resulting in more energy, revenues, and grid services from available water, and to enhance environmental benefits from improved hydropower operations and planning while maintaining institutional water delivery requirements. The long-term goal is for the WUOT to be used by environmental analysts and deployed by hydropower schedulers and operators to assist in market, dispatch, and operational decisions.

  4. Optimization of metals and plastics recovery from electric cable wastes using a plate-type electrostatic separator.

    PubMed

    Richard, Gontran; Touhami, Seddik; Zeghloul, Thami; Dascalescu, Lucien

    2017-02-01

    Plate-type electrostatic separators are commonly employed for the selective sorting of conductive and non-conductive granular materials. The aim of this work is to identify the optimal operating conditions of such equipment, when employed for separating copper and plastics from either flexible or rigid electric wire wastes. The experiments are performed according to the response surface methodology, on samples composed of either "calibrated" particles, obtained by manually cutting of electric wires at a predefined length (4mm), or actual machine-grinded scraps, characterized by a relatively-wide size distribution (1-4mm). The results point out the effect of particle size and shape on the effectiveness of the electrostatic separation. Different optimal operating conditions are found for flexible and rigid wires. A separate processing of the two classes of wire wastes is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Predictive Analytics for Coordinated Optimization in Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui

    This talk will present NREL's work on developing predictive analytics that enables the optimal coordination of all the available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented. One focuses on developing short-term state forecasting-based optimal voltage regulation in distribution systems; and the other one focuses on actively engaging electricity consumers to benefit distribution system operations.

  6. Application of Differential Evolutionary Optimization Methodology for Parameter Structure Identification in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Nishikawa, T.

    2013-12-01

    With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment of continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
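
    The DE cycle of mutation, crossover, and selection mentioned above can be sketched in a few lines; this is the common "rand/1/bin" variant with illustrative control parameters, without the Voronoi parameterization or the groundwater flow model the paper couples it to:

        import numpy as np

        def differential_evolution(obj, bounds, pop_size=30, F=0.8, CR=0.9, n_gen=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            dim = len(lo)
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            cost = np.array([obj(x) for x in pop])
            for _ in range(n_gen):
                for i in range(pop_size):
                    others = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(others, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)        # mutation
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True                  # keep at least one mutant gene
                    trial = np.where(cross, mutant, pop[i])          # binomial crossover
                    f_trial = obj(trial)
                    if f_trial <= cost[i]:                           # greedy selection
                        pop[i], cost[i] = trial, f_trial
            best = np.argmin(cost)
            return pop[best], cost[best]

        # Illustrative use on a toy sum-of-squares misfit with four parameters
        best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)), bounds=[(-5, 5)] * 4)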

  7. Interactive orbital proximity operations planning system instruction and training guide

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1994-01-01

    This guide instructs users in the operation of a Proximity Operations Planning System. This system uses an interactive graphical method for planning fuel-efficient rendezvous trajectories in the multi-spacecraft environment of the space station and allows the operator to compose a multi-burn transfer trajectory between the initial chaser and target orbits. The available task time (window) of the mission is predetermined and the maneuver is subject to various operational constraints, such as departure, arrival, spatial, plume impingement, and en route passage constraints. The maneuvers are described in terms of the relative motion experienced in a space-station-centered coordinate system. Both in-orbital-plane and out-of-orbital-plane maneuvering are considered. A number of visual optimization aids are used to assist the operator in reaching fuel-efficient solutions. These optimization aids are based on Primer Vector theory. The visual feedback of trajectory shapes, operational constraints, and optimization functions, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool is an example of operator-assisted optimization of nonlinear cost functions.

  8. Optimal and Approximately Optimal Control Policies for Queues in Heavy Traffic,

    DTIC Science & Technology

    1987-03-01

    optimal and 'nearly optimal' control problems for the open queueing networks in heavy traffic of the type dealt with in the fundamental papers of Reiman ... then the covariance is precisely that obtained by Reiman [1] (with a different notation used there). ... Reference: [1] M. I. Reiman, "Open queueing networks in heavy traffic", Math. of Operations Research, 9, 1984, pp. 441-458.

  9. Pricing Resources in LTE Networks through Multiobjective Optimization

    PubMed Central

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid “user churn,” which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889

  10. Pricing resources in LTE networks through multiobjective optimization.

    PubMed

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid "user churn," which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution.
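
    Because the two objectives above (operator profit and user satisfaction) conflict, an NSGA-II-style search returns a Pareto front rather than a single optimum. A minimal sketch of the non-domination test such algorithms are built on (illustrative only; it omits NSGA-II's sorting, crowding distance, and genetic operators):

        import numpy as np

        def pareto_front(profit, satisfaction):
            """Indices of pricing schemes not dominated when maximizing both objectives."""
            pts = np.column_stack([profit, satisfaction])
            front = []
            for i, p in enumerate(pts):
                dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
                if not dominated:
                    front.append(i)
            return front

        # Illustrative candidate pricing schemes: the last one is dominated by the second
        print(pareto_front(profit=[10, 8, 9, 4], satisfaction=[0.2, 0.9, 0.6, 0.5]))  # -> [0, 1, 2]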

  11. Optimization of EB plant by constraint control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummel, H.K.; de Wit, G.B.C.; Maarleveld, A.

    1991-03-01

    Optimum plant operation can often be achieved by means of constraint control instead of model-based on-line optimization. This is because optimum operation is seldom at the top of the hill but usually at the intersection of constraints. This article describes the development of a constraint control system for a plant producing ethylbenzene (EB) by the Mobil/Badger Ethylbenzene Process. Plant optimization can be defined as the maximization of a profit function describing the economics of the plant. This function contains terms with product values, feedstock prices and operational costs. Maximization of the profit function can be obtained by varying relevant degrees of freedom in the plant, such as a column operating pressure or a reactor temperature. These degrees of freedom can be varied within the available operating margins of the plant.

  12. PDE Nozzle Optimization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Billings, Dana; Turner, James E. (Technical Monitor)

    2000-01-01

    Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a GA to find an optimum, fixed profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. Impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method has identified a nozzle profile that certainly is a candidate for optimum solution. The constraints on the generality of this possible solution remain to be clarified.

  13. Flow optimization study of a batch microfluidics PET tracer synthesizing device

    PubMed Central

    Elizarov, Arkadij M.; Meinhart, Carl; van Dam, R. Michael; Huang, Jiang; Daridon, Antoine; Heath, James R.; Kolb, Hartmuth C.

    2010-01-01

    We present numerical modeling and experimental studies of flow optimization inside a batch microfluidic micro-reactor used for synthesis of human-scale doses of Positron Emission Tomography (PET) tracers. Novel techniques are used for mixing within, and eluting liquid out of, the coin-shaped reaction chamber. Numerical solutions of the general incompressible Navier Stokes equations along with time-dependent elution scalar field equation for the three dimensional coin-shaped geometry were obtained and validated using fluorescence imaging analysis techniques. Utilizing the approach presented in this work, we were able to identify optimized geometrical and operational conditions for the micro-reactor in the absence of radioactive material commonly used in PET related tracer production platforms as well as evaluate the designed and fabricated micro-reactor using numerical and experimental validations. PMID:21072595
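
    The governing equations behind the numerical modeling above are the incompressible Navier-Stokes equations together with a convection-diffusion equation for the eluted scalar c (written here in generic form; the chamber geometry and boundary conditions are those of the paper):

        \nabla\cdot\mathbf{u}=0,\qquad
        \rho\Bigl(\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}\Bigr)
        =-\nabla p+\mu\,\nabla^{2}\mathbf{u},\qquad
        \frac{\partial c}{\partial t}+\mathbf{u}\cdot\nabla c=D\,\nabla^{2}c .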

  14. Subthreshold SPICE Model Optimization

    NASA Astrophysics Data System (ADS)

    Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy

    2011-04-01

    The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately model the saturation region. This is fine for most users, but when operating devices in the subthreshold region they are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data are collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models the circuit designer is able to simulate circuit designs for the subthreshold region accurately.
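
    For context, in the subthreshold (weak-inversion) region the drain current depends exponentially on the gate overdrive rather than quadratically, which is why models fitted only to saturation data extrapolate poorly there. The standard first-order expression (generic textbook form, not the foundry's model equations):

        I_{D}\;\approx\;I_{0}\,\frac{W}{L}\,
        \exp\!\Bigl(\frac{V_{GS}-V_{TH}}{n\,V_{T}}\Bigr)
        \Bigl(1-e^{-V_{DS}/V_{T}}\Bigr),
        \qquad V_{T}=\frac{kT}{q},

    where n is the subthreshold slope factor, one of the parameters such an extraction targets.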

  15. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
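
    The optimality criteria compared above act on the information matrix of the field-test item's parameters; as a reminder of the standard definitions (generic, not specific to the paper's Bayesian/MCMC formulation):

        D\text{-optimality:}\;\max\;\det \mathbf{I}(\boldsymbol{\theta}),
        \qquad
        A\text{-optimality:}\;\min\;\operatorname{tr}\bigl[\mathbf{I}(\boldsymbol{\theta})^{-1}\bigr],
        \qquad
        c\text{-optimality:}\;\min\;\mathbf{c}^{\mathsf{T}}\mathbf{I}(\boldsymbol{\theta})^{-1}\mathbf{c},

    with A-optimality recovered from c-optimality by summing over the coordinate directions.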

  16. Optimal area of lateral mass mini-screws implanted in plated cervical laminoplasty: a radiography anatomy study.

    PubMed

    Chen, Hua; Li, Huibo; Deng, Yuxiao; Rong, Xin; Gong, Quan; Li, Tao; Song, Yueming; Liu, Hao

    2017-04-01

    Lateral mass mini-screws used in plated cervical laminoplasty might penetrate into facet joints. The objective is to observe this complication incidence and to identify the optimal areas for 5- and 7-mm-long mini-screws to implant on lateral mass. 47 patients who underwent plated cervical laminoplasty were included. The optimal area for mini-screws implanting was set according to pre-operative 3D CT reconstruction data. Then, each posterior-lateral mass surface was divided into three regions: 7-mm region, 5-mm region, and dangerous area. The mini-screw implanted region was recorded. Post-operative CT images were used to identify whether the mini-screws penetrated into facet joints. 235 mini-plates and 470 lateral mass mini-screws were used in the study. 117 (24.9%) mini-screws penetrated 88 (37.4%) facet joints. The 5-mm-long mini-screw optimal area occupied the upper 72, 65, 65, 64, and 65 % area of the posterior-lateral mass surface for C3-7, while the 7-mm-long mini-screw optimal area encompassed the upper 54, 39, 40, 33, and 32 %. Only 7-mm-long mini-screws were used to fix the plate to the lateral mass. 4 of 240 mini-screws in 7-mm region, 67 of the 179 mini-screws in 5-mm region, and 46 of the 51 mini-screws in dangerous region penetrated into the facet joint. The differences in the rate of facet joint penetration related to region were statistically significant (P < 0.001). The facet joint destruction by mini-screws was not a rare complication in plated cervical laminoplasty. The optimal areas we proposed may help guide the mini-screw implantation positions.

  17. Optimal linear-quadratic control of coupled parabolic-hyperbolic PDEs

    NASA Astrophysics Data System (ADS)

    Aksikas, I.; Moghadam, A. Alizadeh; Forbes, J. F.

    2017-10-01

    This paper focuses on the optimal control design for a system of coupled parabolic-hyperbolic partial differential equations by using the infinite-dimensional state-space description and the corresponding operator Riccati equation. Some dynamical properties of the coupled system of interest are analysed to guarantee the existence and uniqueness of the solution of the linear-quadratic (LQ)-optimal control problem. A state LQ-feedback operator is computed by solving the operator Riccati equation, which is converted into a set of algebraic and differential Riccati equations, thanks to the eigenvalues and the eigenvectors of the parabolic operator. The results are applied to a non-isothermal packed-bed catalytic reactor. The LQ-optimal controller designed in the early portion of the paper is implemented for the original nonlinear model. Numerical simulations are performed to show the controller performances.
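
    The LQ design above rests on the standard infinite-dimensional Riccati formalism: with state operator A, input operator B, and weighting operators Q and R, the stabilizing self-adjoint solution P of the operator Riccati equation gives the feedback law (generic form; the paper reduces it to algebraic and differential Riccati equations through the parabolic operator's eigenstructure):

        A^{*}P+PA-PBR^{-1}B^{*}P+Q=0,
        \qquad
        u(t)=-R^{-1}B^{*}P\,x(t).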

  18. Parameter Optimization of PAL-XFEL Injector

    NASA Astrophysics Data System (ADS)

    Lee, Jaehyun; Ko, In Soo; Han, Jang-Hui; Hong, Juho; Yang, Haeryong; Min, Chang Ki; Kang, Heung-Sik

    2018-05-01

    A photoinjector is used as the electron source to generate a high peak current and low emittance beam for an X-ray free electron laser (FEL). The beam emittance is one of the critical parameters to determine the FEL performance together with the slice energy spread and the peak current. The Pohang Accelerator Laboratory X-ray Free Electron Laser (PAL-XFEL) was constructed in 2015, and the beam commissioning was carried out in spring 2016. The injector is running routinely for PAL-XFEL user operation. The operational parameters of the injector have been optimized experimentally, and these are somewhat different from the originally designed ones. Therefore, we study numerically the injector parameters based on the empirically optimized parameters and review the present operating condition.

  19. [Levers in Primary Health Care - Identifying Strategic Success Factors for Improved Primary Care in Upper Austria].

    PubMed

    Kriegel, J; Rebhandl, E; Reckwitz, N; Hockl, W

    2016-12-01

    Current and projected general practitioner (GP) and primary care in Austria shows structural and process inadequacies in the quality as well as assurance of healthcare supply. The aim is therefore to develop solution- and patient-oriented measures that take patient-related requirements and medical perspectives into account. Using an effect matrix, subjective expert and user priorities were ascertained, cause and effect relationships were examined, and an expanded circle of success for the optimization of GP and primary care in Upper Austria was developed. Through this, the relevant levers for target-oriented development and optimization of the complex system of GP and primary care in Upper Austria were identified; these are training to become general practitioners, entrepreneurs as well as management and coordination. It is necessary to further adapt the identified levers conceptually and operationally in a targeted approach. This is to be achieved by means of the primary health care (PHC) concept as well as management tools and information and communication technologies (ICT) associated with it. © Georg Thieme Verlag KG Stuttgart · New York.

  20. Efficient dynamic optimization of logic programs

    NASA Technical Reports Server (NTRS)

    Laird, Phil

    1992-01-01

    A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.

  1. Performance optimization of an MHD generator with physical constraints

    NASA Technical Reports Server (NTRS)

    Pian, C. C. P.; Seikel, G. R.; Smith, J. M.

    1979-01-01

    A technique has been described which optimizes the power output of a Faraday MHD generator operating under a prescribed set of electrical and magnetic constraints. The method does not rely on complicated numerical optimization techniques. Instead, the magnetic field and the electrical loading are adjusted at each streamwise location such that the resultant generator design operates at the most limiting of the cited stress levels. The simplicity of the procedure makes it ideal for optimizing generator designs for system analysis studies of power plants. The resultant locally optimum channel designs are, however, not necessarily the global optimum designs. The results of generator performance calculations are presented for an approximately 2000 MWe size plant. The difference between the maximum-power generator design and the optimal design which maximizes net MHD power is described. The sensitivity of the generator performance to the various operational parameters is also presented.

  2. A Data Filter for Identifying Steady-State Operating Points in Engine Flight Data for Condition Monitoring Applications

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Litt, Jonathan S.

    2010-01-01

    This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
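
    A minimal batch-form sketch of the generic detection logic described above (window length, parameter names, and standard-deviation limits are illustrative; the paper's filter runs on-line on streaming data and adds engine-specific outlier logic on top of this):

        import numpy as np

        def steady_state_points(data, window=50, std_limits=None):
            """Scan non-overlapping windows of recorded parameters (dict of name -> 1-D array)
            and archive the window means wherever every parameter's standard deviation
            falls below its limit, i.e. the engine is assumed to be at steady state."""
            std_limits = std_limits or {}
            names = list(data)
            n = len(np.asarray(data[names[0]]))
            points = []
            for start in range(0, n - window + 1, window):
                seg = {k: np.asarray(v)[start:start + window] for k, v in data.items()}
                if all(seg[k].std() <= std_limits.get(k, np.inf) for k in names):
                    points.append({k: float(seg[k].mean()) for k in names})
            return points

        # Illustrative use: a ramp (transient) followed by a hold (steady state)
        flight = {"N1_speed": np.r_[np.linspace(80, 95, 100), np.full(100, 95.0)],
                  "EGT": np.r_[np.linspace(600, 700, 100), np.full(100, 700.0)]}
        archive = steady_state_points(flight, window=50, std_limits={"N1_speed": 0.5, "EGT": 2.0})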

  3. Advanced optimal design concepts for composite material aircraft repair

    NASA Astrophysics Data System (ADS)

    Renaud, Guillaume

    The application of an automated optimization approach for bonded composite patch design is investigated. To do so, a finite element computer analysis tool to evaluate patch design quality was developed. This tool examines both the mechanical and the thermal issues of the problem. The optimized shape is obtained with a bi-quadratic B-spline surface that represents the top surface of the patch. Additional design variables corresponding to the ply angles are also used. Furthermore, a multi-objective optimization approach was developed to treat multiple and uncertain loads. This formulation aims at designing according to the most unfavorable mechanical and thermal loads. The problem of finding the optimal patch shape for several situations is addressed. The objective is to minimize a stress component at a specific point in the host structure (plate) while ensuring acceptable stress levels in the adhesive. A parametric study is performed in order to identify the effects of various shape parameters on the quality of the repair and its optimal configuration. The effects of mechanical loads and service temperature are also investigated. Two bonding methods are considered, as they imply different thermal histories. It is shown that the proposed techniques are effective and inexpensive for analyzing and optimizing composite patch repairs. It is also shown that thermal effects should not only be present in the analysis, but that they play a paramount role on the resulting quality of the optimized design. In all cases, the optimized configuration results in a significant reduction of the desired stress level by deflecting the loads away from rather than over the damage zone, as is the case with standard designs. Furthermore, the automated optimization ensures the safety of the patch design for all considered operating conditions.

  4. Nutrition in peri-operative esophageal cancer management.

    PubMed

    Steenhagen, Elles; van Vulpen, Jonna K; van Hillegersberg, Richard; May, Anne M; Siersema, Peter D

    2017-07-01

    Nutritional status and dietary intake are increasingly recognized as essential areas in esophageal cancer management. Nutritional management of esophageal cancer is a continuously evolving field and comprises an interesting area for scientific research. Areas covered: This review encompasses the current literature on nutrition in the pre-operative, peri-operative, and post-operative phases of esophageal cancer. Both established interventions and potential novel targets for nutritional management are discussed. Expert commentary: To ensure an optimal pre-operative status and to reduce peri-operative complications, it is key to assess nutritional status in all pre-operative esophageal cancer patients and to apply nutritional interventions accordingly. Since esophagectomy results in a permanent anatomical change, a special focus on nutritional strategies is needed in the post-operative phase, including early initiation of enteral feeding, nutritional interventions for post-operative complications, and attention to long-term nutritional intake and status. Nutritional aspects of pre-optimization and peri-operative management should be incorporated in novel Enhanced Recovery After Surgery programs for esophageal cancer.

  5. IRQN award paper: Operational rounds: a practical administrative process to improve safety and clinical services in radiology.

    PubMed

    Donnelly, Lane F; Dickerson, Julie M; Lehkamp, Todd W; Gessner, Kevin E; Moskovitz, Jay; Hutchinson, Sally

    2008-11-01

    As part of a patient safety program in the authors' department of radiology, operational rounds have been instituted. This process consists of radiology leaders' visiting imaging divisions at the site of imaging and discussing frontline employees' concerns about patient safety, the quality of care, and patient and family satisfaction. Operational rounds are executed at a time to optimize the number of attendees. Minutes that describe the issues identified, persons responsible for improvement, and updated improvement plan status are available to employees online. Via this process, multiple patient safety and other issues have been identified and remedied. The authors believe that the process has improved patient safety, the quality of care, and the efficiency of operations. Since the inception of the safety program, the mean number of days between serious safety events involving radiology has doubled. The authors review the background around such walk rounds, describe their particular program, and give multiple illustrative examples of issues identified and improvement plans put in place.

  6. Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems.

    PubMed

    Scholze, Sebastian; Barata, Jose; Stokic, Dragan

    2017-02-24

    Highly flexible manufacturing systems require continuous run-time (self-)optimization of processes with respect to diverse parameters, e.g., efficiency, availability, energy consumption, etc. A promising approach for achieving (self-)optimization in manufacturing systems is the use of the context-sensitivity approach based on data streaming from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information to achieve context sensitivity. Cyber-physical systems can be seen as complex intelligent sensors providing the data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-)optimization of discrete flexible manufacturing systems, by making use of cyber-physical systems integrated in manufacturing systems/processes. A generic approach for context sensitivity, based on self-learning algorithms, is proposed, aimed at a variety of manufacturing systems. The new solution encompasses a run-time context extractor and optimizer. Based on the self-learning module, both the context extractor and the optimizer are continuously learning and improving their performance. The solution follows Service Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes.

  7. Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems

    PubMed Central

    Scholze, Sebastian; Barata, Jose; Stokic, Dragan

    2017-01-01

    Highly flexible manufacturing systems require continuous run-time (self-) optimization of processes with respect to diverse parameters, e.g., efficiency, availability, energy consumption etc. A promising approach for achieving (self-) optimization in manufacturing systems is the use of context sensitivity based on data streaming from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information to achieve context sensitivity. Cyber-physical systems can be seen as complex intelligent sensors providing the data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-) optimization of discrete flexible manufacturing systems, by making use of cyber-physical systems integrated in manufacturing systems/processes. A generic approach for context sensitivity, based on self-learning algorithms, is proposed that is aimed at a variety of manufacturing systems. The new solution encompasses a run-time context extractor and an optimizer. Based on the self-learning module, both the context extractor and the optimizer continuously learn and improve their performance. The solution follows Service-Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes. PMID:28245564

  8. Unlocking Flexibility: Integrated Optimization and Control of Multienergy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Mancarella, Pierluigi; Monti, Antonello

    Electricity, natural gas, water, and district heating/cooling systems are predominantly planned and operated independently. However, it is increasingly recognized that integrated optimization and control of such systems at multiple spatiotemporal scales can bring significant socioeconomic, operational efficiency, and environmental benefits. Accordingly, the concept of the multi-energy system is gaining considerable attention, with the overarching objectives of 1) uncovering fundamental gains (and potential drawbacks) that emerge from the integrated operation of multiple systems and 2) developing holistic yet computationally affordable optimization and control methods that maximize operational benefits, while 3) acknowledging intrinsic interdependencies and quality-of-service requirements for each provider.

  9. An optimized magnetite microparticle-based phosphopeptide enrichment strategy for identifying multiple phosphorylation sites in an immunoprecipitated protein.

    PubMed

    Huang, Yi; Shi, Qihui; Tsung, Chia-Kuang; Gunawardena, Harsha P; Xie, Ling; Yu, Yanbao; Liang, Hongjun; Yang, Pengyuan; Stucky, Galen D; Chen, Xian

    2011-01-01

    To further improve the selectivity and throughput of phosphopeptide analysis for the samples from real-time cell lysates, here we demonstrate a highly efficient method for phosphopeptide enrichment via newly synthesized magnetite microparticles and the concurrent mass spectrometric analysis. The magnetite microparticles show excellent magnetic responsivity and redispersibility for a quick enrichment of those phosphopeptides in solution. The selectivity and sensitivity of magnetite microparticles in phosphopeptide enrichment are first evaluated by a known mixture containing both phosphorylated and nonphosphorylated proteins. Compared with the titanium dioxide-coated magnetic beads commercially available, our magnetite microparticles show a better specificity toward phosphopeptides. The selectively-enriched phosphopeptides from tryptic digests of β-casein can be detected down to 0.4 fmol μl⁻¹, whereas the recovery efficiency is approximately 90% for monophosphopeptides. This magnetite microparticle-based affinity technology with optimized enrichment conditions is then immediately applied to identify all possible phosphorylation sites on a signal protein isolated in real time from a stress-stimulated mammalian cell culture. A large fraction of peptides eluted from the magnetic particle enrichment step were identified and characterized as either single- or multiphosphorylated species by tandem mass spectrometry. With their high efficiency and utility for phosphopeptide enrichment, the magnetite microparticles hold great potential in the phosphoproteomic studies on real-time samples from cell lysates. Published by Elsevier Inc.

  10. Optimized positioning of autonomous surgical lamps

    NASA Astrophysics Data System (ADS)

    Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel

    2017-03-01

    We consider the problem of automatically finding optimal positions of surgical lamps throughout the whole surgical procedure, where we assume that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually no single optimal solution for such problems, but a set of solutions that forms a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization to select the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures not only the surgical site but also the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data are available for scientific use. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
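
    The record above does not spell out how the Pareto set is extracted or how one member is selected, so the following is only a minimal sketch of that step: candidate lamp configurations are scored on two conflicting objectives (hypothetical "illumination loss" and "lamp travel" values), the non-dominated set is filtered out, and one member is picked with a surgeon-preference weight standing in for the meta-optimized presets.

```python
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows (all objectives minimized)."""
    n = scores.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Row j dominates row i if it is <= in every objective and < in at least one.
        dominated = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        dominated[i] = False
        if dominated.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(0)
# Hypothetical candidate lamp configurations scored on two conflicting objectives:
# column 0 = illumination loss at the surgical site, column 1 = required lamp travel.
scores = rng.random((200, 2))
front = scores[pareto_front(scores)]

# Pick one Pareto solution with a preference weight (0 = only lighting matters).
w_movement = 0.3
best = front[np.argmin((1 - w_movement) * front[:, 0] + w_movement * front[:, 1])]
print(front.shape, best)
```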

  11. Identifying Cost-Effective Water Resources Management Strategies: Watershed Management Optimization Support Tool (WMOST)

    EPA Science Inventory

    The Watershed Management Optimization Support Tool (WMOST) is a public-domain software application designed to aid decision makers with integrated water resources management. The tool allows water resource managers and planners to screen a wide-range of management practices for c...

  12. Orbital operations study. Appendix B: Operational procedures

    NASA Technical Reports Server (NTRS)

    Galvin, D. M.; Mattson, H. L.; True, D. M.; Anderson, N. R.; Mehrbach, E.; Gianformaggio, A.; Steinwachs, W. L.; Turkel, S. H.

    1972-01-01

    Operational procedures for each alternate approach for each interfacing activity of the orbital operations study are presented. The applicability of the procedures to interfacing element pairs is identified.

  13. Novel phased isolation ditch system for enhanced nutrient removal and its optimal operating strategy.

    PubMed

    Hong, Ki-Ho; Chang, Duk; Hur, Joon-Moo; Han, Sang-Bae

    2003-01-01

    The phased isolation ditch system with an intrachannel clarifier is a simplified, novel oxidation ditch system that enhances simultaneous biological nitrogen and phosphorus removal from municipal wastewater. The system employs two ditches with an intrachannel clarifier and eliminates the external final clarifier, the additional pre-anaerobic reactor, and the recycling of sludge and nitrified effluent. Separation of anoxic, anaerobic, and aerobic phases can be accomplished by alternating flow and intermittent aeration. A pilot-scale system operated at HRTs of 10-21 h, SRTs of 15-41 days, and cycle times of 2-8 h achieved BOD, TN, and TP removals as high as 88-97, 70-84, and 65-90%, respectively, at mixed liquor temperatures above 10 degrees C. As the SRTs became longer, the effluent TN decreased dramatically, whereas the effluent TP increased. Higher nitrogen removal was accomplished at shorter cycle times, while better phosphorus removal was achieved at longer cycle times. Optimal operating strategies that maximized performance and balanced the best nitrogen and phosphorus removals comprised HRTs of 10-14 h, SRTs of 25-30 days, and a cycle time of 4 h at mixed liquor temperatures above 10 degrees C. Thus, complete phase separation within a cycle, maximizing phosphorus release and uptake as well as nitrification and denitrification, was accomplished by scheduling alternating flow and intermittent aeration in the simplified process scheme. In particular, temporal phase separation for phosphorus release without an additional anaerobic reactor was successfully accomplished during the anaerobic period without nitrate interference or carbon limitation.

  14. Integrated Data-Archive and Distributed Hydrological Modelling System for Optimized Dam Operation

    NASA Astrophysics Data System (ADS)

    Shibuo, Yoshihiro; Jaranilla-Sanchez, Patricia Ann; Koike, Toshio

    2013-04-01

    In 2012, typhoon Bopha, which passed through the southern part of the Philippines, devastated the nation, leaving hundreds dead and causing significant destruction across the country. Deadly cyclone-related events occur almost every year in the region, and such extremes are expected to increase in both frequency and magnitude around Southeast Asia during the course of global climate change. Our ability to confront such hazardous events is limited by the available engineering infrastructure and the performance of weather prediction. One countermeasure strategy is, for instance, early release of reservoir water (lowering the dam water level) during the flood season to protect the downstream region from an impending flood. However, over-release of reservoir water adversely affects the regional economy by forfeiting water resources that still have value for power generation and for agricultural and industrial water use. Furthermore, accurate precipitation forecasting is itself a difficult task, because the chaotic nature of the atmosphere introduces uncertainty into model predictions over time. Under these circumstances, we present a novel approach to optimizing the conflicting objectives of preventing flood damage via a priori dam release while sustaining sufficient water supply during predicted storm events. By evaluating the forecast performance of the Meso-Scale Model Grid Point Value against observed rainfall, uncertainty in model prediction is probabilistically taken into account and then applied to the next GPV issuance to generate ensemble rainfalls. The ensemble rainfalls drive the coupled land-surface and distributed hydrological model to derive the ensemble flood forecast. With dam status information also taken into account, our integrated system estimates the most desirable a priori dam release through the shuffled complex evolution algorithm. The strength of the optimization system is further magnified by the online link to the Data Integration and
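
    As an illustration of the trade-off described above, the sketch below chooses a single pre-storm release volume that minimizes an expected cost over an ensemble of forecast inflows. The penalty weights, reservoir numbers, and gamma-distributed ensemble are all invented; the actual system couples a land-surface/hydrological model with the shuffled complex evolution algorithm rather than a one-dimensional search.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Hypothetical setup: current storage and an ensemble of forecast inflow volumes (10^6 m3).
capacity = 100.0
storage = 80.0
ensemble_inflow = rng.gamma(shape=2.0, scale=15.0, size=50)  # stands in for the ensemble flood forecast

def expected_cost(release):
    """Expected cost of a pre-storm release: spill/flood penalty plus value of water given up."""
    post_storage = storage - release + ensemble_inflow
    spill = np.maximum(post_storage - capacity, 0.0)  # water that cannot be stored -> flood risk
    flood_penalty = 10.0 * spill                      # assumed penalty weight
    water_loss = 1.0 * release                        # assumed value of the released water
    return float(np.mean(flood_penalty + water_loss))

res = minimize_scalar(expected_cost, bounds=(0.0, storage), method="bounded")
print(f"optimal pre-release: {res.x:.1f}, expected cost: {res.fun:.1f}")
```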

  15. The Body of Knowledge & Content Framework. Identifying the Important Knowledge Required for Productive Performance of a Plastics Machine Operator. Blow Molding, Extrusion, Injection Molding, Thermoforming.

    ERIC Educational Resources Information Center

    Society of the Plastics Industry, Inc., Washington, DC.

    Designed to guide training and curriculum development to prepare machine operators for the national certification exam, this publication identifies the important knowledge required for productive performance by a plastics machine operator. Introductory material discusses the rationale for a national standard, uses of the Body of Knowledge,…

  16. Stillwater Hybrid Geo-Solar Power Plant Optimization Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Daniel S.; Mines, Gregory L.; Turchi, Craig S.

    2015-09-02

    The Stillwater Power Plant is the first hybrid plant in the world able to bring together a medium-enthalpy geothermal unit with solar thermal and solar photovoltaic systems. Solar field and power plant models have been developed to predict the performance of the Stillwater geothermal / solar-thermal hybrid power plant. The models have been validated using operational data from the Stillwater plant. A preliminary effort to optimize performance of the Stillwater hybrid plant using optical characterization of the solar field has been completed. The Stillwater solar field optical characterization involved measurement of mirror reflectance, mirror slope error, and receiver position error. The measurements indicate that the solar field may generate 9% less energy than the design value if an appropriate tracking offset is not employed. A perfect tracking offset algorithm may be able to boost the solar field performance by about 15%. The validated Stillwater hybrid plant models were used to evaluate hybrid plant operating strategies including turbine IGV position optimization, ACC fan speed and turbine IGV position optimization, turbine inlet entropy control using optimization of multiple process variables, and mixed working fluid substitution. The hybrid plant models predict that each of these operating strategies could increase net power generation relative to the baseline Stillwater hybrid plant operations.

  17. Optimization of joint energy micro-grid with cold storage

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Luo, Simin; Tian, Yan; Chen, Xianda; Xiong, Botao; Zhou, Bowen

    2018-02-01

    To accommodate distributed photovoltaic (PV) curtailment, to make full use of the joint energy micro-grid with cold storage, and to reduce the high operating costs, the economic dispatch of the joint energy micro-grid load is particularly important. Considering the different electricity prices during peak and valley periods, an optimization model is established that takes the minimum production costs and PV curtailment fluctuations as the objectives. The linear weighted sum method and a genetic-taboo Particle Swarm Optimization (PSO) algorithm are used to solve the optimization model and obtain the optimal power supply output. Taking the garlic market in Henan as an example, the simulation results show that, considering distributed PV and different prices in different time periods, the optimization strategies are able to reduce the operating costs and accommodate PV power efficiently.
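
    The record names a linear weighted sum combined with a genetic-taboo PSO; the sketch below uses a plain particle swarm on a weighted-sum objective as a simplified stand-in. The genetic/taboo hybridization and the real micro-grid model are omitted, and the PV profile, load, tariff, and weights are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 24-h problem: decide grid purchase per hour; PV output and prices are given.
hours = 24
pv = np.clip(np.sin(np.linspace(0, np.pi, hours)), 0, None) * 50.0   # assumed PV profile (kW)
load = np.full(hours, 40.0)                                          # assumed cooling/storage load (kW)
price = np.where((np.arange(hours) >= 8) & (np.arange(hours) < 22), 0.12, 0.05)  # peak/valley tariff

def objective(grid):
    """Weighted sum of purchase cost and PV curtailment for a 24-element purchase vector."""
    grid = np.clip(grid, 0.0, None)
    curtail = np.maximum(pv + grid - load, 0.0)      # PV energy that cannot be absorbed
    cost = np.sum(price * grid)
    w1, w2 = 1.0, 0.5                                # assumed weights of the linear weighted sum
    return w1 * cost + w2 * np.sum(curtail)

# Minimal particle swarm optimization.
n_particles, n_iter = 40, 200
x = rng.uniform(0, 60, (n_particles, hours))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 60.0)
    f = np.array([objective(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best weighted objective:", objective(gbest))
```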

  18. Optimization in fractional aircraft ownership

    NASA Astrophysics Data System (ADS)

    Septiani, R. D.; Pasaribu, H. M.; Soewono, E.; Fayalita, R. A.

    2012-05-01

    Fractional Aircraft Ownership is a new concept in flight ownership management in which each individual or corporation may own a fraction of an aircraft. In this system, the owners have the privilege to schedule their flights according to their needs. A fractional management company (FMC) manages all aspects of aircraft operations, including utilization of the FMC's aircraft in combination with outsourced aircraft. This gives the owners the right to enjoy the benefits of private aviation. However, an FMC may have complicated business requirements that neither commercial airlines nor charter airlines face. Here, optimization models are constructed to minimize the number of aircraft in order to maximize the profit and to minimize the daily operating cost. In this paper, three kinds of demand scenarios are made to represent different flight operations from different types of fractional owners. The problems are formulated as optimizations of profit and daily operating cost to find the optimum flight assignments satisfying the owners' weekly and daily demands, respectively. Numerical results are obtained by the Genetic Algorithm method.

  19. Optimization of ground-water withdrawal at the old O-Field area, Aberdeen Proving Ground, Maryland

    USGS Publications Warehouse

    Banks, William S.L.; Dillow, Jonathan J.A.

    2001-01-01

    The U.S. Army disposed of chemical agents, laboratory materials, and unexploded ordnance at the Old O-Field landfill at Aberdeen Proving Ground, Maryland, beginning prior to World War II and continuing until at least the 1950s. Soil, ground water, surface water, and wetland sediments in the Old O-Field area were contaminated by the disposal of these materials. The site is in the Atlantic Coastal Plain, and is characterized by a complex series of Pleistocene and Holocene sediments formed in various fluvial, estuarine, and marine-marginal hydrogeologic environments. A previously constructed transient finite-difference ground-water-flow model was used to simulate ground-water flow and the effects of a pump-and-treat remediation system designed to prevent contaminated ground water from flowing into Watson Creek (a tidal estuary and a tributary to the Gunpowder River). The remediation system consists of 14 extraction wells located between the Old O-Field landfill and Watson Creek. Linear programming techniques were applied to the results of the flow-model simulations to identify optimal pumping strategies for the remediation system. The optimal management objective is to minimize total withdrawal from the water-table aquifer, while adhering to the following constraints: (1) ground-water flow from the landfill should be prevented from reaching Watson Creek, (2) no extraction pump should be operated at a rate that exceeds its capacity, and (3) no extraction pump should be operated at a rate below its minimum capacity, the minimum rate at which an Old O-Field pump can function. Water withdrawal is minimized by varying the rate and frequency of pumping at each of the 14 extraction wells over time. This minimizes the costs of both pumping and water treatment, thus providing the least-cost remediation alternative while simultaneously meeting all operating constraints. The optimal strategy identified using this objective and constraint set involved operating 13 of the 14
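
    A minimal version of the linear program described above can be written with scipy, with made-up capture coefficients standing in for the responses derived from the ground-water-flow model, and with every well assumed to stay on (the real minimum-rate/off decision would need integer variables).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n_wells = 14

# Hypothetical "response" coefficients from the flow model: contribution of each
# well's pumping rate (m3/d) to capturing landfill-derived flow before Watson Creek.
capture_coef = rng.uniform(0.2, 0.8, n_wells)
required_capture = 400.0                     # assumed capture target (m3/d)

# Minimize total withdrawal: c @ q with c = 1 for every well.
c = np.ones(n_wells)
# Capture constraint: capture_coef @ q >= required_capture  ->  -capture_coef @ q <= -required_capture
A_ub = -capture_coef.reshape(1, -1)
b_ub = np.array([-required_capture])
# Pump limits: each well between an assumed minimum and maximum operating rate.
bounds = [(20.0, 150.0)] * n_wells

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("total withdrawal (m3/d):", res.fun)
print("per-well rates:", np.round(res.x, 1))
```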

  20. On the physical operation and optimization of the p-GaN gate in normally-off GaN HEMT devices

    NASA Astrophysics Data System (ADS)

    Efthymiou, L.; Longobardi, G.; Camuso, G.; Chien, T.; Chen, M.; Udrea, F.

    2017-03-01

    In this study, an investigation is undertaken to determine the effect of gate design parameters on the on-state characteristics (threshold voltage and gate turn-on voltage) of pGaN/AlGaN/GaN high electron mobility transistors (HEMTs). Design parameters considered are pGaN doping and gate metal work function. The analysis considers the effects of variations on these parameters using a TCAD model matched with experimental results. A better understanding of the underlying physics governing the operation of these devices is achieved with a view to enable better optimization of such gate designs.

  1. Optimization of operating parameters for gas-phase photocatalytic splitting of H2S by novel vermiculate packed tubular reactor.

    PubMed

    Preethi, V; Kanmani, S

    2016-10-01

    Hydrogen production by gas-phase photocatalytic splitting of Hydrogen Sulphide (H2S) was investigated on four semiconductor photocatalysts: CuGa1.6Fe0.4O2, ZnFe2O3, (CdS + ZnS)/Fe2O3 and Ce/TiO2. The CdS- and ZnS-coated core shell particles (CdS + ZnS)/Fe2O3 show the highest rate of hydrogen (H2) production under optimized conditions. A packed bed tubular reactor was used to study the performance of the prepared photocatalysts. Selection of the best packing material is key to maximum removal efficiency. Cheap, lightweight and easily adsorbing vermiculate materials were used as a novel packing material and were found to be effective in splitting H2S. The effects of various operating parameters, such as flow rate, sulphide concentration, catalyst dosage and light irradiation, were tested and optimized for a maximum H2 conversion of 92% from industrial waste H2S. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Optimization of Multiple and Multipurpose Reservoir System Operations by Using Matrix Structure (Case Study: Karun and Dez Reservoir Dams)

    PubMed Central

    Othman, Faridah; Taghieh, Mahmood

    2016-01-01

    Optimal operation of water resources in multiple and multipurpose reservoirs is very complicated. This is because of the number of dams, each dam's location (series or parallel), conflicts in objectives and the stochastic nature of the inflow of water into the system. In this paper, performance optimization of the system of Karun and Dez reservoir dams has been studied and investigated with the purposes of hydroelectric energy generation and meeting water demand across 6 dams. On the Karun River, 5 dams have been built in a series arrangement, and the Dez dam has been built parallel to those 5 dams. One of the main achievements of this research is the implementation of the hydroelectric energy production structure as a matrix function in MATLAB software. The results show that, in the weighting method algorithm, the objective function structure gives hydroelectric energy generation a more important role than water supply. Nonetheless, by implementing the ε-constraint method algorithm, we can both increase hydroelectric power generation and supply around 85% of agricultural and industrial demands. PMID:27248152
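
    A toy illustration of the ε-constraint idea mentioned above: maximize energy production as the primary objective while the water-supply objective is turned into a constraint (here, at least 85% of each month's demand). All coefficients are invented and bear no relation to the Karun/Dez system model.

```python
import numpy as np
from scipy.optimize import linprog

months = 12
total_water = 1000.0                          # assumed available release volume for the year
demand = np.full(months, 70.0)                # assumed monthly agricultural/industrial demand
energy_coef = np.linspace(0.8, 1.2, months)   # assumed energy per unit release (varies with head)
turbine_cap = 150.0                           # assumed maximum monthly release through the turbines

# Primary objective: maximize total energy  ->  minimize -energy_coef @ r
c = -energy_coef
# Shared-resource constraint: sum(r) <= total_water
A_ub = np.ones((1, months))
b_ub = np.array([total_water])
# Epsilon constraint on the second objective: supply at least 85% of each month's demand.
epsilon = 0.85
bounds = [(epsilon * demand[m], turbine_cap) for m in range(months)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("total energy:", -res.fun)
print("monthly releases:", np.round(res.x, 1))
```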

  3. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    PubMed

    OʼHara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.
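
    As a hint of what such a simulation looks like in code, the sketch below is a minimal discrete-event model of ICU bed occupancy using the simpy library, with invented arrival and length-of-stay rates; a real planning model would add staffing, patient acuity, and layout effects.

```python
import random
import simpy

random.seed(4)

# Hypothetical ICU sizing question: how often do arriving patients find no free bed?
N_BEDS = 10
ARRIVAL_MEAN_H = 6.0        # assumed mean inter-arrival time (hours)
LOS_MEAN_H = 48.0           # assumed mean length of stay (hours)
blocked = admitted = 0

def patient(env, beds):
    """A patient either occupies a bed for a random stay or is diverted if the unit is full."""
    global blocked, admitted
    if beds.count >= beds.capacity:      # all beds occupied on arrival
        blocked += 1
        return
    with beds.request() as bed:
        yield bed
        admitted += 1
        yield env.timeout(random.expovariate(1.0 / LOS_MEAN_H))

def arrivals(env, beds):
    """Generate patient arrivals with exponential inter-arrival times."""
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN_H))
        env.process(patient(env, beds))

env = simpy.Environment()
beds = simpy.Resource(env, capacity=N_BEDS)
env.process(arrivals(env, beds))
env.run(until=365 * 24)                 # simulate one year

print(f"admitted={admitted}, blocked={blocked}, blocking rate={blocked / (admitted + blocked):.2%}")
```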

  4. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

    Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables which motivates the use of gradient based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.

  5. Mission Design and Optimal Asteroid Deflection for Planetary Defense

    NASA Technical Reports Server (NTRS)

    Sarli, Bruno V.; Knittel, Jeremy M.; Englander, Jacob A.; Barbee, Brent W.

    2017-01-01

    Planetary defense is a topic of increasing interest for many reasons, which has been mentioned in "Vision and Voyages for Planetary Science in the Decade 2013-2022". However, perhaps one of the most significant rationales for asteroid studies is the number of close approaches that have been documented recently. A space mission with a planetary defense objective aims to deflect the threatening body as far as possible from Earth. The design of a mission that optimally deflects an asteroid has different challenges: speed, precision, and system trade-off. This work addresses such issues and develops a fast transcription of the problem that can be implemented into an optimization tool, which allows for a broader trade study of different mission concepts with a medium fidelity. Such work is suitable for a mission's preliminary study. It is shown, using the fictitious asteroid impact scenario 2017 PDC, that the complete tool is able to account for the orbit sensitivity to small perturbations and quickly optimize a deflection trajectory. The speed in which the tool operates allows for a trade study between the available hardware. As a result, key deflection dates and mission strategies are identified for the 2017 PDC.

  6. Mission Design and Optimal Asteroid Deflection for Planetary Defense

    NASA Technical Reports Server (NTRS)

    Sarli, Bruno V.; Knittel, Jeremy M.; Englander, Jacob A.; Barbee, Brent W.

    2017-01-01

    Planetary defense is a topic of increasing interest for many reasons, which has been mentioned in "Vision and Voyages for Planetary Science in the Decade 2013-2022". However, perhaps one of the most significant rationales for asteroid studies is the number of close approaches that have been documented recently. A space mission with a planetary defense objective aims to deflect the threatening body as far as possible from Earth. The design of a mission that optimally deflects an asteroid has different challenges: speed, precision, and system trade-off. This work addresses such issues and develops a fast transcription of the problem that can be implemented into an optimization tool, which allows for a broader trade study of different mission concepts with a medium fidelity. Such work is suitable for a mission's preliminary study. It is shown, using the fictitious asteroid impact scenario 2017 PDC, that the complete tool is able to account for the orbit sensitivity to small perturbations and quickly optimize a deflection trajectory. The speed in which the tool operates allows for a trade study between the available hardware. As a result, key deflection dates and mission strategies are identified for the 2017 PDC.

  7. Free-form Airfoil Shape Optimization Under Uncertainty Using Maximum Expected Value and Second-order Second-moment Strategies

    NASA Technical Reports Server (NTRS)

    Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multipoint methods.
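
    The contrast described above can be shown with a toy one-dimensional example: a made-up drag function of a single shape parameter is optimized once at a nominal operating point and once against the expected value over an assumed distribution of operating conditions. The expected-value design shifts toward the conditions that are actually most likely, which is the paper's central point.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy stand-in for airfoil performance: drag as a function of a single shape
# parameter x and the operating lift coefficient cl (both hypothetical).
def drag(x, cl):
    return 0.01 + (x - cl) ** 2          # lowest drag when the shape is "tuned" to cl

# Deterministic single-point design at the nominal condition.
cl_nominal = 0.5
x_point = minimize_scalar(lambda x: drag(x, cl_nominal), bounds=(0, 1), method="bounded").x

# Statistical design: minimize expected drag over the likelihood of operating conditions.
rng = np.random.default_rng(5)
cl_samples = rng.normal(0.6, 0.15, 10_000)    # assumed distribution of actual operating conditions
x_expected = minimize_scalar(lambda x: np.mean(drag(x, cl_samples)),
                             bounds=(0, 1), method="bounded").x

print(f"single-point design x={x_point:.3f}, expected drag={np.mean(drag(x_point, cl_samples)):.4f}")
print(f"expected-value design x={x_expected:.3f}, expected drag={np.mean(drag(x_expected, cl_samples)):.4f}")
```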

  8. Optimal waist circumference cutoff value for defining the metabolic syndrome in postmenopausal Latin American women.

    PubMed

    Blümel, Juan E; Legorreta, Deborah; Chedraui, Peter; Ayala, Felix; Bencosme, Ascanio; Danckers, Luis; Lange, Diego; Espinoza, Maria T; Gomez, Gustavo; Grandia, Elena; Izaguirre, Humberto; Manriquez, Valentin; Martino, Mabel; Navarro, Daysi; Ojeda, Eliana; Onatra, William; Pozzo, Estela; Prada, Mariela; Royer, Monique; Saavedra, Javier M; Sayegh, Fabiana; Tserotas, Konstantinos; Vallejo, Maria S; Zuñiga, Cristina

    2012-04-01

    The aim of this study was to determine an optimal waist circumference (WC) cutoff value for defining the metabolic syndrome (METS) in postmenopausal Latin American women. A total of 3,965 postmenopausal women (age, 45-64 y), with self-reported good health, attending routine consultation at 12 gynecological centers in major Latin American cities were included in this cross-sectional study. Modified guidelines of the US National Cholesterol Education Program, Adult Treatment Panel III were used to assess METS risk factors. Receiver operator characteristic curve analysis was used to obtain an optimal WC cutoff value best predicting at least two other METS components. Optimal cutoff values were calculated by plotting the true-positive rate (sensitivity) against the false-positive rate (1 - specificity). In addition, total accuracy, distance to receiver operator characteristic curve, and the Youden Index were calculated. Of the participants, 51.6% (n = 2,047) were identified as having two or more nonadipose METS risk components (excluding a positive WC component). These women were older, had more years since menopause onset, used hormone therapy less frequently, and had higher body mass indices than women with fewer metabolic risk factors. The optimal WC cutoff value best predicting at least two other METS components was determined to be 88 cm, equal to that defined by the Adult Treatment Panel III. A WC cutoff value of 88 cm is optimal for defining METS in this postmenopausal Latin American series.
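
    The cutoff-selection procedure described above (ROC analysis plus the Youden index) can be sketched as follows on synthetic data; the numbers are invented, and only the mechanics mirror the study.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(6)

# Synthetic stand-in for the study data: waist circumference (cm) and whether the
# woman has at least two other METS components (1) or not (0).
n = 2000
has_mets = rng.integers(0, 2, n)
wc = np.where(has_mets == 1, rng.normal(93, 10, n), rng.normal(84, 9, n))

fpr, tpr, thresholds = roc_curve(has_mets, wc)
youden = tpr - fpr                      # Youden index = sensitivity + specificity - 1
best = np.argmax(youden)

print(f"AUC = {roc_auc_score(has_mets, wc):.3f}")
print(f"optimal cutoff by Youden index: {thresholds[best]:.1f} cm "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```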

  9. Determining the optimal approach to identifying individuals with chronic obstructive pulmonary disease: The DOC study.

    PubMed

    Ronaldson, Sarah J; Dyson, Lisa; Clark, Laura; Hewitt, Catherine E; Torgerson, David J; Cooper, Brendan G; Kearney, Matt; Laughey, William; Raghunath, Raghu; Steele, Lisa; Rhodes, Rebecca; Adamson, Joy

    2018-06-01

    Early identification of chronic obstructive pulmonary disease (COPD) results in patients receiving appropriate management for their condition at an earlier stage in their disease. The determining the optimal approach to identifying individuals with chronic obstructive pulmonary disease (DOC) study was a case-finding study to enhance early identification of COPD in primary care, which evaluated the diagnostic accuracy of a series of simple lung function tests and symptom-based case-finding questionnaires. Current smokers aged 35 or more were invited to undertake a series of case-finding tools, which comprised lung function tests (specifically, spirometry, microspirometry, peak flow meter, and WheezoMeter) and several case-finding questionnaires. The effectiveness of these tests, individually or in combination, to identify small airways obstruction was evaluated against the gold standard of spirometry, with the quality of spirometry tests assessed by independent overreaders. The study was conducted with general practices in the Yorkshire and Humberside area, in the UK. Six hundred eighty-one individuals met the inclusion criteria, with 444 participants completing their study appointments. A total of 216 (49%) with good-quality spirometry readings were included in the analysis. The most effective case-finding tools were found to be the peak flow meter alone, the peak flow meter plus WheezoMeter, and microspirometry alone. In addition to the main analysis, where the severity of airflow obstruction was based on fixed ratios and percent of predicted values, sensitivity analyses were conducted by using lower limit of normal values. This research informs the choice of test for COPD identification; case-finding by use of the peak flow meter or microspirometer could be used routinely in primary care for suspected COPD patients. Only those testing positive to these tests would move on to full spirometry, thereby reducing unnecessary spirometric testing. © 2018 John Wiley

  10. A rapid application of GA-MODFLOW combined approach to optimization of well placement and operation for drought-ready groundwater reservoir design

    NASA Astrophysics Data System (ADS)

    Park, C.; Kim, Y.; Jang, H.

    2016-12-01

    Poor temporal distribution of precipitation increases winter drought risks in mountain valley areas of Korea. Since perennial streams or reservoirs for water use are rare in these areas, groundwater is usually the major water resource. A significant amount of the precipitation contributing to groundwater recharge occurs during the summer season. However, the volume of groundwater recharge is limited by rapid runoff owing to topographic characteristics such as steep hills and slopes. A groundwater reservoir using an artificial recharge method with rainwater reuse can be a suitable solution to secure water resources for mountain valley areas. Successful groundwater reservoir design depends on optimization of well placement and operation. This study introduces a combined approach using GA (Genetic Algorithm) and MODFLOW and its rapid application. The methodology is based on the RAD (Rapid Application Development) concept in order to minimize the cost of implementation. DEAP (Distributed Evolutionary Algorithms in Python), a framework for prototyping and testing evolutionary algorithms, is applied for quick code development, and CUDA (Compute Unified Device Architecture), a parallel computing platform using the GPU (Graphics Processing Unit), is introduced to reduce runtime. The approach was successfully applied to Samdeok-ri, Gosung, Korea. The site is located in a mountain valley area, and unconfined aquifers are the major source of water use. The application produced the best well locations and an optimized operation schedule, including pumping and injection.
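
    Since the record names DEAP explicitly, the sketch below shows the general shape of such a GA coupling in DEAP, with a smooth analytic surrogate standing in for the MODFLOW run; the real evaluation would write candidate well coordinates and schedules into the model input and score the simulated heads.

```python
import random
from deap import base, creator, tools, algorithms

random.seed(7)

# Placeholder objective standing in for a MODFLOW simulation: score a candidate
# layout of two wells (x1, y1, x2, y2) within a 1 km x 1 km domain.
TARGET = (300.0, 700.0, 650.0, 250.0)        # assumed "good" locations of the surrogate

def evaluate(ind):
    cost = sum((a - b) ** 2 for a, b in zip(ind, TARGET))
    return (cost,)                            # DEAP expects a tuple of fitness values

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("coord", random.uniform, 0.0, 1000.0)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.coord, n=4)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0.0, sigma=50.0, indpb=0.3)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=60)
pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.6, mutpb=0.3, ngen=40, verbose=False)
best = tools.selBest(pop, 1)[0]
print("best well coordinates:", [round(c, 1) for c in best], "score:", round(evaluate(best)[0], 1))
```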

  11. Optimizing latency in Xilinx FPGA implementations of the GBT

    NASA Astrophysics Data System (ADS)

    Muschter, S.; Baron, S.; Bohm, C.; Cachemiche, J.-P.; Soos, C.

    2010-12-01

    The GigaBit Transceiver (GBT) [1] system has been developed to replace the Timing, Trigger and Control (TTC) system [2], currently used by LHC, as well as to provide data transmission between on-detector and off-detector components in future sLHC detectors. A VHDL version of the GBT-SERDES, designed for FPGAs, was released in March 2010 as a GBT-FPGA Starter Kit for future GBT users and for off-detector GBT implementation [3]. This code was optimized for resource utilization [4], as the GBT protocol is very demanding. It was not, however, optimized for latency — which will be a critical parameter when used in the trigger path. The GBT-FPGA Starter Kit firmware was first analyzed in terms of latency by looking at the separate components of the VHDL version. Once the parts which contribute most to the latency were identified and modified, two possible optimizations were chosen, resulting in a latency reduced by a factor of three. The modifications were also analyzed in terms of logic utilization. The latency optimization results were compared with measurement results from a Virtex 6 ML605 development board [5] equipped with a XC6VLX240T with speedgrade-1 and the package FF1156. Bit error rate tests were also performed to ensure an error free operation. The two final optimizations were analyzed for utilization and compared with the original code, distributed in the Starter Kit.

  12. Optimal control of hydroelectric facilities

    NASA Astrophysics Data System (ADS)

    Zhao, Guangzhi

    This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. In our final chapter we discuss the
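
    A stripped-down version of the deterministic-price case can be written as a backward dynamic program over discretized storage levels, as below; the price profile, efficiencies, and block sizes are invented, and the switching costs and stochastic inflows treated in the thesis are omitted.

```python
import numpy as np

# Minimal backward dynamic program for a pumped-storage plant facing known hourly
# prices: at each hour choose to generate (+1), idle (0), or pump (-1).
T = 24
price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))   # assumed price profile ($/MWh)
levels = np.arange(0, 11)                                 # storage discretized in "blocks"
eta_gen, eta_pump = 0.9, 0.8                              # assumed turbine / pump efficiencies
energy_per_block = 10.0                                   # MWh of potential energy per block

value = np.zeros((T + 1, len(levels)))                    # value-to-go, terminal value = 0
policy = np.zeros((T, len(levels)), dtype=int)

for t in range(T - 1, -1, -1):
    for s in levels:
        best_val, best_a = -np.inf, 0
        for a in (-1, 0, 1):
            s_next = s - a                                # generating lowers storage, pumping raises it
            if s_next < levels[0] or s_next > levels[-1]:
                continue
            if a == 1:                                    # sell generated energy
                cash = price[t] * energy_per_block * eta_gen
            elif a == -1:                                 # buy energy to pump water up
                cash = -price[t] * energy_per_block / eta_pump
            else:
                cash = 0.0
            total = cash + value[t + 1, s_next]
            if total > best_val:
                best_val, best_a = total, a
        value[t, s], policy[t, s] = best_val, best_a

print("value of starting half-full:", round(value[0, 5], 1))
print("optimal first-hour action from half-full:", policy[0, 5])
```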

  13. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System.

    PubMed

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-11-16

    In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation. After that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that the readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme, achieving a larger effective interrogation range.

  14. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System

    PubMed Central

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-01-01

    In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation. After that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that the readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme, achieving a larger effective interrogation range. PMID:27854342

  15. Identifying optimal agricultural countermeasure strategies for a hypothetical contamination scenario using the strategy model.

    PubMed

    Cox, G; Beresford, N A; Alvarez-Farizo, B; Oughton, D; Kis, Z; Eged, K; Thørring, H; Hunt, J; Wright, S; Barnett, C L; Gil, J M; Howard, B J; Crout, N M J

    2005-01-01

    A spatially implemented model designed to assist the identification of optimal countermeasure strategies for radioactively contaminated regions is described. Collective and individual ingestion doses for people within the affected area are estimated together with collective exported ingestion dose. A range of countermeasures are incorporated within the model, and environmental restrictions have been included as appropriate. The model evaluates the effectiveness of a given combination of countermeasures through a cost function which balances the benefit obtained through the reduction in dose with the cost of implementation. The optimal countermeasure strategy is the combination of individual countermeasures (and when and where they are implemented) which gives the lowest value of the cost function. The model outputs should not be considered as definitive solutions, rather as interactive inputs to the decision making process. As a demonstration the model has been applied to a hypothetical scenario in Cumbria (UK). This scenario considered a published nuclear power plant accident scenario with a total deposition of 1.7x10^14, 1.2x10^13, 2.8x10^10 and 5.3x10^9 Bq for Cs-137, Sr-90, Pu-239/240 and Am-241, respectively. The model predicts that if no remediation measures were implemented the resulting collective dose would be approximately 36 000 person-Sv (predominantly from Cs-137) over a 10-year period post-deposition. The optimal countermeasure strategy is predicted to avert approximately 33 000 person-Sv at a cost of approximately 160 million pounds. The optimal strategy comprises a mixture of ploughing, AFCF (ammonium-ferric hexacyano-ferrate) administration, potassium fertiliser application, clean feeding of livestock and food restrictions. The model recommends specific areas within the contaminated area and time periods where these measures should be implemented.
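
    The cost-function idea described above can be illustrated with a brute-force search over countermeasure combinations; the per-measure costs, averted-dose fractions, and monetary value per person-Sv below are invented, and the spatial and temporal dimensions of the real model are ignored.

```python
from itertools import combinations

# Hypothetical per-countermeasure implementation cost (million GBP) and the
# fraction of the remaining collective dose it averts. Interactions between
# measures are ignored; the real model evaluates them spatially and over time.
measures = {
    "ploughing":         (40.0, 0.30),
    "AFCF":              (25.0, 0.25),
    "K fertiliser":      (15.0, 0.10),
    "clean feeding":     (50.0, 0.35),
    "food restrictions": (30.0, 0.20),
}
collective_dose = 36000.0       # person-Sv if nothing is done
value_per_person_sv = 0.02      # assumed monetary value of one averted person-Sv (million GBP)

def cost_function(subset):
    """Implementation cost minus the monetised value of the averted dose."""
    remaining, cost = collective_dose, 0.0
    for name in subset:
        c, frac = measures[name]
        cost += c
        remaining *= (1.0 - frac)          # each measure removes a fraction of what is left
    averted = collective_dose - remaining
    return cost - value_per_person_sv * averted

best = min(
    (subset for r in range(len(measures) + 1) for subset in combinations(measures, r)),
    key=cost_function,
)
print("optimal strategy:", best, "cost function value:", round(cost_function(best), 1))
```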

  16. A method for identifying color vision deficiency malingering.

    PubMed

    Pouw, Andrew; Karanjia, Rustum; Sadun, Alfredo

    2017-03-01

    To propose a new test to identify color vision deficiency malingering. An online survey was distributed to 130 truly color vision deficient participants and 160 participants willing to simulate color vision deficiency. The survey contained three sets of six color-adjusted versions of the standard Ishihara color plates each, as well as one set of six control plates. The plates that best discriminated both participant groups were selected for a "balanced" test emphasizing both sensitivity and specificity. A "specific" test that prioritized high specificity was also created by selecting from these plates. Statistical measures of the test (sensitivity, specificity, and Youden index) were assessed at each possible cut-off threshold, and a receiver operating characteristic (ROC) function with its area under the curve (AUC) charted. The redshift plate set was identified as having the highest difference of means between groups (-58%, CI: -64 to -52%), as well as the widest gap between group modes. Statistical measures of the "balanced" test show an optimal cut-off of at least two incorrectly identified plates to suggest malingering (Youden index: 0.773, sensitivity: 83.3%, specificity: 94.0%, AUC of ROC 0.918). The "specific" test was able to identify color vision deficiency simulators with a specificity of 100% when using a cut-off of at least two incorrectly identified plates (Youden index 0.599, sensitivity 59.9%, specificity 100%, AUC of ROC 0.881). Our proposed test for identifying color vision deficiency malingering demonstrates a high degree of reliability with AUCs of 0.918 and 0.881 for the "balanced" and "specific" tests, respectively. A cut-off threshold of at least two missed plates on the "specific" test was able to identify color vision deficiency simulators with 100% specificity.

  17. Rethinking key–value store for parallel I/O optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He

    2015-01-26

    Key-value stores are being widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architecture differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall Input/Output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as the cases with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using collected data from our experiments, and we provide a predictive method to identify which system offers better I/O performance given a specific workload. The results show that we can optimize the I/O performance in HPC systems by utilizing key-value stores.

  18. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    PubMed

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
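
    The report's introductory example (maximizing health benefit from treating "regular" and "severe" patients under time and budget constraints) maps directly onto a small linear program; the coefficients below are invented for illustration, and the integrality of patient counts is relaxed.

```python
from scipy.optimize import linprog

# Decision variables: number of "regular" and "severe" patients treated.
# All coefficients are invented for illustration.
benefit = [1.0, 3.0]            # health benefit (e.g., QALYs) per regular / severe patient
time_per_patient = [1.0, 4.0]   # clinician hours per patient
cost_per_patient = [100.0, 600.0]
time_budget = 400.0             # available clinician hours
money_budget = 50_000.0

# linprog minimizes, so negate the benefit to maximize it.
res = linprog(
    c=[-b for b in benefit],
    A_ub=[time_per_patient, cost_per_patient],
    b_ub=[time_budget, money_budget],
    bounds=[(0, None), (0, None)],
    method="highs",
)
print("patients treated (regular, severe):", [round(x, 1) for x in res.x])
print("maximum total benefit:", -res.fun)
```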

  19. On optimal infinite impulse response edge detection filters

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1991-01-01

    The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
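
    The separable recursive implementation described above can be illustrated with a much simpler stand-in: first-order forward/backward recursive smoothing applied along each image axis, followed by a gradient magnitude. The filter below is not the paper's optimal filter, but it shares the key property that execution time is independent of the effective operator size.

```python
import numpy as np
from scipy.signal import lfilter

def smooth_1d(x, alpha, axis):
    """Causal + anti-causal first-order recursive (IIR) smoothing along one axis."""
    b, a = [1.0 - alpha], [1.0, -alpha]
    fwd = lfilter(b, a, x, axis=axis)
    bwd = np.flip(lfilter(b, a, np.flip(x, axis=axis), axis=axis), axis=axis)
    return 0.5 * (fwd + bwd)

def edge_magnitude(image, alpha=0.6):
    """Separable recursive smoothing in both directions followed by a gradient magnitude."""
    smoothed = smooth_1d(smooth_1d(image.astype(float), alpha, axis=0), alpha, axis=1)
    gy, gx = np.gradient(smoothed)
    return np.hypot(gx, gy)

# Tiny synthetic test image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
edges = edge_magnitude(img)
print("strongest response at:", np.unravel_index(np.argmax(edges), edges.shape))
```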

  20. An Optimization-Based Approach to Determine Requirements and Aircraft Design under Multi-domain Uncertainties

    NASA Astrophysics Data System (ADS)

    Govindaraju, Parithi

    Determining the optimal requirements for and design variable values of new systems, which operate along with existing systems to provide a set of overarching capabilities, as a single task is challenging due to the highly interconnected effects that setting requirements on a new system's design can have on how an operator uses this newly designed system. This task of determining the requirements and the design variable values becomes even more difficult because of the presence of uncertainties in the new system design and in the operational environment. This research proposed and investigated aspects of a framework that generates optimum design requirements of new, yet-to-be-designed systems that, when operating alongside other systems, will optimize fleet-level objectives while considering the effects of various uncertainties. Specifically, this research effort addresses the issues of uncertainty in the design of the new system through reliability-based design optimization methods, and uncertainty in the operations of the fleet through descriptive sampling methods and robust optimization formulations. In this context, fleet-level performance metrics result from using the new system alongside other systems to accomplish an overarching objective or mission. This approach treats the design requirements of a new system as decision variables in an optimization problem formulation that a user in the position of making an acquisition decision could solve. This solution would indicate the best new system requirements, and an associated description of the best possible design variable values for that new system, to optimize the fleet-level performance metric(s). Using a problem motivated by recorded operations of the United States Air Force Air Mobility Command for illustration, the approach is demonstrated first for a simplified problem that only considers demand uncertainties in the service network and the proposed methodology is used to identify the optimal design