Sample records for cost flow problem

  1. Optimization of memory use of fragment extension-based protein-ligand docking with an original fast minimum cost flow algorithm.

    PubMed

    Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka

    2018-06-01

    The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that memorizes the evaluation result of a partial structure of a compound and reuses it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. More efficient memory usage can therefore be expected to lead to further acceleration, and optimal memory usage can be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem that exploits, as constraints, the characteristics of the graph generated for this problem. The proposed algorithm, which optimized memory usage, was approximately seven times faster than existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
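The record above reduces memory allocation to a minimum cost flow problem. As a generic illustration only (not the authors' specialized algorithm), a small MCF instance can be set up and solved with networkx; the graph below is hypothetical:

```python
import networkx as nx

# Small min-cost flow instance (illustrative). A node "demand" < 0
# means the node supplies flow; edges carry capacity and per-unit cost.
G = nx.DiGraph()
G.add_node("s", demand=-4)   # supplies 4 units
G.add_node("t", demand=4)    # requires 4 units
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=2, weight=2)
G.add_edge("a", "t", capacity=3, weight=1)
G.add_edge("b", "t", capacity=2, weight=1)

flow = nx.min_cost_flow(G)           # dict of dicts: flow[u][v]
cost = nx.cost_of_flow(G, flow)      # 3 units via a (cost 2 each) + 1 via b (cost 3)
```

General-purpose solvers like this ignore the special graph structure the paper exploits, which is exactly where its speedup comes from.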

  2. On the utility of GPU accelerated high-order methods for unsteady flow simulations: A comparison with industry-standard tools

    NASA Astrophysics Data System (ADS)

    Vermeire, B. C.; Witherden, F. D.; Vincent, P. E.

    2017-04-01

    First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier-Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor-Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost e.g. going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75× and 3× error reduction respectively for the same or reduced cost, and STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.

  3. On the utility of GPU accelerated high-order methods for unsteady flow simulations: A comparison with industry-standard tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermeire, B.C., E-mail: brian.vermeire@concordia.ca; Witherden, F.D.; Vincent, P.E.

    First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier–Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor–Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost e.g. going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75× and 3× error reduction respectively for the same or reduced cost, and STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.

  4. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the maximum possible flow at minimum cost. This paper also incorporates an Adaptive Weight Approach (AWA) that utilizes useful information from the current population to readjust weights, directing search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
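For the MXF/MCF pair described above, one anchor point of the Pareto frontier (maximum flow achieved at minimum cost) can be computed directly on a toy network with networkx; the graph below is a hypothetical example, not a problem instance from the paper:

```python
import networkx as nx

# Toy bicriteria network: each edge has a capacity (flow criterion)
# and a weight (cost criterion).
G = nx.DiGraph()
G.add_edge("s", "a", capacity=2, weight=1)
G.add_edge("s", "b", capacity=2, weight=4)
G.add_edge("a", "t", capacity=2, weight=1)
G.add_edge("b", "t", capacity=2, weight=1)

# Maximum s-t flow, and the cheapest routing that achieves it.
flow = nx.max_flow_min_cost(G, "s", "t")
max_flow = sum(flow["s"].values())   # total units leaving the source
min_cost = nx.cost_of_flow(G, flow)  # cost of that max-flow routing
```

A GA such as the one proposed would instead search for the whole set of Pareto-optimal trade-offs between these two criteria, not just this single point.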

  5. Active control of panel vibrations induced by boundary-layer flow

    NASA Technical Reports Server (NTRS)

    Chow, Pao-Liu

    1991-01-01

    Some problems in the active control of panel vibration excited by a boundary layer flow over a flat plate are studied. In the first phase of the study, the optimal control problem for a vibrating elastic panel excited by a fluid dynamical loading was considered. For a simply supported rectangular plate, the vibration control problem can be analyzed by modal analysis. The control objective is to minimize the total cost functional, which is the sum of a vibrational energy and the control cost. By means of the modal expansion, the dynamical equation for the plate and the cost functional are reduced to a system of ordinary differential equations and cost functions for the modes. For the linear elastic plate, the modes become uncoupled, and the control of each modal amplitude reduces to the so-called linear regulator problem of control theory. Such problems can then be solved by the method of adjoint state. The optimality system of equations was solved numerically by a shooting method, and the results are summarized.
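The modal reduction above turns each uncoupled mode into a scalar linear regulator. A minimal sketch, assuming hypothetical first-order mode dynamics x' = a·x + b·u with running cost q·x² + r·u² (the real panel modes are second-order, so this is only the structure of the problem, not the paper's model):

```python
import math

def scalar_lqr(a, b, q, r):
    """Optimal feedback gain for x' = a*x + b*u, J = ∫ (q*x^2 + r*u^2) dt.
    Solves the scalar algebraic Riccati equation
        (b**2/r)*p**2 - 2*a*p - q = 0
    for its positive root p, then k = b*p/r so that u = -k*x."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r

# Hypothetical coefficients for one modal amplitude.
k = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
# The closed-loop pole a - b*k is negative, i.e. the mode is stabilized.
```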

  6. A class of solution-invariant transformations of cost functions for minimum cost flow phase unwrapping.

    PubMed

    Hubig, Michael; Suchandt, Steffen; Adam, Nico

    2004-10-01

    Phase unwrapping (PU) represents an important step in synthetic aperture radar interferometry (InSAR) and other interferometric applications. Among the different PU methods, the so-called branch-cut approaches play an important role. In 1996, M. Costantini [Proceedings of the Fringe '96 Workshop ERS SAR Interferometry (European Space Agency, Munich, 1996), pp. 261-272] proposed transforming the problem of correctly placing branch cuts into a minimum cost flow (MCF) problem. The crucial point of this new approach is to generate cost functions that represent the a priori knowledge necessary for PU. Since cost functions are derived from measured data, they are random variables. This leads to the question of MCF solution stability: How much can the cost functions be varied without changing the cheapest flow that represents the correct branch cuts? This question is partially answered: The existence of a whole linear subspace in the space of cost functions is shown; this subspace contains all cost differences by which a cost function can be changed without changing the cost difference between any two flows that discharge any residue configuration. These cost differences are called strictly stable cost differences. For quadrangular nonclosed networks (the most important type of MCF networks for interferometric purposes) a complete classification of strictly stable cost differences is presented. Further, the role of the well-known class of node potentials in the framework of strictly stable cost differences is investigated, and information on the vector-space structure representing the MCF environment is provided.
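The inputs to MCF phase unwrapping are the residues of the wrapped interferogram, which the flow must discharge. A small sketch of the standard residue computation (illustrative background, not the paper's cost-function construction):

```python
import numpy as np

def wrap(x):
    # Wrap phase differences into the interval [-pi, pi).
    return (x + np.pi) % (2 * np.pi) - np.pi

def residues(phase):
    """Residues of a wrapped phase image: the sum of wrapped phase
    differences around each elementary 2x2 loop, in units of 2*pi.
    Nonzero entries mark the residues an MCF solver must discharge."""
    d1 = wrap(np.diff(phase, axis=1))    # horizontal wrapped gradients
    d2 = wrap(np.diff(phase, axis=0))    # vertical wrapped gradients
    loop = d1[:-1, :] + d2[:, 1:] - d1[1:, :] - d2[:, :-1]
    return np.rint(loop / (2 * np.pi)).astype(int)

# A 2x2 patch whose corner values spiral through a full cycle
# carries a single positive residue.
demo = np.array([[0.0, np.pi / 2], [3 * np.pi / 2, np.pi]])
```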

  7. [Research progress of ecosystem service flow].

    PubMed

    Liu, Hui Min; Fan, Yu Long; Ding, Sheng Yan

    2016-07-01

    With the development of the social economy, human disturbance has resulted in the degradation or disappearance of a variety of ecosystem services. Ecosystem service flow plays an important role in the delivery, transformation and maintenance of ecosystem services, and has become one of the new research directions. In this paper, based on the classification of ecosystem service flows, we analyzed ecosystem service delivery carriers and investigated the mechanism of ecosystem service flow, including its information, properties, scale features, quantification and cartography. Moreover, a tentative analysis of the cost-effectiveness of ecosystem service flows (covering transportation costs, conversion costs, usage costs and relative costs) was made to analyze the cost dissipated during the ecosystem service flow process. To a certain extent, the study of ecosystem service flow solves the problem of "double counting" in ecosystem service valuation, which could contribute to recognizing hot spots of ecosystem service supply and consumption. In addition, it would be conducive to maximizing ecosystem service benefits in the transmission process and putting forward scientific and reasonable ecological compensation.

  8. A network flow model for load balancing in circuit-switched multicomputers

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1990-01-01

    In multicomputers that utilize circuit switching or wormhole routing, communication overhead depends largely on link contention; the variation due to distance between nodes is negligible. This has a major impact on the load balancing problem. In this case, there are some nodes with excess load (sources) and others with deficit load (sinks), and it is required to find a matching of sources to sinks that avoids contention. The problem is made complex by the hardwired routing on currently available machines: the user can control only which nodes communicate, but not how the messages are routed. Network flow models of message flow in the mesh and the hypercube were developed to solve this problem. The crucial property of these models is the correspondence between minimum cost flows and correctly routed messages. To solve a given load balancing problem, a minimum cost flow algorithm is applied to the network. This permits one to determine efficiently a maximum contention-free matching of sources to sinks, which, in turn, tells one how much of the given imbalance can be eliminated without contention.
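Setting aside the link-contention constraints that the paper's mesh and hypercube flow models capture, the bare source-to-sink matching is a small transportation LP; the excess, deficit, and cost values below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Transportation LP sketch: move excess load from source nodes to
# deficit sink nodes at minimum total cost (illustrative; the paper's
# network additionally encodes contention via link capacities).
excess = [3, 2]          # units to move off each source
deficit = [4, 1]         # units needed at each sink
cost = np.array([[1.0, 3.0],
                 [2.0, 1.0]])   # hypothetical per-unit transfer costs

n, m = cost.shape
A_eq, b_eq = [], []
for i in range(n):                      # each source ships all its excess
    row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
    A_eq.append(row); b_eq.append(excess[i])
for j in range(m):                      # each sink receives its deficit
    row = np.zeros(n * m); row[j::m] = 1
    A_eq.append(row); b_eq.append(deficit[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=(0, None), method="highs")
# res.x holds the shipment matrix (flattened); res.fun the minimum cost.
```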

  9. Routing Algorithm based on Minimum Spanning Tree and Minimum Cost Flow for Hybrid Wireless-optical Broadband Access Network

    NASA Astrophysics Data System (ADS)

    Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen

    2012-03-01

    In order to minimize the average end-to-end delay of data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been done under different types of traffic sources.

  10. Nonlinear relative-proportion-based route adjustment process for day-to-day traffic dynamics: modeling, equilibrium and stability analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wenlong; Ma, Shoufeng; Tian, Junfang; Li, Geng

    2016-11-01

    Travelers' route adjustment behaviors in a congested road traffic network are acknowledged as a dynamic game process between them. Proportional-Switch Adjustment Process (PSAP) models, which have a concise structure and an intuitive behavior rule, have been extensively investigated to characterize travelers' route choice behaviors. Unfortunately, most of them have some limitations, e.g., the flow over-adjustment problem in the discrete PSAP model and the absolute-cost-difference route adjustment problem. This paper proposes a relative-Proportion-based Route Adjustment Process (rePRAP) that maintains the advantages of PSAP and overcomes these limitations. The rePRAP describes the situation in which travelers on a higher-cost route switch to alternatives with lower cost at a rate that depends on the relative cost differences between the higher-cost route and its alternatives. It is verified to be consistent with the principle of the rational behavior adjustment process. The equivalence among user equilibrium (UE), the stationary path flow pattern and the stationary link flow pattern is established, which can be applied to judge whether a given network traffic flow has reached UE by detecting whether the link flow pattern is stationary. The stability theorem is proved by the Lyapunov function approach. A simple example is tested to demonstrate the effectiveness of the rePRAP model.
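The relative-proportion switching rule can be sketched for a two-route network with hypothetical linear cost functions (an illustration in the spirit of rePRAP, not the paper's general formulation); the day-to-day flow should settle at the user equilibrium where both route costs are equal:

```python
# Day-to-day route-swap dynamics: travelers leave the dearer route at a
# rate proportional to the RELATIVE cost difference. Demand and cost
# coefficients are made up for illustration.
def simulate(demand=10.0, alpha=0.5, days=200):
    c1 = lambda f: 1.0 + 2.0 * f        # cost of route 1 at flow f
    c2 = lambda f: 5.0 + 1.0 * f        # cost of route 2 at flow f
    f1 = demand                          # start with all flow on route 1
    for _ in range(days):
        f2 = demand - f1
        a, b = c1(f1), c2(f2)
        if a > b:                        # switch off the dearer route,
            f1 -= alpha * f1 * (a - b) / a   # rate ∝ relative difference
        elif b > a:
            f1 += alpha * f2 * (b - a) / b
    return f1

f1 = simulate()
# UE for these costs: 1 + 2*f1 = 5 + (10 - f1)  =>  f1 = 14/3
```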

  11. Simulation of OSCM Concepts for HQ SACT

    DTIC Science & Technology

    2007-06-01

    effective method for creating understanding, identifying problems and developing solutions. • Simulation of a goal driven organization is a cost...effective method to visualize some aspects of the problem space Toolbox • The team used Extend™, a COTS product from Imagine That!® (http...Nations flow Model OSCM ATARES flow Batching A/C & Pallets Model ISAF Airbridge flow Flying and unbatching A/C Fleet Create resources Calculate flight

  12. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose jointly optimizing flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.
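The polling-switch selection component alone (without the flow-routing half of the joint ILP) can be illustrated as a tiny set-cover search; the flows, their switch paths, and the polling costs below are made up:

```python
from itertools import combinations

# Pick the cheapest set of switches to poll so that every flow's path
# contains at least one polled switch (brute force on a toy instance).
flows = {"f1": {"A", "B"}, "f2": {"B", "C"}, "f3": {"C", "D"}}
poll_cost = {"A": 2, "B": 1, "C": 1, "D": 2}

best, best_cost = None, float("inf")
switches = sorted(poll_cost)
for r in range(1, len(switches) + 1):
    for subset in combinations(switches, r):
        covered = all(path & set(subset) for path in flows.values())
        c = sum(poll_cost[s] for s in subset)
        if covered and c < best_cost:
            best, best_cost = set(subset), c
```

The real problem couples this choice with routing (which changes the paths themselves), which is what makes the ILP hard at scale.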

  13. Design for Warehouse with Product Flow Type Allocation using Linear Programming: A Case Study in a Textile Industry

    NASA Astrophysics Data System (ADS)

    Khannan, M. S. A.; Nafisah, L.; Palupi, D. L.

    2018-03-01

    Sari Warna Co. Ltd, a company engaged in the textile industry, is experiencing problems in the allocation and placement of goods in its warehouse. Until now, the company has not allocated products by flow type or assigned placements to the respective products, resulting in a high total material handling cost. Therefore, this study aimed to determine the allocation and placement of goods in the warehouse according to product flow type with minimal total material handling cost. This is a quantitative study, based on storage and warehouse theory, that uses the mathematical optimization model of Heragu (2005), aided by the software LINGO 11.0 for solving the optimization model. The results are a distribution proportion of 0.0734 for the cross-docking area, 0.1894 for the reserve area, and 0.7372 for the forward area, with 5 products allocated to flow type 1, 9 products to flow type 2, 2 products to flow type 3, and 6 products to flow type 4. The optimal total material handling cost obtained with this mathematical model is Rp43.079.510, compared with Rp49.869.728 under the company's existing method, a saving of Rp6.790.218. Thus, all products can be allocated in accordance with their product flow type at minimal total material handling cost.

  14. Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, which is a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions, namely a quadratic cost curve, a piecewise quadratic cost curve, and a quadratic cost curve superimposed with a sine component. These three cost curves represent the generator fuel cost function of a simplified model and the more accurate models of a combined-cycle generating unit and a thermal unit with the valve-point loading effect, respectively. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP and finds better solutions in some cases. Moreover, the influence of important IEP parameters on the OPF solution is described in detail.
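The third cost curve mentioned above, a quadratic superimposed with a sine component, is the standard valve-point form. A sketch with hypothetical coefficients (not the IEEE 30-bus data):

```python
import math

# Generator fuel cost with valve-point loading, in the standard form
#   F(P) = a + b*P + c*P**2 + |e * sin(f * (Pmin - P))|
# The rectified sine ripple makes the curve non-smooth and multimodal,
# which is why population-based methods such as EP are attractive here.
def fuel_cost(P, a=100.0, b=2.0, c=0.01, e=50.0, f=0.063, Pmin=10.0):
    return a + b * P + c * P ** 2 + abs(e * math.sin(f * (Pmin - P)))

# At P = Pmin the ripple term vanishes and only the quadratic remains.
base = fuel_cost(10.0)
```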

  15. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since flow-based information collecting method requires too much communication cost, and switch-based method proposed recently cannot benefit from controlling flow routing, jointly optimize flow routing and polling switch selection is proposed to reduce the communication cost. To this end, joint optimization problem is formulated as an Integer Linear Programming (ILP) model firstly. Since the ILP model is intractable in large size network, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topology. According to extensive simulations, it is found that our method can save up to 55.76% communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  16. Parameter Optimization for Turbulent Reacting Flows Using Adjoints

    NASA Astrophysics Data System (ADS)

    Lapointe, Caelan; Hamlington, Peter E.

    2017-11-01

    The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.

  17. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper furnishes a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be highly efficient for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling bus voltage magnitudes by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. CSA gives better results than a genetic algorithm (GA) both without and with the SVC.
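A generic cuckoo search (Lévy-flight moves plus abandonment of poor nests) can be sketched on a toy quadratic cost function; this is not the paper's OPF formulation, and every parameter value below is an illustrative assumption:

```python
import math
import numpy as np

def cuckoo_search(obj, dim=2, n=15, pa=0.25, iters=300,
                  lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lo, hi, (n, dim))
    fit = np.array([obj(x) for x in nests])
    beta = 1.5                                   # Levy exponent
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        # Levy-flight trial moves biased toward the current best nest.
        u = rng.normal(0.0, sigma, (n, dim))
        v = rng.normal(0.0, 1.0, (n, dim))
        step = u / np.abs(v) ** (1.0 / beta)
        trial = np.clip(nests + 0.01 * step * (nests - best), lo, hi)
        trial_fit = np.array([obj(x) for x in trial])
        improved = trial_fit < fit                # greedy replacement
        nests[improved], fit[improved] = trial[improved], trial_fit[improved]
        # Abandon a fraction pa of nests (never the current best).
        abandon = rng.random(n) < pa
        abandon[fit.argmin()] = False
        nests[abandon] = rng.uniform(lo, hi, (int(abandon.sum()), dim))
        fit[abandon] = np.array([obj(x) for x in nests[abandon]])
    i = fit.argmin()
    return nests[i], float(fit[i])

x_best, f_best = cuckoo_search(lambda x: float(np.sum(x ** 2)))
```

In the paper, the objective would instead be the generation cost returned by a power flow solve on the IEEE 57-bus system.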

  18. A novel hybrid genetic algorithm to solve the make-to-order sequence-dependent flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.

    2014-04-01

    The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no efficient algorithm for reaching the optimal solution of the problem. To minimize the holding, delay and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions, and uses an improved heuristic, called the iterated swap procedure, to improve them. We consider the make-to-order production approach, in which some sequences between jobs are treated as tabu based on a maximum allowable setup cost. The results are compared with some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to accuracy and efficiency of solution.
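Any such GA needs a schedule-evaluation routine at its core. A minimal makespan computation for a permutation flow shop with sequence-dependent setup times (the paper's actual objective also includes holding and delay costs; the data layout below is an assumption):

```python
# Makespan of a permutation flow shop with sequence-dependent setups.
#   proc[j][m]      processing time of job j on machine m
#   setup[m][i][j]  setup time on machine m when job j follows job i
#                   (i == j is used for the first job in the sequence)
def makespan(seq, proc, setup):
    n_machines = len(proc[0])
    ready = [0.0] * n_machines      # when each machine is next free
    prev = None
    for j in seq:
        c_prev = 0.0                # completion of job j on previous machine
        for m in range(n_machines):
            s = setup[m][j if prev is None else prev][j]
            start = max(c_prev, ready[m] + s)
            c_prev = start + proc[j][m]
            ready[m] = c_prev
        prev = j
    return ready[-1]

proc = [[2, 3], [1, 2]]             # two jobs, two machines
no_setup = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
one_setup = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
```

A GA chromosome is then simply a permutation `seq`, scored by this routine.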

  19. Re-Innovating Recycling for Turbulent Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    Ruan, Joseph; Blanquart, Guillaume

    2017-11-01

    Historically, turbulent boundary layers along a flat plate have been expensive to simulate numerically, in part due to the difficulty of initializing the inflow with ``realistic'' turbulence, but also due to boundary layer growth. The former has been resolved in several ways, primarily by dedicating a region of at least 10 boundary layer thicknesses in width to rescaling and recycling the flow, or by extending the domain far enough downstream to allow a laminar flow to develop into turbulence. Both of these methods are relatively costly. We propose a new method that removes the need for an inflow region, thus reducing computational costs significantly. Leveraging the scale similarity of the mean flow profiles, we introduce a coordinate transformation so that the boundary layer problem can be solved as a parallel flow problem with additional source terms. The solutions in the new coordinate system are statistically homogeneous in the downstream direction, so the problem can be solved with periodic boundary conditions. The present study shows the stability of this method, its implementation and its validation for a few laminar and turbulent boundary layer cases.

  20. Simulation Model for Scenario Optimization of the Ready-Mix Concrete Delivery Problem

    NASA Astrophysics Data System (ADS)

    Galić, Mario; Kraus, Ivan

    2016-12-01

    This paper introduces a discrete simulation model for solving routing and network material flow problems in construction projects. Before the description of the model, a detailed literature review is provided. The model is verified using a case study solving the ready-mix concrete network flow and routing problem in a metropolitan area in Croatia. Within this study, real-time input parameters were taken into account. The simulation model is structured in Enterprise Dynamics simulation software and Microsoft Excel linked with Google Maps. The model is dynamic, easily managed and adjustable, and also provides good estimates for minimizing costs and realization time in solving discrete routing and material network flow problems.

  1. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  2. Vortex Design Problem

    NASA Astrophysics Data System (ADS)

    Protas, Bartosz

    2007-11-01

    In this investigation we are concerned with a family of solutions of the 2D steady-state Euler equations, known as the Prandtl-Batchelor flows, which are characterized by the presence of finite-area vortex patches embedded in an irrotational flow. We are interested in flows in the exterior of a circular cylinder with a uniform stream at infinity, since such flows are often employed as models of bluff body wakes in the high-Reynolds-number limit. The "vortex design" problem we consider consists in determining a distribution of the wall-normal velocity on parts of the cylinder boundary such that the vortex patches modelling the wake vortices have a prescribed shape and location. Such inverse problems have applications in various areas of flow control, such as mitigation of the wake hazard. We show how this problem can be solved computationally by formulating it as a free-boundary optimization problem. In particular, we demonstrate that derivation of the adjoint system, required to compute the cost functional gradient, is facilitated by application of the shape differential calculus. Finally, solutions of the vortex design problem are illustrated with computational examples.

  3. An alternative arrangement of metered dosing fluid using centrifugal pump

    NASA Astrophysics Data System (ADS)

    Islam, Md. Arafat; Ehsan, Md.

    2017-06-01

    Positive displacement dosing pumps are extensively used in various types of process industries. They are widely used for metering small flow rates of a dosing fluid into a main flow. High head and low controllable flow rates make these pumps suitable for industrial flow metering applications. However, their pulsating flow is not very suitable for proper mixing of fluids, and they are relatively expensive to buy and maintain. Considering such problems, alternative techniques to control the fluid flow from a low-cost centrifugal pump are practiced. These include throttling, variable speed drive, impeller geometry control and bypass control. Variable speed drive and impeller geometry control are comparatively costly, and flow control by throttling is not an energy-efficient process. In this study, an arrangement for metered dosing flow was developed for a typical low-cost centrifugal pump using the bypass flow technique. With bypass flow control, a wide range of metered dosing flows under a range of heads was attained using fixed pump geometry and drive speed. The bulk flow returning from the system into the main tank ensures better mixing, which may eliminate the need for separate agitators. A comparative performance study was made between the bypass flow arrangement of the centrifugal pump and a diaphragm-type dosing pump. Similar heads and flow rates were attainable using the bypass control system, though at relatively higher energy consumption. Geometrical optimization of the centrifugal pump impeller was further carried out to make the bypass flow arrangement more energy efficient. Although both systems run at low overall efficiencies, the capital cost could be reduced by about 87% compared to the dosing pump. The savings in capital investment and lower maintenance cost significantly exceed the relatively higher energy cost of the bypass system. This technique can be used as a cost-effective solution for industries in Bangladesh and has been implemented in two salt iodization plants at Narayangang.

  4. Minding Your Business: How to Avoid the Seven Deadly Financial Pitfalls.

    ERIC Educational Resources Information Center

    Stephens, Keith

    1990-01-01

    Describes financial management problems typically encountered by child care center directors and owners. Offers suggestions for planning and management techniques to overcome problems of cash flow, budgeting, rising costs, underpricing, declining revenues, fee collection, and liquidity. (NH)

  5. Single machine scheduling with slack due dates assignment

    NASA Astrophysics Data System (ADS)

    Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin

    2017-04-01

    This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e. all jobs have slack due dates). The goal is to find a job sequence, together with a due date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness and the common flow allowance cost, where the weights are position-dependent. To solve this problem, an ? time algorithm is proposed. Some extensions of the problem are also shown.
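To make the slack (SLK) due-date setting concrete, the sketch below evaluates this kind of criterion for a fixed sequence and flow allowance; the function and all its inputs are illustrative and are not the paper's ? time algorithm.

```python
# Illustrative sketch (not the paper's algorithm): slack (SLK) due dates
# d_j = p_j + q for a common flow allowance q, with a position-dependent
# weight w_i on the absolute lateness of the job in position i.

def schedule_cost(processing_times, q, weights, allowance_cost):
    """Total weighted absolute lateness plus flow-allowance cost."""
    t = 0.0       # completion time of the job currently in the machine
    total = 0.0
    for i, p in enumerate(processing_times):
        t += p                    # completion time C_j of the job in position i
        due = p + q               # SLK due date: d_j = p_j + q
        total += weights[i] * abs(t - due)
    return total + allowance_cost * q

# Three jobs in a fixed sequence, unit positional weights, allowance cost 0.5.
cost = schedule_cost([2.0, 3.0, 1.0], q=2.0,
                     weights=[1.0, 1.0, 1.0], allowance_cost=0.5)  # -> 6.0
```

Scanning candidate values of `q` for each candidate sequence then mimics, at brute-force cost, the joint sequencing/due-date-assignment decision the paper solves efficiently.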

  6. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  7. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  8. Determining the Optimal Solution for Quadratically Constrained Quadratic Programming (QCQP) on Energy-Saving Generation Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Lesmana, E.; Chaerani, D.; Khansa, H. N.

    2018-03-01

    Energy-Saving Generation Dispatch (ESGD) is a scheme introduced by the Chinese government to minimize the CO2 emissions produced by power plants. The scheme is motivated by global warming, which is driven primarily by excess CO2 in the earth’s atmosphere: while the need for electricity is absolute, the plants producing it are mostly thermal power plants that emit large amounts of CO2. Several approaches to implementing this scheme have been proposed; one of them, via Minimum Cost Flow, results in a Quadratically Constrained Quadratic Programming (QCQP) formulation. In this paper, the ESGD problem with Minimum Cost Flow in QCQP form is solved using the Lagrange multiplier method.
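As a hedged illustration of the Lagrange multiplier method on a small QCQP (a toy problem invented here, not the ESGD model), the sketch below applies Newton's method to the stationarity (KKT) system of min x² + y² subject to (x − 2)² + y² = 1, whose minimizer is (1, 0) with multiplier −1.

```python
# Toy QCQP via the Lagrange multiplier method: solve the stationarity system
# F(x, y, lam) = 0 of L = f - lam*g with Newton's method.

def F(v):
    x, y, lam = v
    return [2*x - 2*lam*(x - 2),      # dL/dx = 0
            2*y - 2*lam*y,            # dL/dy = 0
            (x - 2)**2 + y**2 - 1]    # constraint g(x, y) = 0

def J(v):
    x, y, lam = v
    return [[2 - 2*lam, 0,         -2*(x - 2)],
            [0,         2 - 2*lam, -2*y],
            [2*(x - 2), 2*y,        0]]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

v = [0.5, 0.2, -0.5]                  # initial guess near the minimizer
for _ in range(50):
    d = solve3(J(v), [-fi for fi in F(v)])
    v = [vi + di for vi, di in zip(v, d)]

x, y, lam = v                         # converges to (1, 0, -1)
```

The ESGD formulation in the paper has many variables and quadratic constraints, but the mechanics are the same: differentiate the Lagrangian, then solve the resulting stationarity system.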

  9. A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Rosu, Serban

    2011-09-01

    This article presents a new approach to some fundamental techniques for solving dynamic programming problems with the use of functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.

  10. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

    A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) must maximize the number of observable proposals and the overall scientific priority while minimizing the overall slew-cost generated by telescope shifting, subject to constraints including astronomical object visibility, user-defined observable times, and avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem; the optimal schedule can then be found by any MCMF algorithm. Next, to minimize the slew-cost of the generated schedule, we devise a maximally-matchable-edge detection method to reduce the problem size and propose a backtracking algorithm to find the perfect matching with minimum slew-cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler increases the usage of available time with high scientific priority and significantly reduces the slew-cost in a very short time.
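The MCMF modeling step can be illustrated on a toy instance. The solver below is a generic successive-shortest-augmenting-path algorithm, and the proposals-to-slots graph is invented for illustration; neither is the FAST scheduler's actual formulation.

```python
# Generic min-cost max-flow by successive shortest augmenting paths
# (Bellman-Ford, since residual edges may carry negative costs).

def min_cost_max_flow(n, edges, s, t):
    """edges: list of [u, v, capacity, cost]; returns (max_flow, total_cost)."""
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual edge
    flow = total_cost = 0
    while True:
        dist = [float('inf')] * n
        dist[s] = 0
        prev = [None] * n            # (node, edge index) on the shortest path
        for _ in range(n - 1):       # Bellman-Ford on the residual graph
            for u in range(n):
                if dist[u] == float('inf'):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
        if dist[t] == float('inf'):
            return flow, total_cost
        push, v = float('inf'), t    # bottleneck capacity along the path
        while v != s:
            u, i = prev[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t                        # push flow along the path
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[graph[u][i][0]][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]

# Toy instance: source 0 -> proposals 1,2 -> time slots 3,4 -> sink 5,
# with edge costs standing in for slew-cost / priority penalties.
edges = [[0, 1, 1, 0], [0, 2, 1, 0],
         [1, 3, 1, 2], [1, 4, 1, 5],
         [2, 3, 1, 1], [2, 4, 1, 3],
         [3, 5, 1, 0], [4, 5, 1, 0]]
flow, cost = min_cost_max_flow(6, edges, 0, 5)  # both proposals scheduled, cost 5
```

Here the optimum assigns proposal 1 to slot 3 and proposal 2 to slot 4 (cost 2 + 3 = 5), beating the alternative matching at cost 6.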

  11. Feedback control for unsteady flow and its application to the stochastic Burgers equation

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon; Temam, Roger; Moin, Parviz; Kim, John

    1993-01-01

    The study applies mathematical methods of control theory to the problem of control of fluid flow, with the long-range objective of developing effective methods for the control of turbulent flows. Model problems are employed, through the formalism and language of control theory, to show how the problem of controlling turbulence can be cast as a problem in optimal control theory. Methods of the calculus of variations, via the adjoint state and gradient algorithms, are used to present a suboptimal control and feedback procedure for stationary and time-dependent problems. Two types of controls are investigated: distributed and boundary controls. Several cases of both controls are numerically simulated to investigate the performance of the control algorithm. Most cases considered show significant reductions of the costs to be minimized. The dependence of the control algorithm on the time-discretization method is discussed.

  12. ICASE Semiannual Report 1 October 1991 - 31 March 1992

    DTIC Science & Technology

    1992-05-01

    who have resident appointments for limited periods of time as well as by visiting and resident consultants. Members of NASA’s research staff may also...performed showing that the full optimization problem can be solved with a computational cost which is only a few times more than that of solving the PDE...The goal is to obtain a solution of the optimization problem in a computational cost which is just a few times (2-3) that of the flow solver. Such a

  13. Simulation of a 3D Turbulent Wavy Channel based on the High-order WENO Scheme

    NASA Astrophysics Data System (ADS)

    Tsai, Bor-Jang; Chou, Chung-Chyi; Tsai, Yeong-Pei; Chuang, Ying Hung

    2018-02-01

    Turbulent drag reduction by passive means is of interest as an effective way to reduce air vehicle fuel consumption costs. Most turbulence problems arising in nature and in engineering applications are caused by one or more turbulent shear flows. This study considered incompressible 3-D channel flow with a cyclic wavy boundary to explore the physical properties of turbulent flow, measuring the distribution of average velocity, instantaneous flow field shapes, turbulence quantities, and pressure distribution. Systematic computation and analysis of the 3-D flow field were also carried out, aimed at a clear understanding of the turbulence fields formed by the wavy boundary of the tube flow. The purpose of this research is to obtain systematic structural information about the turbulent flow field; features of the turbulence structure are discussed.

  14. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.
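The idea of filtering the level set field as a pre-processing step can be illustrated in 1-D. The three-point smoother and upwind advection below are generic textbook choices, not the specific filter or transport scheme of the paper.

```python
# 1-D illustration: filter the level set field, then take an advection step.
# Both operators are generic; grid, speed, and CFL number are invented.

def smooth(phi, eps=0.25):
    """One pass of a three-point filter: phi_i += eps*(phi_{i-1} - 2 phi_i + phi_{i+1})."""
    n = len(phi)
    return [phi[i] + eps * (phi[(i - 1) % n] - 2 * phi[i] + phi[(i + 1) % n])
            for i in range(n)]

def advect(phi, u, dt, dx):
    """First-order upwind advection at constant speed u > 0, periodic grid."""
    n = len(phi)
    return [phi[i] - u * dt / dx * (phi[i] - phi[(i - 1) % n]) for i in range(n)]

# Signed-distance-like level set for an interval [0.25, 0.75] on a periodic grid.
n, dx, u, dt = 64, 1.0 / 64, 1.0, 0.5 / 64        # CFL = 0.5
phi = [min(abs(i * dx - 0.25), abs(i * dx - 0.75))
       * (1 if 0.25 <= i * dx <= 0.75 else -1) for i in range(n)]
phi = advect(smooth(phi), u, dt, dx)              # filter, then transport
```

Both operators here are convex combinations of neighboring values, so the filtered-then-advected field stays bounded by the initial extrema, which is the kind of stabilizing behavior a pre-processing filter is meant to provide.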

  15. Software Process Improvement through the Removal of Project-Level Knowledge Flow Obstacles: The Perceptions of Software Engineers

    ERIC Educational Resources Information Center

    Mitchell, Susan Marie

    2012-01-01

    Uncontrollable costs, schedule overruns, and poor end product quality continue to plague the software engineering field. Innovations formulated with the expectation to minimize or eliminate cost, schedule, and quality problems have generally fallen into one of three categories: programming paradigms, software tools, and software process…

  16. Adjoint shape optimization for fluid-structure interaction of ducted flows

    NASA Astrophysics Data System (ADS)

    Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.

    2018-03-01

    Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by systematic use of the formal Lagrange calculus. Solutions of both the primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm to compute the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimizing the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.

  17. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  18. Operational flow visualization techniques in the Langley Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Corlett, W. A.

    1982-01-01

    The flow visualization techniques used in daily operation of the Unitary Plan Wind Tunnel (UPWT) are described. New ideas for improving the quality of established flow visualization methods are being developed, and programs on promising new flow visualization techniques are being pursued. The UPWT is a supersonic facility, referred to as a production facility, although the majority of tests are in-house basic research investigations. The facility has two 4 ft by 4 ft test sections which together span a Mach range from 1.5 to 4.6; the cost of operation is about $10 per minute. The main problems are the time required for a flow visualization test setup, investigation costs, and the ability to obtain consistently repeatable results. Examples of sublimation, vapor screen, oil flow, minituft, schlieren, and shadowgraph results obtained in UPWT are presented. All tests in UPWT employ one or more of these flow visualization techniques.

  19. A 3-D turbulent flow analysis using finite elements with k-ɛ model

    NASA Astrophysics Data System (ADS)

    Okuda, H.; Yagawa, G.; Eguchi, Y.

    1989-03-01

    This paper describes a finite element turbulent flow analysis suitable for three-dimensional large scale problems. The k-ɛ turbulence model as well as the conservation equations of mass and momentum are discretized in space using rather low order elements. The resulting coefficient matrices are evaluated by one-point quadrature in order to reduce the computational storage and the CPU cost. A time integration scheme based on the velocity correction method is employed to obtain steady state solutions. For verification of the FEM program, two-dimensional plenum flow is simulated and compared with experiment. As an application to three-dimensional practical problems, the turbulent flows in the upper plenum of a fast breeder reactor are calculated for various boundary conditions.

  20. Sperry Low Temperature Geothermal Conversion System, Phase 1 and Phase 2. Volume 3: Systems description

    NASA Astrophysics Data System (ADS)

    Matthews, H. B.

    The major fraction of hydrothermal resources with the prospect of economic usefulness for the generation of electricity is in the 300°F to 425°F temperature range. Cost effective conversion of geothermal energy to electricity requires new ideas to improve conversion efficiency, enhance brine flow, reduce plant costs, increase plant availability, and shorten the time between investment and return. The problems addressed are those inherent in the geothermal environment, in the binary fluid cycle, in the difficulty of efficiently converting the energy of a low temperature resource, and in geothermal economics. Some of these problems are: the energy expended by the downhole pump; the difficulty of designing reliable downhole equipment; fouling of heat exchanger surfaces by geothermal fluids; the unavailability of condenser cooling water at most geothermal sites; the large portion of the available energy used by the feed pump in a binary system; the pinch effect, a loss in available energy in transferring heat from water to an organic fluid; flow losses in fluids that carry only a small amount of useful energy to begin with; high heat exchanger costs (the lower the temperature interval of the cycle, the higher the heat exchanger cost in $/kW); the complexity and cost of the many auxiliary elements of proposed geothermal plants; and the unfavorable cash flow vs. investment curve caused by the many years of investment required to bring a field into production before any income is realized.

  1. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
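The role of the costate variable can be sketched on a scalar analogue (a toy state equation invented here, not the airfoil design problem): for state equation g(u, a) = a·u − 1 = 0 and cost J(u) = ½(u − 1)², the costate p solves (∂g/∂u)ᵀp = ∂J/∂u, and the design gradient follows without differentiating the state solve.

```python
# Scalar analogue of the costate (Lagrange multiplier) gradient:
# state equation  g(u, a) = a*u - 1 = 0,  cost  J(u) = 0.5*(u - 1)^2.
# Costate: (dg/du)^T p = dJ/du  ->  a*p = u - 1.
# Design gradient: dJ/da = -p * dg/da = -p * u.

def gradient_via_costate(a):
    u = 1.0 / a              # solve the state equation a*u = 1
    p = (u - 1.0) / a        # costate solve: a*p = u - 1
    return -p * u            # dJ/da

def gradient_finite_difference(a, h=1e-6):
    J = lambda a: 0.5 * (1.0 / a - 1.0) ** 2
    return (J(a + h) - J(a - h)) / (2 * h)

g_adj = gradient_via_costate(2.0)          # 0.125
g_fd = gradient_finite_difference(2.0)     # agrees to ~1e-10
```

The payoff, as in the paper, is that one extra (co)state solve yields the exact gradient with respect to all design variables, instead of one perturbed flow solve per variable.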

  2. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  3. DEP : a computer program for evaluating lumber drying costs and investments

    Treesearch

    Stewart Holmes; George B. Harpole; Edward Bilek

    1983-01-01

    The DEP computer program is a modified discounted cash flow computer program designed for analysis of problems involving economic analysis of wood drying processes. Wood drying processes are different from other processes because of the large amounts of working capital required to finance inventories, and because of relatively large shares of costs charged to inventory...

  4. Steady flow model user's guide

    NASA Astrophysics Data System (ADS)

    Doughty, C.; Hellstrom, G.; Tsang, C. F.; Claesson, J.

    1984-07-01

    Sophisticated numerical models that solve the coupled mass and energy transport equations for nonisothermal fluid flow in a porous medium have been used to match analytical results and field data for aquifer thermal energy storage (ATES) systems. As a simpler alternative for the ATES problem, the Steady Flow Model (SFM), a simplified but fast numerical model, was developed. A steady, purely radial flow field is prescribed in the aquifer and incorporated into the heat transport equation, which is then solved numerically. While the radial flow assumption limits the range of ATES systems that can be studied using the SFM, it greatly simplifies use of the code: the preparation of input is quite simple compared to that for a sophisticated coupled mass and energy model, and the SFM is far cheaper to run. The simple flow field allows use of a special calculational mesh that eliminates the numerical dispersion usually associated with the numerical solution of convection problems. The problem is defined, the algorithms used to solve it are outlined, and the input and output for the SFM are described.

  5. Identifying High-Rate Flows Based on Sequential Sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Fang, Binxing; Luo, Hao

    We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement, and network security, such as the detection of distributed denial of service attacks. It is difficult to identify high-rate flows directly in backbone links because tracking millions of flows requires correspondingly large high-speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted, as also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost, and low processing cost; most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first is based on a fixed sample size test (FSST) and is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. We therefore propose a second, novel method based on a truncated sequential probability ratio test (TSPRT). Through sequential sampling, TSPRT is able to remove low-rate flows and identify high-rate flows at an early stage, which reduces the memory cost and the identification time, respectively. According to the way the parameters in TSPRT are determined, two versions are proposed: TSPRT-M, suitable when low memory cost is preferred, and TSPRT-T, suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement, as compared to previously proposed methods.
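The sequential test underlying TSPRT can be sketched as follows. The thresholds are Wald's classical SPRT bounds, and the hypothesis rates, error levels, and truncation point are all illustrative rather than the paper's tuned parameters.

```python
# Sketch of a truncated sequential probability ratio test: decide whether a
# flow's share p of sampled packets is at least p1 (high-rate) or at most p0.
import math, random

def sprt(samples, p0=0.01, p1=0.05, alpha=0.01, beta=0.01, max_n=5000):
    upper = math.log((1 - beta) / alpha)   # accept H1 (high-rate) threshold
    lower = math.log(beta / (1 - alpha))   # accept H0 (low-rate) threshold
    llr = 0.0                              # cumulative log-likelihood ratio
    n = 0
    for n, hit in enumerate(samples, 1):
        if hit:                            # sampled packet belongs to this flow
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 'high-rate', n
        if llr <= lower:
            return 'low-rate', n
        if n >= max_n:                     # truncation: force a decision
            return ('high-rate' if llr > 0 else 'low-rate'), n
    return 'undecided', n

random.seed(0)
# A genuinely high-rate flow: 10% of sampled packets belong to it.
fast = (random.random() < 0.10 for _ in range(5000))
verdict, n_used = sprt(fast)
```

Because the log-likelihood ratio drifts quickly for flows far from the decision boundary, such a flow is typically classified after only a few dozen samples, which is the early-removal property the abstract describes.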

  6. Adaptive LES Methodology for Turbulent Flow Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleg V. Vasilyev

    2008-06-12

    Although turbulent flows are common in the world around us, a solution to the fundamental equations that govern turbulence still eludes the scientific community. Turbulence has often been called one of the last unsolved problems in classical physics, yet it is clear that the need to accurately predict the effect of turbulent flows impacts virtually every field of science and engineering. As an example, a critical step in making modern computational tools useful in designing aircraft is to be able to accurately predict the lift, drag, and other aerodynamic characteristics in numerical simulations in a reasonable amount of time. Simulations that take months to years to complete are much less useful to the design cycle. Much work has been done toward this goal (Lee-Rausch et al. 2003, Jameson 2003), and as cost-effective, accurate tools for simulating turbulent flows evolve, we will all benefit from new scientific and engineering breakthroughs. The problem of simulating high Reynolds number (Re) turbulent flows of engineering and scientific interest would have been solved with the advent of Direct Numerical Simulation (DNS) techniques if unlimited computing power, memory, and time could be applied to each particular problem. Yet, given current and near-future computational resources and a reasonable limit on the amount of time an engineer or scientist can wait for a result, the DNS technique will not be useful for more than 'unit' problems for the foreseeable future (Moin & Kim 1997, Jimenez & Moin 1991). The high computational cost of the DNS of three-dimensional turbulent flows results from the fact that they have eddies of significant energy in a range of scales from the characteristic length scale of the flow all the way down to the Kolmogorov length scale. The actual cost of doing a three-dimensional DNS scales as Re^(9/4) due to the large disparity in scales that need to be fully resolved. 
State-of-the-art DNS calculations of isotropic turbulence have recently been completed at the Japanese Earth Simulator (Yokokawa et al. 2002, Kaneda et al. 2003) using a resolution of 4096^3 (approximately 10^11) grid points with a Taylor-scale Reynolds number of 1217 (Re ≈ 10^6). Impressive as these calculations are, performed on one of the world's fastest supercomputers, more brute computational power would be needed to simulate the flow over the fuselage of a commercial aircraft at cruising speed. Such a calculation would require on the order of 10^16 grid points and would have a Reynolds number in the range of 10^8; it would take several thousand years to simulate one minute of flight time on today's fastest supercomputers (Moin & Kim 1997). Even using state-of-the-art zonal approaches, which allow DNS calculations that resolve the necessary range of scales within predefined 'zones' in the flow domain, this calculation would take far too long for the result to be of engineering interest when it is finally obtained. Since computing power, memory, and time are all scarce resources, the problem of simulating turbulent flows has become one of how to abstract or simplify the complexity of the physics represented in the full Navier-Stokes (NS) equations in such a way that the 'important' physics of the problem is captured at a lower cost. To do this, a portion of the modes of the turbulent flow field needs to be approximated by a low-order model that is cheaper than the full NS calculation. This model can then be used along with a numerical simulation of the 'important' modes of the problem that cannot be well represented by the model. The decision of what part of the physics to model and what kind of model to use has to be based on what physical properties are considered 'important' for the problem. 
It should be noted that 'nothing is free', so any use of a low-order model will by definition lose some information about the original flow.
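The Re^(9/4) grid scaling quoted above can be checked with a line of arithmetic: the Kolmogorov scale shrinks as Re^(-3/4) relative to the integral scale, so points per direction grow as Re^(3/4) and total points as Re^(9/4).

```python
# Back-of-the-envelope DNS grid scaling: N ~ Re^(9/4).

def dns_grid_points(Re):
    """Grid points needed to resolve down to the Kolmogorov scale (up to a constant)."""
    return Re ** (9 / 4)

# Going from Re ~ 10^6 (roughly the Earth Simulator run cited above) to
# Re ~ 10^8 (a full aircraft): each factor of 100 in Re costs ~10^4.5 = ~31623x
# more grid points, consistent with the 10^11 -> 10^16 jump in the abstract.
ratio = dns_grid_points(1e8) / dns_grid_points(1e6)
```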

  7. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. 
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  8. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE PAGES

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; ...

    2016-07-13

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. 
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  9. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar

    2016-07-01

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. 
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
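The ill-posedness analysis in this record — examining the eigenvalue spectrum of the Hessian of the cost functional — can be illustrated on a toy linear inverse problem. The sketch below is purely illustrative (the Gaussian smoothing matrix stands in for the heat-flux-to-velocity map; it is not the paper's ice flow model): the misfit Hessian's rapidly decaying eigenvalues show why short-wavelength variations in the inferred field are hard to recover, and why a Tikhonov term is needed for well-posedness.

```python
import numpy as np

# Toy linear inverse problem: d = G q + noise, where G is a smoothing
# (convolution-like) forward map standing in for the heat-flux-to-velocity map.
n = 50
x = np.linspace(0.0, 1.0, n)
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
G /= G.sum(axis=1, keepdims=True)

alpha = 1e-3                      # Tikhonov regularization weight
H_misfit = G.T @ G                # Gauss-Newton Hessian of the data misfit
H = H_misfit + alpha * np.eye(n)  # regularized Hessian

# Rapid eigenvalue decay of the misfit Hessian signals ill-posedness:
# short-wavelength components of q are barely visible in the data.
eigs = np.sort(np.linalg.eigvalsh(H_misfit))[::-1]
print(eigs[0] / max(abs(eigs[-1]), 1e-300))  # very large spread
```

The regularized Hessian H stays positive definite (its spectrum is bounded below by roughly alpha), which is exactly the well-posedness the Tikhonov term buys.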

  10. Contractor Productivity Measurement.

    DTIC Science & Technology

    1984-06-01

Principles (GAAP) or Uniform Cost Accounting Standards (CAS) as detailed in Federal Acquisition Regulation (FAR) Part 30 and DOD FAR Supplement, Appendix 0...revaluation management input, investor input, taxes, depreciation, etc., are all called out and addressed. The treatment of potential problems such as...of 20 percent. Since many cash flow items require tracking of book value, depreciation and cost-reducing effects of the investment, these items are

  11. A similarity score-based two-phase heuristic approach to solve the dynamic cellular facility layout for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Singh, Surya Prakash

    2017-11-01

The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that the efficient design of a DCFLP reduces the manufacturing cost of products by maintaining the minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, being NP-hard, the DCFLP is very difficult to solve optimally in reasonable time. Therefore, this article proposes a novel similarity score-based two-phase heuristic approach to solve the DCFLP, considering multiple products manufactured over multiple time periods in the layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase, which minimizes inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can solve the DCFLP optimally in reasonable time.
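The phase-1 idea — scoring machine pairs by how many products they share and clustering similar machines into cells — can be sketched as below. The data, the Jaccard-style score, and the greedy merge rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical phase-1 sketch: group machines into cells by a Jaccard-style
# similarity score over the products each machine processes.
machine_products = {
    "M1": {"P1", "P2"},
    "M2": {"P1", "P2", "P3"},
    "M3": {"P4"},
    "M4": {"P3", "P4"},
}

def similarity(a, b):
    """Jaccard similarity of the product sets of machines a and b."""
    sa, sb = machine_products[a], machine_products[b]
    return len(sa & sb) / len(sa | sb)

def greedy_cells(threshold=0.3):
    """Merge each machine into the first cell containing a similar member."""
    cells = []
    for m in machine_products:
        for cell in cells:
            if any(similarity(m, other) >= threshold for other in cell):
                cell.add(m)
                break
        else:
            cells.append({m})
    return cells

print(greedy_cells())  # two cells: {M1, M2} and {M3, M4}
```

The resulting clusters would then seed the second phase, where material handling and rearrangement costs are evaluated over the planning horizon.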

  12. Dealing With Shallow-Water Flow in the Deepwater Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Ostermeier, R.

    2006-05-01

Some of the Shell experience in dealing with the shallow-water flow problem in the Deepwater Gulf of Mexico (GOM) will be presented. The nature of the problem, including areal extent and over-pressuring mechanisms, will be discussed. Methods for sand prediction and shallow sediment and flow characterization will be reviewed. These include seismic techniques, the use of geo-technical wells, regional trends, and various MWD methods. Some examples of flow incidents with pertinent drilling issues, including well failures and abandonment, will be described. To address the shallow-water flow problem, Shell created a multi-disciplinary team of specialists in geology, geophysics, petrophysics, drilling, and civil engineering. The team developed several methodologies to deal with various aspects of the problem. These include regional trends and databases, shallow seismic interpretation and sand prediction, well site and casing point selection, geo-technical well design and data interpretation, logging program design and interpretation, cementing design and fluids formulation, methods for remediation and mitigation of lost circulation, and so on. Shell's extensive Deepwater GOM drilling experience has led to new understanding of the problem. Examples include delineation of trends in shallow-water flow occurrence and severity, trends and departures in PP/FG, rock properties pertaining to seismic identification of sands, and so on. New knowledge has also been acquired through the use of geo-technical wells. One example is the observed rapid onset and growth of over-pressures below the mudline. Total trouble costs due to shallow-water flow for all GOM operators almost certainly run into several hundred million dollars. Though the problem remains a concern, advances in our knowledge and understanding make it a problem that is manageable and not the "show stopper" once feared.

  13. Large-eddy simulation of a boundary layer with concave streamwise curvature

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.

    1994-01-01

Turbulence modeling continues to be one of the most difficult problems in fluid mechanics. Existing prediction methods are well developed for certain classes of simple equilibrium flows, but are still not entirely satisfactory for a large category of complex non-equilibrium flows found in engineering practice. Direct and large-eddy simulation (LES) approaches have long been believed to have great potential for the accurate prediction of difficult turbulent flows, but the associated computational cost has been prohibitive for practical problems. This remains true for direct simulation but is no longer clear for large-eddy simulation. Advances in computer hardware, numerical methods, and subgrid-scale modeling have made it possible to conduct LES for flows of practical interest at Reynolds numbers in the range of laboratory experiments. The objective of this work is to apply LES and the dynamic subgrid-scale model to the flow of a boundary layer over a concave surface.

  14. Reactive power planning under high penetration of wind energy using Benders decomposition

    DOE PAGES

    Xu, Yan; Wei, Yanli; Fang, Xin; ...

    2015-11-05

This study addresses the optimal allocation of volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this condition involves a high level of uncertainty because of the characteristics of wind power. To properly model wind generation uncertainty, a multi-scenario optimal power flow framework is developed that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingencies. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. RPP under this condition is therefore modelled as a two-stage stochastic programming problem that optimises the VAR location and size in one stage and minimises the fuel cost in the other stage, eventually finding the global optimal RPP results iteratively. Benders decomposition is used to solve this model with an upper-level problem (master problem) for VAR allocation optimisation and a lower-level problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIGs) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
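The master/sub-problem iteration of Benders decomposition can be sketched on a deliberately tiny two-stage problem. All numbers and the build-or-buy structure below are illustrative assumptions, not the paper's RPP model: a binary first-stage decision y (build a VAR source at investment cost c), and an analytically solvable second stage that buys any remaining support at unit cost q.

```python
# Minimal Benders sketch on a toy two-stage problem (illustrative numbers,
# not the paper's RPP model): y in {0,1} is the investment decision; the
# second stage buys remaining support x at unit cost q.
c, q, d, M = 5.0, 2.0, 10.0, 6.0   # investment cost, unit cost, demand, VAR size

def subproblem(y):
    """Second stage: min q*x s.t. x >= d - M*y, x >= 0 (solved analytically)."""
    x = max(0.0, d - M * y)
    # Benders optimality cut from the subproblem dual: theta >= q*(d - M*y).
    cut = lambda yy: q * (d - M * yy)
    return q * x, cut

cuts, ub, lb = [], float("inf"), -float("inf")
while ub - lb > 1e-9:
    # Master problem: enumerate y in {0,1}; theta bounded below by all cuts.
    lb, y = min(
        (c * yy + max([0.0] + [cut(yy) for cut in cuts]), yy) for yy in (0, 1)
    )
    cost, cut = subproblem(y)
    ub = min(ub, c * y + cost)     # feasible solution gives an upper bound
    cuts.append(cut)

print(y, ub)  # builds the source: y = 1, total cost 13.0
```

The loop alternates master and sub-problem until the lower bound from the cuts meets the best upper bound, mirroring the iterative global search described in the abstract.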

  15. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    NASA Astrophysics Data System (ADS)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to, the Pareto front. The ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective genetic algorithms (MGAs) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.
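The core operation any MGA repeats is extracting the non-dominated (Pareto-optimal) subset of candidate solutions. A minimal sketch, with made-up objective vectors (here both coordinates are minimized, e.g. cost and negated hydropower):

```python
# Sketch: extract the Pareto-optimal set from candidate solutions scored on
# two objectives, both to be minimized.  Data is illustrative only.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

candidates = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 4.0), (5.0, 5.0)]
print(pareto_front(candidates))  # [(1.0, 9.0), (2.0, 7.0), (4.0, 4.0)]
```

In a full MGA this filter is combined with selection, crossover, and mutation so the population converges toward, and spreads along, the Pareto front.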

  16. Implicit, nonswitching, vector-oriented algorithm for steady transonic flow

    NASA Technical Reports Server (NTRS)

    Lottati, I.

    1983-01-01

Many areas of aerodynamic technology require the rapid computation of a sequence of transonic flow solutions. Low-cost vector array processors make such calculations economically feasible. However, to fully utilize the new hardware, algorithms must take advantage of the special characteristics of the vector array processor. The objective of the present investigation is to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.

  17. Fleet Sizing of Automated Material Handling Using Simulation Approach

    NASA Astrophysics Data System (ADS)

    Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny

    2018-03-01

Automated material handling tends to be chosen over manual handling on the production floors of manufacturing companies. One critical issue in implementing automated material handling is the design phase, which must ensure that material handling is efficient in terms of cost. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production, where the optimum is defined as minimum flow time and maximum capacity on the production floor. Simulation is used because the flow shop can be modelled as a queuing network in which inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore the solution of the multi-objective fleet sizing problem in flow shop production using a simulation approach implemented in ARENA software.

  18. Optimal partial mass transportation and obstacle Monge-Kantorovich equation

    NASA Astrophysics Data System (ADS)

    Igbida, Noureddine; Nguyen, Van Thanh

    2018-05-01

Optimal partial mass transport, which is a variant of the optimal transport problem, consists in effectively transporting a prescribed amount of mass from a source to a target. The problem was first studied by Caffarelli and McCann (2010) [6] and Figalli (2010) [12] with particular attention to the quadratic cost. Our aim here is to study the optimal partial mass transport problem with Finsler distance costs, including the Monge cost given by the Euclidean distance. Our approach is different and our results do not follow from previous works. Among our results, we introduce a PDE of Monge-Kantorovich type with a double obstacle to characterize active submeasures, the Kantorovich potential and the optimal flow for the optimal partial transport problem. This new PDE enables us to study uniqueness and monotonicity results for the active submeasures. Another interesting aspect of our approach is its convenience for numerical analysis and computations, which we develop in a separate paper [14] (Igbida and Nguyen, 2018).
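In the discrete setting, partial transport is a small linear program: move exactly m units of mass, with row sums bounded by the source measure and column sums by the target measure. A hedged sketch using `scipy.optimize.linprog` with illustrative data (this is the discrete LP analogue, not the paper's PDE formulation):

```python
import numpy as np
from scipy.optimize import linprog

# Discrete partial transport as an LP (illustrative sketch): move exactly m
# units of mass; row sums bounded by the source mu, column sums by the
# target nu; minimize sum_ij c_ij x_ij.
mu = np.array([0.5, 0.5])          # source masses
nu = np.array([0.5, 0.5])          # target capacities
m = 0.6                            # mass to transport (<= min(sum mu, sum nu))
C = np.array([[1.0, 4.0],
              [4.0, 1.0]])         # ground costs

ns, nt = len(mu), len(nu)
A_ub = np.zeros((ns + nt, ns * nt))
for i in range(ns):                # row sums <= mu
    A_ub[i, i * nt:(i + 1) * nt] = 1.0
for j in range(nt):                # column sums <= nu
    A_ub[ns + j, j::nt] = 1.0
b_ub = np.concatenate([mu, nu])
A_eq = np.ones((1, ns * nt))       # total transported mass = m
res = linprog(C.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[m],
              bounds=(0, None), method="highs")
plan = res.x.reshape(ns, nt)
print(res.fun)  # 0.6: all mass moves along the cheap diagonal
```

Here the optimizer puts the 0.6 units on the two unit-cost diagonal routes; any off-diagonal assignment would cost four times as much per unit.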

  19. Airfoil optimization for unsteady flows with application to high-lift noise reduction

    NASA Astrophysics Data System (ADS)

    Rumpfkeil, Markus Peer

    The use of steady-state aerodynamic optimization methods in the computational fluid dynamic (CFD) community is fairly well established. In particular, the use of adjoint methods has proven to be very beneficial because their cost is independent of the number of design variables. The application of numerical optimization to airframe-generated noise, however, has not received as much attention, but with the significant quieting of modern engines, airframe noise now competes with engine noise. Optimal control techniques for unsteady flows are needed in order to be able to reduce airframe-generated noise. In this thesis, a general framework is formulated to calculate the gradient of a cost function in a nonlinear unsteady flow environment via the discrete adjoint method. The unsteady optimization algorithm developed in this work utilizes a Newton-Krylov approach since the gradient-based optimizer uses the quasi-Newton method BFGS, Newton's method is applied to the nonlinear flow problem, GMRES is used to solve the resulting linear problem inexactly, and last but not least the linear adjoint problem is solved using Bi-CGSTAB. The flow is governed by the unsteady two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized using structured grids and a finite difference approach. The effectiveness of the unsteady optimization algorithm is demonstrated by applying it to several problems of interest including shocktubes, pulses in converging-diverging nozzles, rotating cylinders, transonic buffeting, and an unsteady trailing-edge flow. In order to address radiated far-field noise, an acoustic wave propagation program based on the Ffowcs Williams and Hawkings (FW-H) formulation is implemented and validated. 
The general framework is then used to derive the adjoint equations for a novel hybrid URANS/FW-H optimization algorithm in order to be able to optimize the shape of airfoils based on their calculated far-field pressure fluctuations. Validation and application results for this novel hybrid URANS/FW-H optimization algorithm show that it is possible to optimize the shape of an airfoil in an unsteady flow environment to minimize its radiated far-field noise while maintaining good aerodynamic performance.
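The key property exploited above — one adjoint solve yields the full gradient regardless of how many design variables there are — can be checked on a toy linear "flow": u solves A u = b(theta) and the cost is a quadratic misfit. Everything below is an illustrative stand-in, not the thesis's Navier-Stokes framework; the adjoint gradient is verified against central finite differences.

```python
import numpy as np

# Toy discrete adjoint sketch: the "flow" u solves A u = b(theta); the cost is
# J = 0.5 * ||u - u_star||^2.  One adjoint solve gives dJ/dtheta for all p
# design variables at once.  Illustrative only.
rng = np.random.default_rng(0)
n, p = 8, 3
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned operator
B = rng.standard_normal((n, p))                     # db/dtheta (b linear in theta)
u_star = rng.standard_normal(n)

def J(theta):
    u = np.linalg.solve(A, B @ theta)
    return 0.5 * np.sum((u - u_star) ** 2)

def grad_adjoint(theta):
    u = np.linalg.solve(A, B @ theta)
    lam = np.linalg.solve(A.T, u - u_star)   # adjoint equation
    return B.T @ lam                          # dJ/dtheta = (db/dtheta)^T lam

theta = rng.standard_normal(p)
g = grad_adjoint(theta)
# Central finite-difference check of the adjoint gradient.
eps = 1e-6
g_fd = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
                 for e in np.eye(p)])
print(np.max(np.abs(g - g_fd)))  # small: the two gradients agree
```

The finite-difference check costs 2p solves, while the adjoint gradient costs one forward and one adjoint solve, which is why adjoints dominate for many design variables.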

  20. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
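The motivation for multigrid in this record — plain Krylov solvers need more iterations as the grid is refined, while a good multigrid method converges almost mesh-independently — can be demonstrated with a pure-scipy sketch (MODFLOW, PCG2, and AMG themselves are not used here; the growth of CG iteration counts with resolution is the point):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# CG iteration counts on a 2-D Poisson problem grow with resolution; this
# mesh-dependence is what multigrid methods like AMG are designed to remove.
def poisson2d(n):
    """Standard 5-point Laplacian on an n x n grid."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

def cg_iterations(n):
    A = poisson2d(n)
    b = np.ones(A.shape[0])
    count = {"it": 0}
    def cb(xk):
        count["it"] += 1
    x, info = cg(A, b, callback=cb)
    assert info == 0  # converged
    return count["it"]

it16, it32 = cg_iterations(16), cg_iterations(32)
print(it16, it32)  # iteration count grows as the grid is refined
```

For ground-water models with millions of cells, this growth (on top of the per-iteration cost) is exactly the convergence bottleneck the abstract describes.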

  1. Resolvent analysis of shear flows using One-Way Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Rigas, Georgios; Schmidt, Oliver; Towne, Aaron; Colonius, Tim

    2017-11-01

    For three-dimensional flows, questions of stability, receptivity, secondary flows, and coherent structures require the solution of large partial-derivative eigenvalue problems. Reduced-order approximations are thus required for engineering prediction since these problems are often computationally intractable or prohibitively expensive. For spatially slowly evolving flows, such as jets and boundary layers, the One-Way Navier-Stokes (OWNS) equations permit a fast spatial marching procedure that results in a huge reduction in computational cost. Here, an adjoint-based optimization framework is proposed and demonstrated for calculating optimal boundary conditions and optimal volumetric forcing. The corresponding optimal response modes are validated against modes obtained in terms of global resolvent analysis. For laminar base flows, the optimal modes reveal modal and non-modal transition mechanisms. For turbulent base flows, they predict the evolution of coherent structures in a statistical sense. Results from the application of the method to three-dimensional laminar wall-bounded flows and turbulent jets will be presented. This research was supported by the Office of Naval Research (N00014-16-1-2445) and Boeing Company (CT-BA-GTA-1).

  2. Stack Gas Scrubber Makes the Grade

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1975

    1975-01-01

Describes a year-long test of successful sulfur dioxide removal from stack gas with a calcium oxide slurry. Sludge disposal problems are discussed. Cost is estimated at 0.6 mill per kWh, not including sludge removal. A flow diagram and equations are included. (GH)

  3. Existence of Optimal Controls for Compressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Doboszczak, Stefan; Mohan, Manil T.; Sritharan, Sivaguru S.

    2018-03-01

    We formulate a control problem for a distributed parameter system where the state is governed by the compressible Navier-Stokes equations. Introducing a suitable cost functional, the existence of an optimal control is established within the framework of strong solutions in three dimensions.

  4. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  5. An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.

  6. Bus-based park-and-ride system: a stochastic model on multimodal network with congestion pricing schemes

    NASA Astrophysics Data System (ADS)

    Liu, Zhiyuan; Meng, Qiang

    2014-05-01

    This paper focuses on modelling the network flow equilibrium problem on a multimodal transport network with bus-based park-and-ride (P&R) system and congestion pricing charges. The multimodal network has three travel modes: auto mode, transit mode and P&R mode. A continuously distributed value-of-time is assumed to convert toll charges and transit fares to time unit, and the users' route choice behaviour is assumed to follow the probit-based stochastic user equilibrium principle with elastic demand. These two assumptions have caused randomness to the users' generalised travel times on the multimodal network. A comprehensive network framework is first defined for the flow equilibrium problem with consideration of interactions between auto flows and transit (bus) flows. Then, a fixed-point model with unique solution is proposed for the equilibrium flows, which can be solved by a convergent cost averaging method. Finally, the proposed methodology is tested by a network example.
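The "convergent cost averaging method" for the fixed-point model resembles a method-of-successive-averages iteration, which can be sketched on a toy two-route network with a logit route choice. The network, cost functions, and logit parameter below are illustrative assumptions, not the paper's multimodal model:

```python
import math

# MSA-style averaging sketch for a stochastic equilibrium x = F(x) on a toy
# two-route network with logit route choice.  Numbers are illustrative.
D = 100.0                                           # total demand
t = lambda x1: (10 + x1 / 10, 15 + (D - x1) / 20)   # congested route times

def logit_load(x1, theta=0.5):
    """Demand on route 1 implied by current times (the map F in x = F(x))."""
    t1, t2 = t(x1)
    p1 = math.exp(-theta * t1) / (math.exp(-theta * t1) + math.exp(-theta * t2))
    return D * p1

x1 = D / 2
for k in range(1, 2000):
    # Averaging step with diminishing weights: x <- x + (F(x) - x) / (k + 1).
    x1 += (logit_load(x1) - x1) / (k + 1)

print(x1, abs(logit_load(x1) - x1))  # near the fixed point
```

Plain fixed-point iteration can oscillate or diverge here; the diminishing averaging weights are what make the scheme convergent, which is the role the cost averaging method plays in the paper's solution procedure.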

  7. The Development of Patient Scheduling Groups for an Effective Appointment System

    PubMed Central

    2016-01-01

Summary Background Patient access to care and long wait times have been identified as major problems in outpatient delivery systems. These aspects impact medical staff productivity, service quality, clinic efficiency, and health-care cost. Objectives This study proposed to redesign existing patient types into scheduling groups so that the total cost of clinic flow and scheduling flexibility was minimized. The optimal scheduling group aimed to improve clinic efficiency and accessibility. Methods The proposed approach used the simulation optimization technique and was demonstrated in a Primary Care physician clinic. Patient types included emergency/urgent care (ER/UC), follow-up (FU), new patient (NP), office visit (OV), physical exam (PE), and well child care (WCC). One scheduling group was designed for this physician. The approach steps were to collect physician treatment time data for each patient type, form the possible scheduling groups, simulate daily clinic flow and patient appointment requests, calculate costs of clinic flow as well as appointment flexibility, and find the scheduling group that minimized the total cost. Results The cost of clinic flow was minimized at the scheduling group of four, an 8.3% reduction from the group of one. The four groups were: 1. WCC, 2. OV, 3. FU and ER/UC, and 4. PE and NP. The cost of flexibility was always minimized at the group of one. The total cost was minimized at the group of two. WCC was considered separate and the others were grouped together. The total cost reduction was 1.3% from the group of one. Conclusions This study provided an alternative method of redesigning patient scheduling groups to address the impact on both clinic flow and appointment accessibility. Balancing the two ensured feasibility with respect to the recognized issues of patient service and access to care. The robustness of the proposed method to changes in clinic conditions was also discussed. PMID:27081406

  8. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

Accurate characterization of subsurface hydraulic conductivity is vital for modeling subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
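The Kalman-type analysis step that an (iterative) ensemble smoother repeats can be sketched on a toy linear-Gaussian problem. The GP-surrogate acceleration is omitted here, and all dimensions, the linear forward map, and the noise level are illustrative assumptions:

```python
import numpy as np

# One ensemble-smoother analysis step on a toy linear problem; the update uses
# ensemble cross-covariances in place of derivatives.  Illustrative only.
rng = np.random.default_rng(42)
n_par, n_obs, n_ens = 5, 3, 200
H = rng.standard_normal((n_obs, n_par))      # forward map (linear here)
x_true = rng.standard_normal(n_par)
sigma = 0.1
d = H @ x_true + sigma * rng.standard_normal(n_obs)

X = rng.standard_normal((n_par, n_ens))      # prior ensemble
Y = H @ X                                    # predicted observations
Xc = X - X.mean(1, keepdims=True)            # ensemble anomalies
Yc = Y - Y.mean(1, keepdims=True)
C_xy = Xc @ Yc.T / (n_ens - 1)               # parameter-observation covariance
C_yy = Yc @ Yc.T / (n_ens - 1)               # observation covariance
R = sigma**2 * np.eye(n_obs)
K = C_xy @ np.linalg.inv(C_yy + R)           # Kalman-type gain
D = d[:, None] + sigma * rng.standard_normal((n_obs, n_ens))  # perturbed obs
Xa = X + K @ (D - Y)                         # updated (analysis) ensemble

err_prior = np.linalg.norm(H @ X.mean(1) - d)
err_post = np.linalg.norm(H @ Xa.mean(1) - d)
print(err_prior, err_post)  # data misfit drops after the update
```

In GPIES, the expensive step — evaluating Y for a large ensemble — is what the adaptively refined GP surrogate replaces, with the true model run only at the base points.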

  9. Effective Capital Provision Within Government. Methodologies for Right-Sizing Base Infrastructure

    DTIC Science & Technology

    2005-01-01

unknown distributions, since they more accurately represent the complexity of real-world problems. Forecasting uncertain future demand flows is critical to...ordering system with no time lags and no additional costs for instantaneous delivery, shortage and holding costs would be eliminated, because the...order a fixed quantity, Q. 4.1.4 Analyzed Time Step Time is an important dimension in inventory models, since the way the system changes over time affects

  10. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different. While the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed: oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite-dimensional approximations for the Hessians are constructed and preconditioners for gradient-based methods are derived from these approximate Hessians.

  11. Tankless Water Heater

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Kennedy Space Center specialists aided Space, Energy, Time Saving (SETS) Systems, Inc. in working out the problems they encountered with their new electronic "tankless" water heater. The flow switch design suffered intermittent problems. Hiring several testing and engineering firms produced only graphs, printouts, and a large expense, but no solutions. Then through the Kennedy Space Center/State of Florida Technology Outreach Program, SETS was referred to Michael Brooks, a 21-year space program veteran and flowmeter expert. Run throughout Florida to provide technical service to businesses at no cost, the program applies scientific and engineering expertise originally developed for space applications to the Florida business community. Brooks discovered several key problems, resulting in a new design that turned out to be simpler, yielding a 63 percent reduction in labor and material costs over the old design.

  12. Membrane development for vanadium redox flow batteries.

    PubMed

    Schwenzer, Birgit; Zhang, Jianlu; Kim, Soowhan; Li, Liyu; Liu, Jun; Yang, Zhenguo

    2011-10-17

    Large-scale energy storage has become the main bottleneck for increasing the percentage of renewable energy in our electricity grids. Redox flow batteries are considered to be among the best options for electricity storage in the megawatt range and large demonstration systems have already been installed. Although the full technological potential of these systems has not been reached yet, currently the main problem hindering more widespread commercialization is the high cost of redox flow batteries. Nafion, as the preferred membrane material, is responsible for about 11% of the overall cost of a 1 MW/8 MWh system. Therefore, in recent years two main membrane related research threads have emerged: 1) chemical and physical modification of Nafion membranes to optimize their properties with regard to vanadium redox flow battery (VRFB) application; and 2) replacement of the Nafion membranes with different, less expensive materials. This review summarizes the underlying basic scientific issues associated with membrane use in VRFBs and presents an overview of membrane-related research approaches aimed at improving the efficiency of VRFBs and making the technology cost-competitive. Promising research strategies and materials are identified and suggestions are provided on how materials issues could be overcome.

  13. Models based on the "out-of-kilter" algorithm

    NASA Astrophysics Data System (ADS)

    Adler, M. J.; Drobot, R.

    2012-04-01

    In the case of many water users along a river stretch, it is very important during low-flow and drought periods to develop an optimization model for water allocation that covers all needs under predefined constraints, depending on the Contingency Plan for drought management. Such a program was developed during the implementation of the WATMAN Project in Romania (WATMAN Project, 2005-2006, USTDA) for the Arges-Dambovita-Ialomita basin water transfers. This good practice was proposed for the WATER CoRe Project Good Practice Handbook for Drought Management (Interreg IVC, 2011), to be applied in the European regions. Two types of simulation-optimization models based on an improved version of the out-of-kilter algorithm as the optimization technique have been developed and used in Romania: • models supporting the short-term operation of a water management system (WMS); • models, generically named SIMOPT, that address long-term WMS operation and yield the statistical WMS functional parameters as their main results. A real WMS is modeled as an arcs-nodes network, so the real WMS operation problem becomes a problem of flows in networks. The nodes and oriented arcs, together with their characteristics such as lower and upper limits and associated costs, are the direct analog of the physical and operational WMS characteristics. Arcs represent both physical and conventional elements of the WMS, such as river branches, channels or pipes, water user demands or other water management requirements, tranches of reservoir volumes, and water levels in channels or rivers; nodes are junctions of at least two arcs and stand for locations of lakes or reservoirs, confluences of river branches, water withdrawal or wastewater discharge points, etc. Quantitative features of water resources, water users, reservoirs, and other water works are expressed as constraints that the lower and upper limits assigned to arcs must not be violated. Options of WMS operation, i.e. water retention in or discharge from the reservoirs, or diversion of water from one part of the WMS to another in order to meet water demands, as well as the water users' economic benefit or loss related to the degree to which demand is met, are the defining elements of the objective function and are conventionally expressed by means of costs attached to the arcs. The problem of optimizing WMS operation is formulated as a network flow problem as follows: find the flow that minimizes the cost over the whole network while meeting the continuity constraints at nodes and the lower and upper flow limits on arcs. Conversion of the WMS into the arcs-nodes network and the adequate choice of costs and limits on arcs are steps of a unitary process and depend on the goal of the respective model.
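    The arcs-nodes formulation above can be made concrete with a small sketch. The code below is a minimal successive-shortest-path min-cost flow solver in pure Python — not the out-of-kilter algorithm itself — and the tiny water network, with its capacities and costs, is invented purely for illustration:

```python
class MinCostFlow:
    """Minimum cost flow via successive shortest augmenting paths (Bellman-Ford)."""

    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # node -> incident edge ids
        self.to, self.cap, self.cost = [], [], []

    def add_edge(self, u, v, cap, cost):
        # forward edge and its residual reverse edge are stored as a pair,
        # so the partner of edge e is always e ^ 1
        self.adj[u].append(len(self.to)); self.to.append(v)
        self.cap.append(cap); self.cost.append(cost)
        self.adj[v].append(len(self.to)); self.to.append(u)
        self.cap.append(0); self.cost.append(-cost)

    def flow(self, s, t):
        total_flow = total_cost = 0
        while True:
            dist = [float("inf")] * self.n
            dist[s], parent = 0, [-1] * self.n
            updated = True
            while updated:  # Bellman-Ford on the residual graph
                updated = False
                for u in range(self.n):
                    if dist[u] == float("inf"):
                        continue
                    for e in self.adj[u]:
                        v = self.to[e]
                        if self.cap[e] > 0 and dist[u] + self.cost[e] < dist[v]:
                            dist[v] = dist[u] + self.cost[e]
                            parent[v] = e
                            updated = True
            if dist[t] == float("inf"):
                return total_flow, total_cost
            push, v = float("inf"), t
            while v != s:  # bottleneck capacity along the cheapest path
                e = parent[v]; push = min(push, self.cap[e]); v = self.to[e ^ 1]
            v = t
            while v != s:  # augment along that path
                e = parent[v]; self.cap[e] -= push; self.cap[e ^ 1] += push
                v = self.to[e ^ 1]
            total_flow += push
            total_cost += push * dist[t]


# toy network: 0 = river source, 1 = reservoir, 2/3 = water users, 4 = sink
net = MinCostFlow(5)
net.add_edge(0, 1, 10, 0)  # inflow to reservoir
net.add_edge(1, 2, 6, 1)   # delivery arc to user A
net.add_edge(1, 3, 6, 2)   # delivery arc to user B
net.add_edge(2, 4, 5, 0)   # user A demand
net.add_edge(3, 4, 4, 0)   # user B demand
print(net.flow(0, 4))      # -> (9, 13): 9 units delivered at total cost 13
```

    In the WMS setting, the arcs from node 1 play the role of delivery routes to two water users, with costs encoding operational preferences; the lower/upper limits on arcs described in the abstract correspond to the capacity bounds here.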

  14. Parallel Computation of Unsteady Flows on a Network of Workstations

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Parallel computation of unsteady flows requires significant computational resources. A network of workstations is an attractive solution, allowing large problems to be treated at reasonable cost. This approach requires solving several problems: 1) partitioning and distributing the problem over a network of workstations, 2) providing efficient communication tools, and 3) managing the system efficiently for a given problem. There is also, of course, the question of how efficient any given numerical algorithm is on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code, both two- and three-dimensional problems were studied, as were both steady and unsteady problems. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and communicate efficiently at each node, and 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.

  15. Dynamic discrete tomography

    NASA Astrophysics Data System (ADS)

    Alpers, Andreas; Gritzmann, Peter

    2018-03-01

    We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time, the current positions of the points in space are accessible only through a small number of their x-rays. This particle tracking problem, with applications, e.g., in plasma physics, is the basic problem of dynamic discrete tomography. We introduce and analyze several different algorithmic models. In particular, we determine the computational complexity of the problem (and several of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.

  16. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Afshar Nadjafi, Behrouz; Shadrokh, Shahram

    This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch-and-bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch-and-bound tree. Finally, some test problems are solved and computational results are reported.

  17. Integrated risk/cost planning models for the US Air Traffic system

    NASA Technical Reports Server (NTRS)

    Mulvey, J. M.; Zenios, S. A.

    1985-01-01

    A prototype network planning model for the U.S. Air Traffic control system is described. The model encompasses the dual objectives of managing collision risks and transportation costs where traffic flows can be related to these objectives. The underlying structure is a network graph with nonseparable convex costs; the model is solved efficiently by capitalizing on its intrinsic characteristics. Two specialized algorithms for solving the resulting problems are described: (1) truncated Newton, and (2) simplicial decomposition. The feasibility of the approach is demonstrated using data collected from a control center in the Midwest. Computational results with different computer systems are presented, including a vector supercomputer (CRAY-XMP). The risk/cost model has two primary uses: (1) as a strategic planning tool using aggregate flight information, and (2) as an integrated operational system for forecasting congestion and monitoring (controlling) flow throughout the U.S. In the latter case, access to a supercomputer is required due to the model's enormous size.

  18. The use of hydrogen for aircraft propulsion in view of the fuel crisis.

    NASA Technical Reports Server (NTRS)

    Weiss, S.

    1973-01-01

    In view of projected decreases in available petroleum fuels, interest has been generated in exploiting the potential of liquid hydrogen (LH2) as an aircraft fuel. Cost studies of LH2 production show it to be more expensive than presently used fuels. Regardless of cost considerations, LH2 is viewed as an attractive aircraft fuel because of the potential performance benefits it offers. Accompanying these benefits, however, are many new problems associated with aircraft design and operations; for example, problems related to fuel system design and the handling of LH2 during ground servicing. Some of the factors influencing LH2 fuel tank design, pumping, heat exchange, and flow regulation are discussed.

  19. Further theoretical studies of modified cyclone separator as a diesel soot particulate emission arrester.

    PubMed

    Mukhopadhyay, N; Bose, P K

    2009-10-01

    Soot particulate emission reduction from diesel engines is one of the most pressing problems associated with exhaust pollution. Diesel particulate filters (DPF) hold out the prospect of substantially reducing regulated particulate emissions, but reliable regeneration of the filters remains a difficult hurdle to overcome. Many of the solutions proposed to date suffer from design complexity, cost, regeneration problems, and energy demands. This study presents a computer-aided theoretical analysis for controlling diesel soot particulate emission with a cyclone separator, a non-contact particulate removal system, considering the outer vortex flow, the inner vortex flow, and a packed ceramic fiber filter at the end of the vortex finder tube. The cyclone separator, with its low initial cost and simple construction, produces low back pressure and reasonably high collection efficiency with reduced regeneration problems. The cyclone separator is modified by placing a continuous packed ceramic fiber filter at the end of the vortex finder tube. In this work, a grade efficiency model of diesel soot particulate emission is proposed considering the outer vortex, the inner vortex, and the continuous packed ceramic fiber filter. A pressure drop model is also proposed considering the effect of the ceramic fiber filter. The proposed model gives reasonably good collection efficiency within the permissible pressure drop limit of diesel engine operation. A theoretical approach is presented for calculating the cut size diameter considering the effect of the Cunningham molecular slip correction factor. The results show good agreement with existing cyclone and DPF flow characteristics.
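    As a rough illustration of the kind of cut-size calculation the abstract mentions, the sketch below combines the classical Lapple cut-diameter formula with the standard Cunningham slip correction, iterated because the correction itself depends on the diameter. The gas properties and cyclone dimensions are invented placeholders, not values from the paper:

```python
import math

def cunningham(d, mean_free_path=6.5e-8):
    """Cunningham slip correction for a particle of diameter d [m]."""
    kn = 2.0 * mean_free_path / d  # Knudsen number
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def lapple_cut_diameter(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g, iters=25):
    """Lapple 50% cut diameter [m], iterated to fold in the slip correction."""
    base = 9.0 * mu * inlet_width / (2.0 * math.pi * n_turns * v_inlet * (rho_p - rho_g))
    d = math.sqrt(base)  # classical estimate, no slip correction
    for _ in range(iters):
        d = math.sqrt(base / cunningham(d))  # fixed-point update
    return d

# illustrative values only: exhaust-like gas viscosity, soot-like particle density
d50 = lapple_cut_diameter(mu=1.8e-5, inlet_width=0.02, n_turns=5,
                          v_inlet=15.0, rho_p=1200.0, rho_g=1.2)
print(f"cut diameter ~ {d50 * 1e6:.2f} micrometres")
```

    For micrometre-scale soot the slip correction lowers the predicted cut size only slightly, but for sub-0.1 μm particles it exceeds a factor of two, which is why the abstract singles it out.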

  20. Intercell scheduling: A negotiation approach using multi-agent coalitions

    NASA Astrophysics Data System (ADS)

    Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde

    2016-10-01

    Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.

  1. GPU computing of compressible flow problems by a meshless method with space-filling curves

    NASA Astrophysics Data System (ADS)

    Ma, Z. H.; Wang, H.; Pu, S. H.

    2014-04-01

    A graphic processing unit (GPU) implementation of a meshless method for solving compressible flow problems is presented in this paper. A least-squares fit is used to discretize the spatial derivatives of the Euler equations, and an upwind scheme is applied to estimate the flux terms. The compute unified device architecture (CUDA) C programming model is employed to efficiently and flexibly port the meshless solver from CPU to GPU. To address the data locality of randomly distributed points, space-filling curves are adopted to renumber the points in order to improve memory performance. Detailed evaluations are first carried out to assess the accuracy and conservation property of the underlying numerical method. The GPU-accelerated flow solver is then used to solve external steady flows over aerodynamic configurations. Representative results are validated through extensive comparisons with experimental data, finite volume results, or other available reference solutions. Performance analysis reveals that the running time of simulations is significantly reduced, with impressive (more than an order of magnitude) speedups achieved.
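    Space-filling-curve renumbering of the kind described can be sketched with a Morton (Z-order) key; the abstract does not specify which curve is used, so treat the quantization and key construction below as a generic illustration rather than the paper's method:

```python
def interleave_bits(x, bits):
    """Spread the low `bits` bits of x so they occupy even bit positions."""
    r = 0
    for i in range(bits):
        r |= ((x >> i) & 1) << (2 * i)
    return r

def morton_key(ix, iy, bits):
    """Z-order (Morton) key of an integer grid cell (ix, iy)."""
    return interleave_bits(ix, bits) | (interleave_bits(iy, bits) << 1)

def renumber(points, bits=10):
    """Return point indices sorted along a Z-order curve.

    Coordinates are quantized onto a [0, 2**bits) grid first, so spatially
    nearby points receive nearby indices and hence nearby memory locations.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    lo_x, hi_x, lo_y, hi_y = min(xs), max(xs), min(ys), max(ys)
    scale = (1 << bits) - 1

    def quant(v, lo, hi):
        return int((v - lo) / (hi - lo) * scale) if hi > lo else 0

    return sorted(range(len(points)),
                  key=lambda i: morton_key(quant(xs[i], lo_x, hi_x),
                                           quant(ys[i], lo_y, hi_y), bits))

pts = [(0.9, 0.9), (0.0, 0.0), (0.9, 0.0), (0.0, 0.9)]
print(renumber(pts))  # -> [1, 2, 3, 0]
```

    Sorting the point cloud by such a key before uploading to the GPU improves cache-line reuse when neighboring points are gathered during the least-squares stencil evaluation.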

  2. Evolutionary and social consequences of introgression of nontransgenic herbicide resistance from rice to weedy rice in Brazil.

    PubMed

    Merotto, Aldo; Goulart, Ives C G R; Nunes, Anderson L; Kalsing, Augusto; Markus, Catarine; Menezes, Valmir G; Wander, Alcido E

    2016-08-01

    Several studies have expressed concerns about the effects of gene flow from transgenic herbicide-resistant crops to their wild relatives, but no major problems have been observed. This review describes a case study in which what has been feared for transgenics regarding gene flow has actually changed biodiversity and people's lives. Nontransgenic imidazolinone-resistant rice (IMI-rice) cultivars increased rice grain yield by 50% in southern Brazil. This increase was beneficial for the farmers' quality of life and also improved the regional economy. However, weedy rice resistant to imidazolinone herbicides started to evolve three years after the first use of IMI-rice cultivars. Population genetic studies indicate that the herbicide-resistant weedy rice originated mainly from gene flow from resistant cultivars and was distributed by seed migration. The problems related to herbicide-resistant weedy rice increased the production costs of rice and forced farmers to sell or rent their land. Gene flow from cultivated rice to weedy rice has proven to be a large agricultural, economic, and social constraint on the use of herbicide-resistant technologies in rice. This problem must be taken into account in the development of new transgenic or nontransgenic rice technologies.

  3. The Average Network Flow Problem: Shortest Path and Minimum Cost Flow Formulations, Algorithms, Heuristics, and Complexity

    DTIC Science & Technology

    2012-09-13

    Jordan, Jeremy, Captain, USAF. AFIT/DS/ENS/12-09, Department of the Air Force, Air University, Air Force Institute of Technology, 2950 Hobson Way, Wright-Patterson AFB, Ohio, 45433, USA, +1 937-255-3636, jeremy.jordan@afit.edu, jeffery.weir@afit.edu, doral.sandlin@afit.edu

  4. Commentary: Environmental nanophotonics and energy

    NASA Astrophysics Data System (ADS)

    Smith, Geoff B.

    2011-01-01

    The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional, and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large-scale manufacture and deployment are discussed, including how safety issues in some nanosystems will be managed and avoided, a process long established in nature.

  5. IMPROVED CORROSION RESISTANCE OF ALUMINA REFRACTORIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John P. Hurley; Patty L. Kleven

    2001-09-30

    The initial objective of this project was to do a literature search to define the problems of refractory selection in the metals and glass industries. The problems fall into three categories: Economic--What do the major problems cost the industries financially? Operational--How do the major problems affect production efficiency and impact the environment? and Scientific--What are the chemical and physical mechanisms that cause the problems to occur? This report presents a summary of these problems. It was used to determine the areas in which the EERC can provide the most assistance through bench-scale and laboratory testing. The final objective of this project was to design and build a bench-scale high-temperature controlled atmosphere dynamic corrosion application furnace (CADCAF). The furnace will be used to evaluate refractory test samples in the presence of flowing corrodents for extended periods, at temperatures up to 1600 C under controlled atmospheres. Corrodents will include molten slag, steel, and glass. This test should prove useful for the glass and steel industries when faced with the decision of choosing the best refractory for flowing corrodent conditions.

  6. A reduced-dimensional model for near-wall transport in cardiovascular flows

    PubMed Central

    Hansen, Kirk B.

    2015-01-01

    Near-wall mass transport plays an important role in many cardiovascular processes, including the initiation of atherosclerosis, endothelial cell vasoregulation, and thrombogenesis. These problems are characterized by large Péclet and Schmidt numbers as well as a wide range of spatial and temporal scales, all of which impose computational difficulties. In this work, we develop an analytical relationship between the flow field and near-wall mass transport for high-Schmidt-number flows. This allows for the development of a wall-shear-stress-driven transport equation that lies on a codimension-one vessel-wall surface, significantly reducing computational cost in solving the transport problem. Separate versions of this equation are developed for the reaction-rate-limited and transport-limited cases, and numerical results in an idealized abdominal aortic aneurysm are compared to those obtained by solving the full transport equations over the entire domain. The reaction-rate-limited model matches the expected results well. The transport-limited model is accurate in the developed flow regions, but overpredicts wall flux at entry regions and reattachment points in the flow. PMID:26298313

  7. Controlling groundwater pumping online.

    PubMed

    Zekri, Slim

    2009-08-01

    Groundwater over-pumping is a major problem in several countries around the globe. Since controlling groundwater pumping through water flow meters is hardly feasible, the surrogate is to control electricity usage. This paper presents a framework to restrict groundwater pumping by implementing an annual individual electricity quota without interfering with the electricity pricing policy. The system could be monitored online through prepaid electricity meters. This provides low transaction costs for individual monitoring of users, compared with the prohibitive costs of water flow metering and monitoring. The public groundwater managers' intervention is thus required to determine the water and electricity quotas and to monitor electricity use online. The proposed framework opens the door to the establishment of formal groundwater markets among users at very low transaction costs. A cost-benefit analysis over a 25-year period is used to evaluate the cost of non-action and compare it to the prepaid electricity quota framework in the Batinah coastal area of Oman. Results show that the damage cost to the community, if no active policy is implemented, amounts to −$288 million. On the other hand, the implementation of a prepaid electricity quota with an online management system would result in a net present benefit of $199 million.
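    The 25-year cost-benefit comparison described rests on a standard net-present-value computation. The discount rate and cash-flow figures below are invented for illustration and are not the study's data:

```python
def npv(rate, cashflows):
    """Net present value of annual cash flows, with cashflows[0] at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# hypothetical stream (millions): an up-front metering investment followed by
# constant annual benefits from avoided aquifer damage over 25 years
stream = [-50.0] + [12.0] * 25
print(round(npv(0.05, stream), 2))  # NPV at a 5% discount rate
```

    A positive NPV for the quota framework versus a negative NPV for non-action is exactly the form of comparison the abstract reports ($199 million versus −$288 million).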

  8. Case Study on Optimal Routing in Logistics Network by Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoguang; Lin, Lin; Gen, Mitsuo; Shiota, Mitsushige

    Recently, research on logistics has attracted more and more attention. One of the important issues in a logistics system is to find optimal delivery routes with the least cost for product delivery. Numerous models have been developed for that reason. However, due to the diversity and complexity of practical problems, the existing models are usually not satisfactory for finding the solution efficiently and conveniently. In this paper, we treat a real-world logistics case with a company named ABC Co., Ltd., in Kitakyusyu, Japan. Firstly, based on the nature of this conveyance routing problem, we formulate it as a minimum cost flow (MCF) model, an extension of the transportation problem (TP) and the fixed charge transportation problem (fcTP). Due to the complexity of the fcTP, we propose a priority-based genetic algorithm (pGA) approach to find the most acceptable solution to this problem. In this pGA approach, a two-stage path decoding method is adopted to develop delivery paths from a chromosome. We apply the pGA approach to this problem, compare our results with the current logistics network situation, and calculate the improvement in logistics cost to help management make decisions. Finally, in order to check the effectiveness of the proposed method, the results acquired are compared with those obtained from two optimization packages, LINDO and CPLEX.

  9. Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1993-01-01

    Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures are presented and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.

  11. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos expansions (PCE) to approximate the original system, reducing the sampling error. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF becomes even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and the active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical unsaturated flow cases. It is shown that RAPCKF achieves better accuracy than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.
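    For orientation, the baseline EnKF analysis step against which methods like RAPCKF are compared can be sketched for a scalar state. This stochastic-EnKF update is a textbook formulation, not code from the paper, and the prior ensemble values are invented:

```python
import random

def enkf_update(ensemble, obs, obs_err_std):
    """Stochastic EnKF analysis step for a scalar state observed directly (H = 1)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast spread
    gain = var / (var + obs_err_std ** 2)                   # Kalman gain
    # each member assimilates its own perturbed copy of the observation
    return [x + gain * (obs + random.gauss(0.0, obs_err_std) - x)
            for x in ensemble]

prior = [0.1, 0.3, 0.2, 0.5, 0.4]  # forecast ensemble of, e.g., soil moisture
posterior = enkf_update(prior, 0.35, 0.05)
```

    With a small observation error the posterior mean is pulled close to the observation; PCKF-type methods replace the Monte Carlo ensemble with a polynomial chaos surrogate but keep the same update structure.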

  12. Analysis and evaluation of an integrated laminar flow control propulsion system

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Dewitt, Kenneth J.

    1993-01-01

    Reduction of drag has been a major goal of the aircraft industry, as no other single quantity influences the operating costs of transport aircraft more than aerodynamic drag. It has been estimated that even a modest reduction of frictional drag could reduce fuel costs by 2 to 5 percent. Current research on boundary layer drag reduction deals with various approaches to reducing turbulent skin friction drag as a means of improving aircraft performance. One technique in this category is laminar flow control, in which extensive regions of laminar flow are maintained over aircraft surfaces by delaying transition to turbulence through the ingestion of boundary layer air. While problems of laminar flow control have been studied in some detail, the prospect of improving the propulsion system of an aircraft by the use of ingested boundary layer air has received very little attention. An initial study aimed at reducing propulsion system requirements by utilizing the kinetic energy of boundary layer air was performed in the mid-1970s at LeRC. This study, which was based on ingesting the boundary layer air at a single location, did not yield any significant overall propulsion benefits; therefore, the concept was not pursued further. However, it has since been proposed that if the boundary layer air were ingested at various locations on the aircraft surface instead of at just one site, an improvement in the propulsion system might be realized. The present report provides a review of laminar flow control by suction and focuses on the problems of reducing skin friction drag by maintaining extensive regions of laminar flow over the aircraft surfaces. In addition, it includes an evaluation of an aircraft propulsion system that is augmented by ingested boundary layer air.

  13. Factors influencing analysis of complex cognitive tasks: a framework and example from industrial process control.

    PubMed

    Prietula, M J; Feltovich, P J; Marchak, F

    2000-01-01

    We propose that considering four categories of task factors can facilitate knowledge elicitation efforts in the analysis of complex cognitive tasks: materials, strategies, knowledge characteristics, and goals. A study was conducted to examine the effects of altering aspects of two of these task categories on problem-solving behavior across skill levels: materials and goals. Two versions of an applied engineering problem were presented to expert, intermediate, and novice participants. Participants were to minimize the cost of running a steam generation facility by adjusting steam generation levels and flows. One version was cast in the form of a dynamic, computer-based simulation that provided immediate feedback on flows, costs, and constraint violations, thus incorporating key variable dynamics of the problem context. The other version was cast as a static computer-based model, with no dynamic components, cost feedback, or constraint checking. Experts performed better than the other groups across material conditions, and, when required, the presentation of the goal assisted the experts more than the other groups. The static group generated richer protocols than the dynamic group, but the dynamic group solved the problem in significantly less time. Little effect of feedback was found for intermediates, and none for novices. We conclude that demonstrating differences in performance in this task requires different materials than explicating underlying knowledge that leads to performance. We also conclude that substantial knowledge is required to exploit the information yielded by the dynamic form of the task or the explicit solution goal. This simple model can help to identify the contextual factors that influence elicitation and specification of knowledge, which is essential in the engineering of joint cognitive systems.

  14. Take-Home Experiments in Undergraduate Fluid Mechanics Education

    NASA Astrophysics Data System (ADS)

    Cimbala, John

    2007-11-01

    Hands-on take-home experiments, assigned as homework, are useful as supplements to traditional in-class demonstrations and laboratories. Students borrow the equipment from the department's equipment room, and perform the experiment either at home or in the student lounge or student shop work area. Advantages include: (1) easy implementation, especially for large classes, (2) low cost and easy duplication of multiple units, (3) no loss of lecture time since the take-home experiment is self-contained with all necessary instructions, and (4) negligible increase in student or teaching assistant work load since the experiment is assigned as a homework problem in place of a traditional pen and paper problem. As an example, a pump flow take-home experiment was developed, implemented, and assessed in our introductory junior-level fluid mechanics course at Penn State. The experimental apparatus consists of a bucket, tape measure, submersible aquarium pump, tubing, measuring cup, and extension cord. We put together twenty sets at a total cost of less than 20 dollars per set. Students connect the tube to the pump outlet, submerge the pump in water, and measure the volume flow rate produced at various outflow elevations. They record and plot volume flow rate as a function of outlet elevation, and compare with predictions based on the manufacturer's pump performance curve (head versus volume flow rate) and flow losses. The homework assignment includes an online pre-test and post-test to assess the change in students' understanding of the principles of pump performance. The results of the assessment support a significant learning gain following the completion of the take-home experiment.
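    The comparison the students make, between measured flow and a prediction from the manufacturer's pump curve plus losses, reduces to balancing pump head against static lift and friction. The quadratic pump-curve coefficients below are made-up placeholders, not the actual aquarium pump's data:

```python
import math

def predicted_flow(shutoff_head, curve_coeff, lift, loss_coeff):
    """Flow rate where pump head H0 - a*Q**2 equals system head z + k*Q**2.

    shutoff_head: H0, pump head at zero flow [m]
    curve_coeff:  a, quadratic droop of the pump curve [m per (L/min)**2]
    lift:         z, outlet elevation above the water surface [m]
    loss_coeff:   k, lumped friction-loss coefficient [m per (L/min)**2]
    """
    if lift >= shutoff_head:
        return 0.0  # pump cannot lift water to this elevation
    # H0 - a*Q^2 = z + k*Q^2  =>  Q = sqrt((H0 - z) / (a + k))
    return math.sqrt((shutoff_head - lift) / (curve_coeff + loss_coeff))

# predicted delivery at a few outlet elevations for a hypothetical small pump
for z in (0.0, 0.5, 1.0):
    print(z, predicted_flow(1.2, 0.02, z, 0.01))
```

    Plotting these predictions against the measured volume flow rates at each outlet elevation reproduces the comparison the homework assignment asks for.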

  15. Acoustic emission data assisted process monitoring.

    PubMed

    Yen, Gary G; Lu, Haiming

    2002-07-01

    Gas-liquid two-phase flows are widely used in the chemical industry. Accurate measurement of flow parameters, such as the flow regime, is key to operating efficiency. Due to the interface complexity of a two-phase flow, it is very difficult to monitor and distinguish flow regimes online and in real time. In this paper we propose a cost-effective and computation-efficient acoustic emission (AE) detection system, combined with artificial neural network technology, to recognize four major patterns in an air-water vertical two-phase flow column. Several crucial AE parameters are explored and validated, and we found that the density of acoustic emission events and the ring-down counts are two excellent indicators for the flow pattern recognition problem. Instead of the traditional Fair map, a hit-count map is developed, and a multilayer perceptron neural network is designed as a decision maker to describe an approximate transmission stage of a given two-phase flow system.

  16. An exact algorithm for optimal MAE stack filter design.

    PubMed

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.
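For readers unfamiliar with the minimum cost network flow problem that arises as the dual here, a minimal successive-shortest-path solver on a toy network (not the authors' specialized network Simplex with column generation) can be sketched as:

```python
# Minimal successive-shortest-path min-cost-flow solver. Bellman-Ford is used
# for shortest paths, so negative residual arc costs are handled correctly.
def min_cost_flow(n, arcs, s, t, flow_target):
    """arcs: list of (u, v, capacity, cost). Returns the minimum total cost of
    sending flow_target units from s to t, or None if infeasible."""
    graph = []                      # each entry: [to, cap, cost, rev_index]
    adj = [[] for _ in range(n)]
    for u, v, cap, cost in arcs:    # forward arc plus zero-capacity reverse twin
        adj[u].append(len(graph)); graph.append([v, cap, cost, len(graph) + 1])
        adj[v].append(len(graph)); graph.append([u, 0, -cost, len(graph) - 1])
    total = 0
    while flow_target > 0:
        dist = [float("inf")] * n
        dist[s] = 0
        parent = [-1] * n           # arc index used to reach each node
        for _ in range(n - 1):      # Bellman-Ford over residual arcs
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for ei in adj[u]:
                    v, cap, cost, _ = graph[ei]
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = ei
        if dist[t] == float("inf"):
            return None             # cannot route the remaining flow
        push, v = flow_target, t    # bottleneck capacity along the path
        while v != s:
            ei = parent[v]
            push = min(push, graph[ei][1])
            v = graph[graph[ei][3]][0]
        v = t                       # augment along the path
        while v != s:
            ei = parent[v]
            graph[ei][1] -= push
            graph[graph[ei][3]][1] += push
            v = graph[graph[ei][3]][0]
        total += push * dist[t]
        flow_target -= push
    return total

arcs = [(0, 1, 2, 1), (0, 2, 1, 2), (1, 2, 1, 1), (1, 3, 1, 3), (2, 3, 2, 1)]
print(min_cost_flow(4, arcs, 0, 3, 2))  # → 6
```

Each iteration finds a cheapest augmenting path in the residual graph and pushes as much flow as the path permits; the specialized algorithms in papers like this one exploit problem structure to do far better than this generic scheme.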

  17. Dynamic Flow Management Problems in Air Transportation

    NASA Technical Reports Server (NTRS)

    Patterson, Sarah Stock

    1997-01-01

    In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustments while airborne, taking the capacitated airspace into account. This is called the Air Traffic Flow Management Problem (TFMP). We address its complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large-scale linear programming problems, so the computation times are reasonably small for large-scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. To address the high dimensionality, we present an aggregate model in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer programming formulation, the solution of which generates feasible and near-optimal routes for individual flights. The algorithm, termed the Lagrangian Generation Algorithm, is used to solve practical problems in the southwestern portion of the United States, in which the solutions are within 1% of the corresponding lower bounds.

  18. A biologically inspired network design model.

    PubMed

    Zhang, Xiaoge; Adamatzky, Andrew; Chan, Felix T S; Deng, Yong; Yang, Hai; Yang, Xin-She; Tsompanas, Michail-Antisthenis I; Sirakoulis, Georgios Ch; Mahadevan, Sankaran

    2015-06-04

    A network design problem is to select a subset of links in a transport network that satisfies passenger or cargo transportation demands while minimizing the overall transportation costs. We propose a mathematical model of the foraging behaviour of the slime mould P. polycephalum to solve the network design problem and construct optimal transport networks. In our algorithm, the traffic flow between any two cities is estimated using a gravity model. The flow is then imitated by the slime mould model. The model converges to a steady state, which represents a solution of the problem. We validate our approach on examples of major transport networks in Mexico and China. By comparing networks developed with our approach against man-made highways, networks developed by the slime mould, and a cellular automata model inspired by slime mould, we demonstrate the flexibility and efficiency of our approach.
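The gravity-model step can be sketched in a few lines; the city coordinates, populations, and distance exponent below are illustrative assumptions, not data from the paper:

```python
# Gravity-model estimate of inter-city flows: T_ij proportional to
# P_i * P_j / d_ij**beta. All city data here are invented for illustration.
import math

cities = {  # name: (x, y, population in millions)
    "A": (0.0, 0.0, 5.0),
    "B": (3.0, 4.0, 2.0),
    "C": (6.0, 0.0, 1.0),
}

def gravity_flow(i, j, beta=2.0, k=1.0):
    xi, yi, pi = cities[i]
    xj, yj, pj = cities[j]
    d = math.hypot(xi - xj, yi - yj)
    return k * pi * pj / d ** beta

for a, b in (("A", "B"), ("A", "C"), ("B", "C")):
    print(f"{a}-{b}: {gravity_flow(a, b):.3f}")
```

These pairwise demand estimates are what the slime-mould dynamics would then be asked to carry, reinforcing heavily used links and pruning idle ones.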

  20. Computational aerodynamics requirements: The future role of the computer and the needs of the aerospace industry

    NASA Technical Reports Server (NTRS)

    Rubbert, P. E.

    1978-01-01

    The commercial airplane builder's viewpoint on the important issues involved in the development of improved computational aerodynamics tools, such as powerful computers optimized for fluid flow problems, is presented. The primary user of computational aerodynamics in a commercial aircraft company is the design engineer, who is concerned with solving practical engineering problems. From this viewpoint, the development of program interfaces and pre- and post-processing capability for new computational methods is just as important as the algorithms and machine architecture. As more and more details of the entire flow field are computed, the visibility of the output data becomes a major problem, which is then doubled when a design capability is added. The user must be able to see, understand, and interpret the calculated results. Enormous costs are expended because of the need to work with programs having only primitive user interfaces.

  1. Skin friction enhancement in a model problem of undulatory swimming

    NASA Astrophysics Data System (ADS)

    Ehrenstein, Uwe; Eloy, Christophe

    2013-10-01

    To calculate the energy costs of swimming, it is crucial to evaluate the drag force originating from skin friction. In this paper we examine the assumption, known as the 'Bone-Lighthill boundary-layer thinning hypothesis', that undulatory swimming motions induce a drag increase because of the compression of the boundary layer. By analytically studying the incoming flow along a flat plate moving at a normal velocity, as a limiting case of a yawed cylinder in uniform flow under the laminar boundary-layer assumption, we demonstrate that the longitudinal drag scales as the square root of the normal velocity component. This analytical prediction is interpreted in the light of a three-dimensional numerical simulation result for a plate of finite length and width. An analogous two-dimensional Navier-Stokes problem, obtained by artificially accelerating the flow in a channel of finite height, is proposed and solved numerically, showing the robustness of the analytical results. Solving the problem for an undulatory plate motion similar to fish swimming, we find a drag enhancement on the order of 20%.

  2. Computational reduction strategies for the detection of steady bifurcations in incompressible fluid-dynamics: Applications to Coanda effect in cardiology

    NASA Astrophysics Data System (ADS)

    Pitton, Giuseppe; Quaini, Annalisa; Rozza, Gianluigi

    2017-09-01

    We focus on reducing the computational costs associated with the hydrodynamic stability of solutions of the incompressible Navier-Stokes equations for a Newtonian and viscous fluid in contraction-expansion channels. In particular, we are interested in studying steady bifurcations, occurring when non-unique stable solutions appear as physical and/or geometric control parameters are varied. The formulation of the stability problem requires solving an eigenvalue problem for a partial differential operator. An alternative to this approach is the direct simulation of the flow to characterize the asymptotic behavior of the solution. Both approaches can be extremely expensive in terms of computational time. We propose to apply Reduced Order Modeling (ROM) techniques to reduce the demanding computational costs associated with the detection of a type of steady bifurcations in fluid dynamics. The application that motivated the present study is the onset of asymmetries (i.e., symmetry breaking bifurcation) in blood flow through a regurgitant mitral valve, depending on the Reynolds number and the regurgitant mitral valve orifice shape.

  3. A novel methodology for determining low-cost fine particulate matter street sweeping routes.

    PubMed

    Blazquez, Carola A; Beghelli, Alejandra; Meneses, Veronica P

    2012-02-01

    This paper addresses the problem of designing low-cost PM10 (particulate matter with aerodynamic diameter < 10 µm) street sweeping routes. To do so, only a subset of the streets of the urban area to be swept is selected for sweeping, based on their PM10 emission factor values. Subsequently, a low-cost route that visits each street in the set is computed. Unlike related waste collection problems, in which streets must be visited once (the Chinese or Rural Postman Problems), in this case the sweeping vehicle route must visit each selected street exactly as many times as its number of street sides, since the vehicle can sweep only one street side at a time. Additionally, the route must comply with traffic flow and turn constraints. A novel transformation of the original arc routing problem into a node routing problem is proposed in this paper. This is accomplished by building a graph that represents the area to sweep in such a way that the problem can be solved by applying any known solution to the Traveling Salesman Problem (TSP). As an illustration, the proposed method was applied to the northeast area of the Municipality of Santiago (Chile). Results show that the proposed methodology achieved up to 37% savings in kilometers traveled by the sweeping vehicle when compared to the solution obtained by solving the TSP with Geographic Information Systems (GIS)-aware tools.
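After the arc-to-node transformation, any TSP solver applies. A brute-force sketch over a toy symmetric distance matrix (feasible only for tiny instances; real street networks need heuristics or GIS tooling) looks like this:

```python
# Exhaustive Traveling Salesman solver on a toy node graph; illustrates only
# the TSP step that follows the paper's arc-to-node transformation.
from itertools import permutations

def tsp(dist):
    """dist: symmetric distance matrix. Returns (best_cost, best_tour),
    where the tour starts and ends at node 0."""
    n = len(dist)
    best = (float("inf"), None)
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, (cost, tour))
    return best

D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
cost, tour = tsp(D)
print(cost, tour)  # → 23 (0, 1, 3, 2, 0)
```

Enumeration costs (n-1)! tours, which is why practical sweeping-route work relies on TSP heuristics rather than this exact search.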

  4. WEEE flow and mitigating measures in China.

    PubMed

    Yang, Jianxin; Lu, Bin; Xu, Cheng

    2008-01-01

    The research presented in this paper shows that Waste Electrical and Electronic Equipment (WEEE) issues associated with home appliances, such as TV sets, refrigerators, washing machines, air conditioners, and personal computers, are linked in the WEEE flow and recycling systems and are important to matters of public policy and regulation. In this paper, the sources and generation of WEEE in China are identified, and WEEE volumes are calculated. The results show that recycling capacity must increase if the rising quantity of domestic WEEE is to be handled properly. Simultaneously, suitable WEEE treatment will generate large volumes of secondary resources. Environmental problems caused by the existing recycling processes have been investigated in a case study. Problems mainly stem from open burning of plastic-metal parts and from precious metals leaching techniques that utilize acids. The existing WEEE flow at the national level was investigated and described. It became obvious that many obsolete items are stored in homes and offices and have not yet entered the recycling system. The reuse of used appliances has become a high priority for WEEE collectors and dealers because reuse generates higher economic profits than simple material recovery. The results of a cost analysis of WEEE flow show that management and collection costs significantly influence current WEEE management. Heated discussions are ongoing in political and administrative bodies as to whether extended producer responsibility policies are promoting WEEE recycling and management. This paper also discusses future challenges and strategies for WEEE management in China.

  5. Model reduction of the numerical analysis of Low Impact Developments techniques

    NASA Astrophysics Data System (ADS)

    Brunetti, Giuseppe; Šimůnek, Jirka; Wöhling, Thomas; Piro, Patrizia

    2017-04-01

    Mechanistic models have proven to be accurate and reliable tools for the numerical analysis of the hydrological behavior of Low Impact Development (LID) techniques. However, their widespread adoption is limited by their complexity and computational cost. Recent studies have tried to address this issue by investigating the application of new techniques, such as surrogate-based modeling. However, current results are still limited and fragmented. One such approach, the Model Order Reduction (MOR) technique, can be a valuable tool for reducing the computational complexity of a numerical problem by computing an approximation of the original model. While this technique has been extensively used in water-related problems, no studies have evaluated its use in LID modeling. Thus, the main aim of this study is to apply the MOR technique to develop a reduced order model (ROM) for the numerical analysis of the hydrologic behavior of LIDs, in particular green roofs. The model should correctly reproduce all the hydrological processes of a green roof while reducing the computational cost. The proposed model decouples the subsurface water dynamics of a green roof into (a) one-dimensional (1D) vertical flow through the green roof itself and (b) one-dimensional saturated lateral flow along the impervious rooftop. The green roof is horizontally discretized into N elements. Each element represents a vertical domain, which can have different properties or boundary conditions. The 1D Richards equation is used to simulate flow in the substrate and drainage layers. Simulated outflow from the vertical domain is used as a recharge term for saturated lateral flow, which is described using the kinematic wave approximation of the Boussinesq equation. The proposed model has been compared with the mechanistic model HYDRUS-2D, which numerically solves the Richards equation for the whole domain. The HYDRUS-1D code has been used for the description of vertical flow, while a finite-volume scheme has been adopted for lateral flow. Two scenarios, involving flat and steep green roofs, were analyzed. Results confirmed the accuracy of the reduced order model, which was able to reproduce both subsurface outflow and the moisture distribution in the green roof while significantly reducing the computational cost.
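The lateral-flow component can be sketched with an explicit upwind finite-volume scheme for the kinematic wave equation dh/dt + c dh/dx = r. The geometry, celerity c, and recharge rate r below are illustrative values only, not those of the study:

```python
# Explicit upwind finite-volume update for the kinematic-wave equation, as a
# stand-in for the paper's saturated lateral-flow module. All parameter values
# are invented for illustration.
n, L = 20, 10.0          # number of cells, roof length (m)
dx = L / n
c, r = 0.05, 1e-4        # wave celerity (m/s), recharge from 1D columns (m/s)
dt = 0.8 * dx / c        # CFL-stable time step
h = [0.0] * n            # water depth per cell (m)
cum_in = cum_out = 0.0   # per-unit-width volumes (m^2) for the mass balance

for _ in range(500):
    out = c * h[-1] * dt               # flux leaving the downslope edge
    new = [0.0] * n
    for i in range(n):
        upstream = h[i - 1] if i > 0 else 0.0
        new[i] = h[i] - c * dt / dx * (h[i] - upstream) + r * dt
    h = new
    cum_in += r * L * dt
    cum_out += out

storage = sum(h) * dx
print(f"mass-balance error: {storage + cum_out - cum_in:.2e} m^2")
```

Because the upwind update telescopes across cells, the discrete mass balance closes to floating-point rounding, a useful sanity check for any such scheme.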

  6. Simulation modeling for the health care manager.

    PubMed

    Kennedy, Michael H

    2009-01-01

    This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
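A minimal illustration of the Monte Carlo method applied to patient flow: a single-server clinic with exponential interarrival and service times. This hypothetical M/M/1 setup only hints at what the article's commercial packages model:

```python
# Monte Carlo sketch of patient wait times in a single-server clinic.
# Arrival and service rates are hypothetical illustrative values.
import random

random.seed(42)
ARRIVAL_RATE, SERVICE_RATE = 4.0, 5.0   # patients per hour

def simulate(n_patients):
    """Average wait (hours) before service for n_patients arrivals."""
    t_arrive = t_free = 0.0
    waits = []
    for _ in range(n_patients):
        t_arrive += random.expovariate(ARRIVAL_RATE)   # next arrival
        start = max(t_arrive, t_free)                  # wait if server busy
        waits.append(start - t_arrive)
        t_free = start + random.expovariate(SERVICE_RATE)
    return sum(waits) / len(waits)

avg_wait_h = simulate(10_000)
print(f"average wait: {avg_wait_h * 60:.1f} minutes")
```

For these rates the analytic M/M/1 mean queueing wait is lambda/(mu*(mu-lambda)) = 4/(5*1) = 0.8 h, which the simulated average approaches as the patient count grows.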

  7. Aerodynamic Design on Unstructured Grids for Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Bonhaus, Daryl L.

    1997-01-01

    An aerodynamic design algorithm for turbulent flows using unstructured grids is described. The current approach uses adjoint (costate) variables for obtaining derivatives of the cost function. The solution of the adjoint equations is obtained using an implicit formulation in which the turbulence model is fully coupled with the flow equations when solving for the costate variables. The accuracy of the derivatives is demonstrated by comparison with finite-difference gradients, and a few example computations are shown. In addition, a user interface is described which significantly reduces the time required for setting up design problems. Recommendations on directions of further research into the Navier-Stokes design process are made.
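The costate idea can be miniaturized to a scalar model problem: one state equation, one design variable, and a comparison of the adjoint gradient against finite differences. This toy is ours, not the paper's Navier-Stokes setting:

```python
# Adjoint vs. finite-difference gradient on a scalar model problem:
# state equation R(u, a) = a*u - b = 0, cost J(u) = 0.5*u**2.
def solve_state(a, b=2.0):
    return b / a                     # u satisfying R(u, a) = 0

def cost(a, b=2.0):
    return 0.5 * solve_state(a, b) ** 2

def adjoint_gradient(a, b=2.0):
    u = solve_state(a, b)
    lam = u / a                      # adjoint: (dR/du)^T lam = dJ/du
    return -lam * u                  # dJ/da = -lam * dR/da, with dR/da = u

a = 1.5
fd = (cost(a + 1e-6) - cost(a - 1e-6)) / 2e-6
print(adjoint_gradient(a), fd)       # the two gradients should agree
```

The appeal in design work is that the adjoint solve costs about one extra state solve regardless of the number of design variables, whereas finite differences cost one solve per variable.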

  8. Inclusion of tank configurations as a variable in the cost optimization of branched piped-water networks

    NASA Astrophysics Data System (ADS)

    Hooda, Nikhil; Damani, Om

    2017-06-01

    The classic problem of the capital cost optimization of branched piped networks consists of choosing pipe diameters for each pipe in the network from a discrete set of commercially available pipe diameters. Each pipe in the network can consist of multiple segments of differing diameters. Water networks also consist of intermediate tanks that act as buffers between incoming flow from the primary source and the outgoing flow to the demand nodes. The network from the primary source to the tanks is called the primary network, and the network from the tanks to the demand nodes is called the secondary network. During the design stage, the primary and secondary networks are optimized separately, with the tanks acting as demand nodes for the primary network. Typically the choice of tank locations, their elevations, and the set of demand nodes to be served by different tanks is manually made in an ad hoc fashion before any optimization is done. It is desirable therefore to include this tank configuration choice in the cost optimization process itself. In this work, we explain why the choice of tank configuration is important to the design of a network and describe an integer linear program model that integrates the tank configuration to the standard pipe diameter selection problem. In order to aid the designers of piped-water networks, the improved cost optimization formulation is incorporated into our existing network design system called JalTantra.

  9. Interpolating between random walks and optimal transportation routes: Flow with multiple sources and targets

    NASA Astrophysics Data System (ADS)

    Guex, Guillaume

    2016-05-01

    In recent articles about graphs, different models have proposed a formalism to find a type of path between two nodes, the source and the target, at a crossroads between the shortest path and the random-walk path. These models include a freely adjustable parameter, allowing one to tune the behavior of the path toward randomized movements or direct routes. This article presents a natural generalization of these models, namely a model with multiple sources and targets. In this context, source nodes can be viewed as locations with a supply of a certain good (e.g. people, money, information) and target nodes as locations with a demand for the same good. An algorithm is constructed to display the flow of goods in the network between sources and targets. Again with a freely adjustable parameter, this flow can be tuned to follow routes of minimum cost, thus displaying the flow in the context of the optimal transportation problem, or, by contrast, a random flow, known to be similar to the electrical current flow if the random walk is reversible. Moreover, a source-target coupling can be retrieved from this flow, offering an optimal assignment for the transportation problem. This algorithm is described in the first part of this article and then illustrated with case studies.

  10. Alkaline quinone flow battery.

    PubMed

    Lin, Kaixiang; Chen, Qing; Gerhardt, Michael R; Tong, Liuchuan; Kim, Sang Bok; Eisenach, Louise; Valle, Alvaro W; Hardee, David; Gordon, Roy G; Aziz, Michael J; Marshak, Michael P

    2015-09-25

    Storage of photovoltaic and wind electricity in batteries could solve the mismatch problem between the intermittent supply of these renewable resources and variable demand. Flow batteries permit more economical long-duration discharge than solid-electrode batteries by using liquid electrolytes stored outside of the battery. We report an alkaline flow battery based on redox-active organic molecules that are composed entirely of Earth-abundant elements and are nontoxic, nonflammable, and safe for use in residential and commercial environments. The battery operates efficiently with high power density near room temperature. These results demonstrate the stability and performance of redox-active organic molecules in alkaline flow batteries, potentially enabling cost-effective stationary storage of renewable energy. Copyright © 2015, American Association for the Advancement of Science.

  11. Using lean manufacturing principles to evaluate wait times for HIV-positive patients in an urban clinic in Kenya.

    PubMed

    Monroe-Wise, Aliza; Reisner, Elizabeth; Sherr, Kenneth; Ojakaa, David; Mbau, Lilian; Kisia, Paul; Muhula, Samuel; Farquhar, Carey

    2017-12-01

    As human immunodeficiency virus (HIV) treatment programs expand in Africa, delivery systems must be strengthened to support patient retention. Clinic characteristics may affect retention, but a relationship between clinic flow and attrition is not established. This project characterized HIV patient experience and flow in an urban Kenyan clinic to understand how these may affect retention. We used Toyota's lean manufacturing principles to guide data collection and analysis. Clinic flow was evaluated using value stream mapping and time and motion techniques. Clinic register data were analyzed. Two focus group discussions were held to characterize HIV patient experience. Results were shared with clinic staff. Wait times in the clinic were highly variable. We identified four main barriers to patient flow: inconsistent patient arrivals, inconsistent staffing, filing system defects, and serving patients out of order. Focus group participants explained how clinic operations affected their ability to engage in care. Clinic staff were eager to discuss the problems identified and identified numerous low-cost potential solutions. Lean manufacturing methodologies can guide efficiency interventions in low-resource healthcare settings. Using lean techniques, we identified bottlenecks to clinic flow and low-cost solutions to improve wait times. Improving flow may result in increased patient satisfaction and retention.

  12. Brief Communication: A low-cost Arduino®-based wire extensometer for earth flow monitoring

    NASA Astrophysics Data System (ADS)

    Guerriero, Luigi; Guerriero, Giovanni; Grelle, Gerardo; Guadagno, Francesco M.; Revellino, Paola

    2017-06-01

    Continuous monitoring of earth flow displacement is essential for understanding the dynamics of the process and its ongoing evolution, and for designing mitigation measures. Despite its importance, such monitoring is not always applied, due to its expense and the need for integration with additional sensors that monitor the factors controlling movement. To overcome these problems, we developed and tested a low-cost Arduino-based wire-rail extensometer integrating a data logger, a power system, and multiple digital and analog inputs. The system is equipped with a high-precision position transducer that, in the test configuration, offers a measuring range of 1023 mm with an associated accuracy of ±1 mm, and integrates an operating temperature sensor that should allow the thermal drift that typically affects this kind of system to be identified and corrected. A field test, conducted at the Pietrafitta earth flow where additional monitoring systems had been installed, indicates high measurement reliability and high monitoring stability without visible thermal drift.
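A sketch of the kind of conversion such a system performs: mapping a 10-bit ADC reading from the position transducer (1023 counts spanning the 1023 mm range, hence about 1 mm per count) to a displacement, with a linear thermal-drift correction. The drift coefficient and reference temperature are hypothetical, not the published instrument's calibration:

```python
# ADC-count to displacement conversion with a linear thermal correction.
# ALPHA and T_REF are invented calibration constants for illustration.
ADC_MAX = 1023            # 10-bit ADC full-scale reading
RANGE_MM = 1023.0         # transducer range (mm), ~1 mm per count
ALPHA = 0.02              # assumed drift coefficient (mm per deg C)
T_REF = 20.0              # reference calibration temperature (deg C)

def displacement_mm(adc_counts, temp_c):
    raw = adc_counts * RANGE_MM / ADC_MAX
    return raw - ALPHA * (temp_c - T_REF)

print(displacement_mm(512, 20.0))   # mid-range reading, no thermal correction
```

Logging the operating temperature alongside each reading is what makes this kind of post-hoc drift correction possible at all.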

  13. Performance and cavitation characteristics of bi-directional hydrofoils

    NASA Astrophysics Data System (ADS)

    Nedyalkov, Ivaylo; Wosnik, Martin

    2013-11-01

    Tidal turbines extract energy from flows that reverse direction. One way to address this bi-directionality in horizontal-axis turbines, while avoiding complex and maintenance-intensive yaw or blade-pitch mechanisms, is to design bi-directional blades which perform (equally) well in either flow direction. A large number of proposed hydrofoil designs were investigated using numerical simulations. Selected candidate foils were also tested (at various speeds and angles of attack) in the High-Speed Cavitation Tunnel (HICaT) at the University of New Hampshire. Lift and drag were measured using a force balance, and cavitation inception and desinence were recorded. Experimental and numerical results were compared, and the foils were compared to each other and to reference foils. Bi-directional hydrofoils may provide a feasible solution to the problem of reversing flow direction when their performance and cavitation characteristics are comparable to those of unidirectional foils, and the penalty in decreased energy production is outweighed by the cost reduction due to lower complexity and correspondingly lower installation and maintenance costs.

  14. Intraventricular vector flow mapping—a Doppler-based regularized problem with automatic model selection

    NASA Astrophysics Data System (ADS)

    Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien

    2017-09-01

    We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.
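The mechanics of the regularized least-squares formulation can be shown in miniature, with a plain Tikhonov penalty standing in for the physics-based regularizer (mass conservation, boundary conditions, smoothness) used by iVFM. The toy design matrix and data are invented:

```python
# Regularized least squares in miniature: minimize ||A x - b||^2 + lam*||x||^2
# via the normal equations (A^T A + lam*I) x = A^T b, for a 2-parameter model.
def solve_2x2(M, rhs):
    (p, q), (r, s) = M
    det = p * s - q * r
    return [(s * rhs[0] - q * rhs[1]) / det, (p * rhs[1] - r * rhs[0]) / det]

def regularized_ls(A, b, lam):
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    AtA[0][0] += lam          # Tikhonov term adds lam to the diagonal
    AtA[1][1] += lam
    return solve_2x2(AtA, Atb)

A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]   # toy design matrix (line fit)
b = [1.0, 2.0, 2.9]                        # noisy observations
ols = regularized_ls(A, b, 0.0)            # ordinary least squares
reg = regularized_ls(A, b, 1.0)            # shrunk toward zero
print(ols, reg)
```

Increasing the regularization weight shrinks the solution norm; choosing the weights well is exactly the trade-off the paper's L-hypersurface analysis automates.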

  15. Edgelist phase unwrapping algorithm for time series InSAR analysis.

    PubMed

    Shanker, A Piyush; Zebker, Howard

    2010-03-01

    We present here a new integer programming formulation for phase unwrapping of multidimensional data. Phase unwrapping is a key problem in many coherent imaging systems, including time series synthetic aperture radar interferometry (InSAR), with two spatial and one temporal data dimensions. The minimum cost flow (MCF) [IEEE Trans. Geosci. Remote Sens. 36, 813 (1998)] phase unwrapping algorithm describes a global cost minimization problem involving flow between phase residues computed over closed loops. Here we replace closed loops by reliable edges as the basic construct, thus leading to the name "edgelist." Our algorithm has several advantages over current methods: it simplifies the representation of multidimensional phase unwrapping, it incorporates data from external sources, such as GPS, where available to better constrain the unwrapped solution, and it treats regularly sampled or sparsely sampled data alike. It thus is particularly applicable to time series InSAR, where data are often irregularly spaced in time and individual interferograms can be corrupted with large decorrelated regions. We show that, similar to the MCF network problem, the edgelist formulation also exhibits total unimodularity, which enables us to solve the integer program by using efficient linear programming tools. We apply our method to a persistent scatterer-InSAR data set from the creeping section of the Central San Andreas Fault and find that the average creep rate of 22 mm/yr is constant within 3 mm/yr over 1992-2004 but varies systematically with ground location, with a slightly higher rate in 1992-1998 than in 1999-2003.
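The basic wrap/unwrap idea underlying any such algorithm can be shown in one dimension; the edgelist method generalizes this to multidimensional, irregularly sampled data via integer programming:

```python
# One-dimensional phase unwrapping: add multiples of 2*pi so that successive
# samples differ by less than pi. Works whenever the true phase changes by
# less than pi between samples (the ambiguity the 2D/3D formulations resolve).
import math

def wrap(phi):
    """Map a phase to the principal interval (-pi, pi]."""
    return (phi + math.pi) % (2 * math.pi) - math.pi

def unwrap(wrapped):
    out = [wrapped[0]]
    for prev, cur in zip(wrapped, wrapped[1:]):
        out.append(out[-1] + wrap(cur - prev))  # wrapped phase increment
    return out

true = [0.3 * i for i in range(40)]          # a steadily increasing phase ramp
wrapped = [wrap(p) for p in true]            # what an interferogram records
recovered = unwrap(wrapped)
print(max(abs(r - t) for r, t in zip(recovered, true)))  # reconstruction error
```

In 2D and 3D, residues make this path-following ambiguous, which is why MCF and edgelist formulations cast the choice of 2π jumps as a global optimization.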

  16. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  17. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
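The settle-to-stationary-voltage idea can be imitated numerically: inject current into one node of a small resistive line and relax Kirchhoff's current law with Gauss-Seidel sweeps. The conductance values are arbitrary illustrative choices:

```python
# Stationary voltages on a resistive line by Gauss-Seidel relaxation: a toy
# analogue of the analog networks described for computing optical flow.
N = 11
g, g0 = 1.0, 0.5          # neighbor and leak-to-ground conductances
inj = [0.0] * N
inj[N // 2] = 1.0         # 1 A injected at the center node
v = [0.0] * N             # node voltages

for _ in range(2000):     # relax toward the stationary solution
    for i in range(N):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < N - 1 else 0.0
        nbrs = (1 if i > 0 else 0) + (1 if i < N - 1 else 0)
        # Solve node i's Kirchhoff current law with neighbors held fixed.
        v[i] = (inj[i] + g * (left + right)) / (g0 + nbrs * g)

print([round(x, 3) for x in v])   # voltage profile peaked at the injection node
```

The physical network reaches this solution at circuit time constants, in parallel, which is exactly the appeal of the analog VLSI implementation.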

  18. A Three-Dimensional Finite-Element Model for Simulating Water Flow in Variably Saturated Porous Media

    NASA Astrophysics Data System (ADS)

    Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.

    1986-12-01

    A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.
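    The Picard algorithm mentioned above lags the nonlinear coefficients by one iterate, so that every step becomes a linear solve. A scalar caricature of that idea (the conductivity law and right-hand side below are invented, not the model's soil-moisture relations):

```python
# Picard (successive-substitution) sketch: solve K(h) * h = f by
# evaluating the nonlinear coefficient K at the previous iterate, so each
# step is a "linear solve". Invented scalar model: K(h) = 1 + h, f = 2,
# whose fixed point is h = 1.

def K(h):
    return 1.0 + h       # assumed nonlinear "conductivity"

f = 2.0
h = 0.0                  # initial guess
for iteration in range(100):
    h_prev, h = h, f / K(h)          # linear solve, coefficient lagged
    if abs(h - h_prev) < 1e-12:      # convergence of the iterates
        break
```

    In the finite-element setting the scalar division becomes a sparse linear solve per iteration, and schemes like the paper's improved Picard algorithm add damping or line search to keep severely nonlinear cases convergent.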

  19. Effects of acuity-adaptable rooms on flow of patients and delivery of care.

    PubMed

    Hendrich, Ann L; Fay, Joy; Sorrells, Amy K

    2004-01-01

    Delayed transfers of patients between nursing units and lack of available beds are significant problems that increase costs and decrease quality of care and satisfaction among patients and staff. To test whether use of acuity-adaptable rooms helps solve problems with transfers of patients, satisfaction levels, and medical errors. A pre-post method was used to compare the effects of environmental design on various clinical and financial measures. Twelve outcome-based questions were formulated as the basis for inquiry. Two years of baseline data were collected before the unit moved and were compared with 3 years of data collected after the move. Significant improvements in quality and operational cost occurred after the move, including a large reduction in clinician handoffs and transfers; reductions in medication error and patient fall indexes; improvements in predictive indicators of patients' satisfaction; decrease in budgeted nursing hours per patient day and increased available nursing time for direct care without added cost; increase in patient days per bed, with a smaller bed base (number of beds per patient days). Some staff turnover occurred during the first year; turnover stabilized thereafter. Data in 5 key areas (flow of patients and hospital capacity, patients' dissatisfaction, sentinel events, mean length of stay, and allocation of nursing productivity) appear to be sufficient to test the business case for future investment in partial or complete replication of this model with appropriate populations of patients.

  20. Multi-Objective Differential Evolution for Voltage Security Constrained Optimal Power Flow in Deregulated Power Systems

    NASA Astrophysics Data System (ADS)

    Roselyn, J. Preetha; Devaraj, D.; Dash, Subhransu Sekhar

    2013-11-01

    Voltage stability is an important issue in the planning and operation of deregulated power systems. The voltage stability problem is among the most challenging ones for system operators in deregulated power systems because of the intense use of transmission line capabilities and poor regulation in the market environment. This article addresses the congestion management problem, avoiding offline transmission capacity limits related to voltage stability, by considering the Voltage Security Constrained Optimal Power Flow (VSCOPF) problem in a deregulated environment. It presents the application of a Multi-Objective Differential Evolution (MODE) algorithm to solve the VSCOPF problem in new competitive power systems. The maximum L-index over the load buses is taken as the indicator of voltage stability and is incorporated into the Optimal Power Flow (OPF) problem. The proposed method, applied in a hybrid power market, also addresses voltage stability by considering the generation rescheduling cost and the load shedding cost, thereby relieving congestion in the deregulated environment. The buses for load shedding are selected based on the minimum eigenvalue of the Jacobian with respect to the load shed. In the proposed approach, the real power settings of generators in the base case and contingency cases, generator bus voltage magnitudes, and the real and reactive power demands of load buses selected using sensitivity analysis are taken as the control variables and are represented as a combination of floating-point numbers and integers. The DE/randSF/1/bin strategy of differential evolution with self-tuned parameters, which employs binomial crossover and difference-vector-based mutation, is used for the VSCOPF problem. A fuzzy-based mechanism is employed to extract the best compromise solution from the Pareto front to aid the decision maker. The proposed VSCOPF planning model is implemented on the IEEE 30-bus system, the IEEE 57-bus practical system, and the IEEE 118-bus system. The Pareto-optimal front obtained from MODE is compared with a reference Pareto front, and the best compromise solution for all cases is obtained from the fuzzy decision-making strategy. The performance measures of the proposed MODE in two of the test systems are calculated using suitable performance metrics. The simulation results show that the proposed approach provides considerable improvement in congestion management through generation rescheduling and load shedding while enhancing voltage stability in the deregulated power system.
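    The difference-vector mutation and binomial crossover named above can be sketched on a toy objective. This is the classic DE/rand/1/bin operator with a fixed F (the article's DE/randSF/1/bin self-tunes F, and its real objective is the multi-objective VSCOPF cost; both are omitted here, and the sphere function, population size, and control parameters are all assumed):

```python
import random

random.seed(0)

def cost(x):
    # toy stand-in for the VSCOPF objective: sphere function
    return sum(xi * xi for xi in x)

NP, D, F, CR = 20, 2, 0.8, 0.9       # assumed control parameters
pop = [[random.uniform(-5.0, 5.0) for _ in range(D)] for _ in range(NP)]

for gen in range(200):
    for i in range(NP):
        r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
        # difference-vector mutation: v = x_r1 + F * (x_r2 - x_r3)
        v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
        # binomial crossover: take at least one component from v
        jrand = random.randrange(D)
        trial = [v[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(D)]
        # greedy selection: keep the trial vector if it is no worse
        if cost(trial) <= cost(pop[i]):
            pop[i] = trial

best = min(pop, key=cost)
```

    The multi-objective version replaces the greedy selection with non-dominated sorting and crowding-distance ranking over the population.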

  1. Impact of Uncertainty from Load-Based Reserves and Renewables on Dispatch Costs and Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Bowen; Maroukis, Spencer D.; Lin, Yashen

    2016-11-21

    Aggregations of controllable loads are considered a fast-responding, cost-efficient, and environmentally friendly candidate for power system ancillary services. Unlike conventional service providers, the potential capacity of such an aggregation is highly affected by factors like ambient conditions and load usage patterns. Previous work modeled aggregations of controllable loads (such as air conditioners) as thermal batteries, which are capable of providing reserves but with uncertain capacity. A stochastic optimal power flow problem was formulated to manage this uncertainty, as well as uncertainty in renewable generation. In this paper, we explore how the types and levels of uncertainty, generation reserve costs, and controllable load capacity affect the dispatch solution, operational costs, and CO2 emissions. We also compare the results of two methods for solving the stochastic optimization problem, namely the probabilistically robust method and analytical reformulation assuming Gaussian distributions. Case studies are conducted on a modified IEEE 9-bus system with renewables, controllable loads, and congestion. We find that different types and levels of uncertainty have significant impacts on dispatch and emissions. More controllable loads and less conservative solution methodologies lead to lower costs and emissions.
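    The gap between the two solution methods can be illustrated on a single chance constraint (all numbers invented, and this is a generic textbook reformulation rather than the paper's exact model): under a Gaussian assumption the constraint tightens by a normal quantile, while a distribution-free robust bound tightens by a larger, more conservative multiplier.

```python
from statistics import NormalDist

# One chance constraint on a line flow (all numbers assumed):
#     P(flow + w <= cap) >= 1 - eps,   w ~ N(0, sigma^2)
# Gaussian analytical reformulation: flow <= cap - z_(1-eps) * sigma.
cap, sigma, eps = 100.0, 5.0, 0.05
z = NormalDist().inv_cdf(1.0 - eps)         # standard normal quantile
flow_limit = cap - z * sigma

# A distribution-free (Chebyshev-style) robust counterpart replaces z by
# sqrt((1 - eps) / eps), which is substantially more conservative.
chebyshev_mult = ((1.0 - eps) / eps) ** 0.5
robust_limit = cap - chebyshev_mult * sigma
```

    The robust limit is tighter (lower) than the Gaussian one, which is the mechanism behind the abstract's observation that less conservative methodologies lead to lower costs.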

  2. AQMAN; linear and quadratic programming matrix generator using two-dimensional ground-water flow simulation for aquifer management modeling

    USGS Publications Warehouse

    Lefkoff, L.J.; Gorelick, S.M.

    1987-01-01

    AQMAN is a FORTRAN-77 computer program that helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The program creates the input files to be used by the optimization program; these files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well, and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
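    The response-matrix idea can be reduced to a few lines (the coefficients below are made up): simulate a unit pumping stress at each decision well once, record the drawdown at each control location, and thereafter predict the drawdown of any candidate pumping strategy by linear superposition.

```python
# Invented response matrix: r[k][j] is the drawdown at control point k
# per unit pumping stress at decision well j (obtained, in AQMAN, from
# one simulation run per well).
r = [
    [0.30, 0.10],
    [0.05, 0.25],
]
q = [40.0, 20.0]    # candidate pumping rates at the two wells

# Drawdown at each control point by linear superposition of unit responses.
s = [sum(r[k][j] * q[j] for j in range(len(q))) for k in range(len(r))]

# These s[k] become the left-hand sides of linear head constraints
# (e.g. s[k] <= s_max[k]) in the management LP or QP.
```

    Because the flow equation is linear in the stresses, the expensive simulation model is run only once per decision well, and the optimization then works entirely with the small response matrix.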

  3. Recent Progress on the Parallel Implementation of Moving-Body Overset Grid Schemes

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew; Allen, Edwin (Technical Monitor)

    1998-01-01

    Viscous calculations about geometrically complex bodies in which there is relative motion between component parts are among the most computationally demanding problems facing CFD researchers today. This presentation documents results from the first two years of a CHSSI-funded effort within the U.S. Army AFDD to develop scalable dynamic overset grid methods for unsteady viscous calculations with moving-body problems. The first part of the presentation will focus on results from OVERFLOW-D1, a parallelized moving-body overset grid scheme that employs traditional Chimera methodology. The two processes that dominate the cost of such problems are the flow solution on each component and the intergrid connectivity solution. Parallel implementations of the OVERFLOW flow solver and DCF3D connectivity software are coupled with a proposed two-part static-dynamic load balancing scheme and tested on the IBM SP and Cray T3E multi-processors. The second part of the presentation will cover some recent results from OVERFLOW-D2, a new flow solver that employs Cartesian grids with various levels of refinement, facilitating solution adaption. A study of the parallel performance of the scheme on large distributed-memory multiprocessor computer architectures will be reported.

  4. Optimizing sterilization logistics in hospitals.

    PubMed

    van de Klundert, Joris; Muls, Philippe; Schadd, Maarten

    2008-03-01

    This paper deals with the optimization of the flow of sterile instruments in hospitals which takes place between the sterilization department and the operating theatre. This topic is especially of interest in view of the current attempts of hospitals to cut cost by outsourcing sterilization tasks. Oftentimes, outsourcing implies placing the sterilization unit at a larger distance, hence introducing a longer logistic loop, which may result in lower instrument availability, and higher cost. This paper discusses the optimization problems that have to be solved when redesigning processes so as to improve material availability and reduce cost. We consider changing the logistic management principles, use of visibility information, and optimizing the composition of the nets of sterile materials.

  5. A multiscale fixed stress split iterative scheme for coupled flow and poromechanics in deep subsurface reservoirs

    NASA Astrophysics Data System (ADS)

    Dana, Saumik; Ganis, Benjamin; Wheeler, Mary F.

    2018-01-01

    In coupled flow and poromechanics phenomena representing hydrocarbon production or CO2 sequestration in deep subsurface reservoirs, the spatial domain in which fluid flow occurs is usually much smaller than the spatial domain over which significant deformation occurs. The typical approach is to either impose an overburden pressure directly on the reservoir thus treating it as a coupled problem domain or to model flow on a huge domain with zero permeability cells to mimic the no flow boundary condition on the interface of the reservoir and the surrounding rock. The former approach precludes a study of land subsidence or uplift and further does not mimic the true effect of the overburden on stress sensitive reservoirs whereas the latter approach has huge computational costs. In order to address these challenges, we augment the fixed-stress split iterative scheme with upscaling and downscaling operators to enable modeling flow and mechanics on overlapping nonmatching hexahedral grids. Flow is solved on a finer mesh using a multipoint flux mixed finite element method and mechanics is solved on a coarse mesh using a conforming Galerkin method. The multiscale operators are constructed using a procedure that involves singular value decompositions, a surface intersections algorithm and Delaunay triangulations. We numerically demonstrate the convergence of the augmented scheme using the classical Mandel's problem solution.
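    The fixed-stress split alternates a flow solve with lagged mechanics and a mechanics solve with the updated pressure. A scalar caricature of that iteration, with an invented 2x2 linear system standing in for the discretized flow and mechanics operators (the paper's scheme additionally transfers fields between the nonmatching grids with its multiscale operators):

```python
# Invented coupled system in pressure p and displacement u:
#   flow:      a11*p + a12*u = f1
#   mechanics: a21*p + a22*u = f2
a11, a12, f1 = 4.0, 1.0, 9.0
a21, a22, f2 = 1.0, 3.0, 8.0

p, u = 0.0, 0.0
for it in range(100):
    p_new = (f1 - a12 * u) / a11       # flow solve, mechanics frozen
    u_new = (f2 - a21 * p_new) / a22   # mechanics solve, pressure updated
    converged = abs(p_new - p) + abs(u_new - u) < 1e-12
    p, u = p_new, u_new
    if converged:
        break
```

    This is a block Gauss-Seidel iteration on the coupled system; the fixed-stress stabilization term, omitted in this caricature, is what makes the split provably convergent for the actual poromechanics equations.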

  6. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, rigorous proof of convergence is established to the global minimum of the problem considered.

  7. Congestion patterns of electric vehicles with limited battery capacity.

    PubMed

    Jing, Wentao; Ramezani, Mohsen; An, Kun; Kim, Inhi

    2018-01-01

    The path choice behavior of battery electric vehicle (BEV) drivers is influenced by the lack of public charging stations, limited battery capacity, range anxiety and long battery charging time. This paper investigates the congestion/flow pattern captured by stochastic user equilibrium (SUE) traffic assignment problem in transportation networks with BEVs, where the BEV paths are restricted by their battery capacities. The BEV energy consumption is assumed to be a linear function of path length and path travel time, which addresses both path distance limit problem and road congestion effect. A mathematical programming model is proposed for the path-based SUE traffic assignment where the path cost is the sum of the corresponding link costs and a path specific out-of-energy penalty. We then apply the convergent Lagrangian dual method to transform the original problem into a concave maximization problem and develop a customized gradient projection algorithm to solve it. A column generation procedure is incorporated to generate the path set. Finally, two numerical examples are presented to demonstrate the applicability of the proposed model and the solution algorithm.
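    The gradient-projection idea can be sketched on a toy two-path network (all numbers invented; the paper's model is stochastic and adds the out-of-energy penalty and column generation, omitted here). Each step moves flow toward the cheaper path along the gradient of a Beckmann-type objective, then projects back onto the feasible set {f1 + f2 = q, f >= 0}; at equilibrium the used paths have equal cost.

```python
q = 10.0                   # travel demand between the origin and destination

def c1(f):                 # path 1 cost: free-flow 1.0, congestion slope 0.2
    return 1.0 + 0.2 * f

def c2(f):                 # path 2 cost: free-flow 2.0, congestion slope 0.1
    return 2.0 + 0.1 * f

f1 = q / 2.0               # start from an even split
step = 0.5                 # assumed step size
for _ in range(500):
    f2 = q - f1
    f1 -= step * (c1(f1) - c2(f2))   # descend the cost difference
    f1 = min(max(f1, 0.0), q)        # projection onto the simplex

f2 = q - f1
```

    With a path set generated by column generation, the same update is applied path-by-path relative to the cheapest path of each origin-destination pair.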

  8. Analysis of a parallelized nonlinear elliptic boundary value problem solver with application to reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.; Smooke, Mitchell D.

    1987-01-01

    A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.

  10. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.

  11. Parametric Study of a YAV-8B Harrier in Ground Effect Using Time-Dependent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical because of the large cost of computing many time-dependent solutions. The computation time for each solution must therefore be reduced to make a parametric study possible. With a successful reduction of computation time, the issues of accuracy and the appropriateness of turbulence models will become more tractable.

  12. High order solution of Poisson problems with piecewise constant coefficients and interface jumps

    NASA Astrophysics Data System (ADS)

    Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben

    2017-04-01

    We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. in fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
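    The final ingredient, a fast Poisson solve on a rectangular domain, can be miniaturized to one dimension (an invented model problem; in 1-D the tridiagonal Thomas algorithm plays the role the FFT-based solver plays on 2-D Cartesian grids):

```python
import math

# Solve u'' = f on (0,1) with u(0) = u(1) = 0 by second-order finite
# differences; the exact solution of this assumed test case is
# u = sin(pi*x), so f = -pi^2 * sin(pi*x).
n = 200                          # interior grid points
h = 1.0 / (n + 1)
x = [(i + 1) * h for i in range(n)]
f = [-math.pi ** 2 * math.sin(math.pi * xi) for xi in x]

# Thomas algorithm for the tridiagonal system
#   u[i-1] - 2*u[i] + u[i+1] = h^2 * f[i]
a, b, c = 1.0, -2.0, 1.0         # constant stencil coefficients
cp = [0.0] * n                   # modified upper-diagonal coefficients
dp = [0.0] * n                   # modified right-hand side
cp[0] = c / b
dp[0] = (h * h * f[0]) / b
for i in range(1, n):
    m = b - a * cp[i - 1]
    cp[i] = c / m
    dp[i] = (h * h * f[i] - a * dp[i - 1]) / m
u = [0.0] * n
u[-1] = dp[-1]                   # back substitution
for i in range(n - 2, -1, -1):
    u[i] = dp[i] - cp[i] * u[i + 1]

err = max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n))
```

    The observed error is the expected O(h^2) of the standard stencil; the CFM's contribution in the paper is a corrected right-hand side for stencils crossing an interface, which restores high-order accuracy despite the solution jump.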

  13. Numerical investigation of the flow inside the combustion chamber of a plant oil stove

    NASA Astrophysics Data System (ADS)

    Pritz, B.; Werler, M.; Wirbser, H.; Gabi, M.

    2013-10-01

    Recently, a low-cost cooking device for developing and emerging countries was developed at KIT in cooperation with the company Bosch und Siemens Hausgeräte GmbH. After an innovative basic design was constructed, further development was required. Numerical investigations were conducted to study the flow inside the combustion chamber of the stove under variation of different geometrical parameters. Beyond performance improvement, a further aim of the investigations was to assess the effects of manufacturing tolerances. In this paper, the numerical investigation of a plant oil stove by means of RANS simulation is presented. To reduce the computational costs, several model reduction steps were necessary. The simulation results for the basic configuration compare very well with experimental measurements, and problematic behaviors of the actual stove design could be explained by the investigation.

  14. Peracetic acid as an alternative disinfection technology for wet weather flows.

    PubMed

    Coyle, Elizabeth E; Ormsbee, Lindell E; Brion, Gail M

    2014-08-01

    Rain-induced wet weather flows (WWFs) consist of combined sewer overflows, sanitary sewer overflows, and stormwater, all of which introduce pathogens to surface waters when discharged. When people come into contact with the contaminated surface water, these pathogens can be transmitted, resulting in severe health problems. As such, WWFs should be disinfected. Traditional disinfection technologies are typically cost-prohibitive, can yield toxic byproducts, and space for facilities is often limited, if available at all. More cost-effective alternative technologies, requiring less space and producing less harmful byproducts, are currently being explored. Peracetic acid (PAA) was investigated as one such alternative, and this research has confirmed the feasibility and applicability of using PAA as a disinfectant for WWFs. Peracetic acid doses ranging from 5 mg/L to 15 mg/L over contact times of 2 to 10 minutes were shown to be effective and directly applicable to WWF disinfection.

  15. Artificial acoustic stiffness reduction in fully compressible, direct numerical simulation of combustion

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Trouvé, Arnaud

    2004-09-01

    A pseudo-compressibility method is proposed to modify the acoustic time step restriction found in fully compressible, explicit flow solvers. The method manipulates terms in the governing equations of order Ma², where Ma is a characteristic flow Mach number. A decrease in the speed of acoustic waves is obtained by adding an extra term in the balance equation for total energy. This term is proportional to flow dilatation and uses a decomposition of the dilatational field into an acoustic component and a component due to heat transfer. The present method is a variation of the pressure gradient scaling (PGS) method proposed in Ramshaw et al (1985, 'Pressure gradient scaling method for fluid flow with nearly uniform pressure', J. Comput. Phys. 58 361-76). It achieves gains in computational efficiency similar to PGS: at the cost of a slightly more involved right-hand-side computation, the numerical time step increases by a full order of magnitude. It also features the added benefit of preserving the hydrodynamic pressure field. The original and modified PGS methods are implemented in a parallel direct numerical simulation solver developed for applications to turbulent reacting flows with detailed chemical kinetics. The performance of the pseudo-compressibility methods is illustrated in a series of test problems ranging from isothermal sound propagation to laminar premixed flame problems.

  16. Effects of Chaos in Peristaltic Flows: Towards Biological Applications

    NASA Astrophysics Data System (ADS)

    Wakeley, Paul W.; Blake, John R.; Smith, David J.; Gaffney, Eamonn A.

    2006-11-01

    One in seven couples in the Western world will have problems conceiving naturally. With state-provided fertility treatment in the United Kingdom costing over USD 3 million per annum, and a privately funded round of treatment costing around USD 6000, the desire to understand the mechanisms of infertility is leading to a renewed interest in collaborations between mathematicians and reproductive biologists. Hydrosalpinx is a condition in which the oviduct becomes blocked, fluid-filled, and dilated. Many women with this condition are infertile, and the primary method of treatment is in vitro fertilisation; however, it is found that despite the embryo being implanted into the uterus, the hydrosalpinx adversely affects the implantation rate. We shall consider a mathematical model for peristaltic flow with an emphasis towards modelling the fluid flow in the oviducts and the uterus of humans. We shall consider the effects of chaotic behavior on the system, demonstrate that under certain initial conditions trapping regions can be formed, and discuss our results with a view towards understanding the effects of hydrosalpinx.

  17. Mitigating the Hook Effect in Lateral Flow Sandwich Immunoassays Using Real-Time Reaction Kinetics.

    PubMed

    Rey, Elizabeth G; O'Dell, Dakota; Mehta, Saurabh; Erickson, David

    2017-05-02

    The quantification of analyte concentrations using lateral flow assays is a low-cost and user-friendly alternative to traditional lab-based assays. However, sandwich-type immunoassays are often limited by the high-dose hook effect, which causes falsely low results when analytes are present at very high concentrations. In this paper, we present a reaction kinetics-based technique that solves this problem, significantly increasing the dynamic range of these devices. With the use of a traditional sandwich lateral flow immunoassay, a portable imaging device, and a mobile interface, we demonstrate the technique by quantifying C-reactive protein concentrations in human serum over a large portion of the physiological range. The technique could be applied to any hook effect-limited sandwich lateral flow assay and has a high level of accuracy even in the hook effect range.

  18. Surgery scheduling optimization considering real life constraints and comprehensive operation cost of operating room.

    PubMed

    Xiang, Wei; Li, Chong

    2015-01-01

    The Operating Room (OR) is a core sector of hospital expenditure, and its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both under-utilization and over-utilization. A nested Ant Colony Optimization (nested-ACO) incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. The 10-day manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. Comparison results show the advantage of using the nested-ACO in several measurements: OR-related time, nurse-related time, variation in resources' working time, and the end time. The nested-ACO, which considers real-life operation constraints such as the difference between the first and following cases, surgery priorities, and fixed nurses in the pre/post-operative stages, is proposed to solve the surgery scheduling optimization problem. The results clearly show the benefit of using the nested-ACO in enhancing OR management efficiency and minimizing the comprehensive operation cost.
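    The ant-colony machinery can be sketched on a single-room toy instance (durations, weights, and all ACO parameters below are invented; the paper's nested-ACO handles a three-stage flow and many more constraints): ants build surgery sequences guided by pheromone and a weight-per-duration heuristic, and the best sequence found so far reinforces the pheromone trails.

```python
import random

random.seed(1)

# Toy instance: order four surgeries on one OR to minimize total
# weighted completion time (durations and weights are made up).
dur = [4.0, 1.0, 3.0, 2.0]
weight = [1.0, 4.0, 1.0, 3.0]
n = len(dur)

def total_cost(seq):
    t, cost = 0.0, 0.0
    for j in seq:
        t += dur[j]
        cost += weight[j] * t
    return cost

tau = [[1.0] * n for _ in range(n)]      # pheromone tau[position][job]
best_seq, best_cost = None, float("inf")
for it in range(200):
    for ant in range(10):
        remaining = list(range(n))
        seq = []
        for pos in range(n):
            # pheromone times a weight-per-duration desirability heuristic
            ws = [tau[pos][j] * (weight[j] / dur[j]) for j in remaining]
            r = random.uniform(0.0, sum(ws))
            acc = 0.0
            for j, w in zip(remaining, ws):  # roulette-wheel selection
                acc += w
                if acc >= r:
                    break
            seq.append(j)
            remaining.remove(j)
        c = total_cost(seq)
        if c < best_cost:
            best_seq, best_cost = seq, c
    # evaporate, then reinforce the best-so-far sequence
    for pos in range(n):
        for j in range(n):
            tau[pos][j] *= 0.9
        tau[pos][best_seq[pos]] += 1.0
```

    On this instance the optimum matches the weighted-shortest-processing-time rule; the value of ACO appears once the real constraints (stages, nurses, priorities) make such closed-form rules unavailable.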

  19. Multi-objective optimization of discrete time-cost tradeoff problem in project networks using non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shahriari, Mohammadreza

    2016-06-01

    The time-cost tradeoff problem is one of the most important and applicable problems in the project scheduling area. Many factors can force managers to crash the project duration: early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the finishing time of the project, and project managers intend to spend the lowest possible amount of money while achieving the maximum crashing time, both direct and indirect costs are influenced, and the time value of money comes into play. When the starting activities of a project are crashed, the extra investment is tied up until the end date of the project; when the final activities are crashed, the extra investment is tied up for a much shorter period. This study presents a two-objective mathematical model for balancing compression of the project duration against delaying activities, providing a suitable tool for decision makers constrained by available facilities and project deadlines. The model is also drawn closer to real-world conditions by considering a nonlinear objective function and the time value of money. The problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.
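    At the core of NSGA-II is non-dominated sorting. A minimal sketch on invented (duration, cost) alternatives shows the extraction of the first front: a point dominates another if it is no worse in both objectives and strictly better in at least one.

```python
# Invented (duration, cost) alternatives for one small project.
points = [(10, 500), (12, 400), (14, 350), (11, 550), (13, 450), (15, 340)]

def dominates(p, q):
    # p dominates q: no worse in both objectives, and p != q makes the
    # "strictly better in at least one" condition hold for distinct points
    return p[0] <= q[0] and p[1] <= q[1] and p != q

# First non-dominated front: points no other point dominates.
front = [p for p in points if not any(dominates(q, p) for q in points)]
```

    NSGA-II repeats this peeling to rank the whole population into fronts and then uses crowding distance to spread solutions along each front; the fuzzy selection of a best compromise acts on the final front.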

  20. Flow simulations about steady-complex and unsteady moving configurations using structured-overlapped and unstructured grids

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1995-01-01

    The limiting factor in simulating flows past realistic configurations of interest has been the discretization of the physical domain on which the governing equations of fluid flow may be solved. In an attempt to circumvent this problem, many Computational Fluid Dynamics (CFD) methodologies based on different grid generation and domain decomposition techniques have been developed. However, due to the costs involved and the expertise required, very few comparative studies between these methods have been performed. In the present work, the two CFD methodologies which show the most promise for treating complex three-dimensional configurations as well as unsteady moving boundary problems are evaluated: the structured-overlapped and the unstructured grid schemes. Both methods use a cell-centered, finite volume, upwind approach. The structured-overlapped algorithm uses an approximately factored, alternating direction implicit scheme to perform the time integration, whereas the unstructured algorithm uses an explicit Runge-Kutta method. To examine the accuracy, efficiency, and limitations of each scheme, they are applied to the same steady complex multicomponent configurations and unsteady moving boundary problems. The steady complex cases consist of computing the subsonic flow about a two-dimensional high-lift multielement airfoil and the transonic flow about a three-dimensional wing/pylon/finned store assembly. The unsteady moving boundary problems are a forced pitching oscillation of an airfoil in a transonic freestream and a two-dimensional, subsonic airfoil/store separation sequence. Accuracy was assessed through the comparison of computed and experimentally measured pressure coefficient data on several of the wing/pylon/finned store assembly's components and at numerous angles of attack for the pitching airfoil. From this study, it was found that both the structured-overlapped and the unstructured grid schemes yielded flow solutions of comparable accuracy for these simulations. This study also indicated that, overall, the structured-overlapped scheme was slightly more CPU efficient than the unstructured approach.

  1. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for other expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus demonstrating the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design-space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
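    The Latin hypercube sampling step can be sketched as below: one sample per equal-probability stratum in each dimension, with strata independently permuted across dimensions. This is a minimal NumPy implementation assuming simple rectangular variable bounds; the Kriging interpolation and the STAR-CCM+ runs are outside this sketch.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample over a box. `bounds` is a list of
    (lo, hi) pairs, one per design variable. Returns an
    (n_samples, n_dims) array with exactly one point per stratum
    in every dimension."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # each row of the tiled arange is a permutation of the strata 0..n-1
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples  # stratified U(0,1)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)
```

    For ten samples in a unit first dimension, flooring `X[:, 0] * 10` recovers each stratum index exactly once, which is the defining LHS property.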

  2. Water facilities in retrospect and prospect: An illuminating tool for vehicle design

    NASA Technical Reports Server (NTRS)

    Erickson, G. E.; Peak, D. J.; Delfrate, J.; Skow, A. M.; Malcolm, G. N.

    1986-01-01

    Water facilities play a fundamental role in the design of air, ground, and marine vehicles by providing a qualitative, and sometimes quantitative, description of complex flow phenomena. Water tunnels, channels, and tow tanks used as flow-diagnostic tools have experienced a renaissance in recent years in response to the increased complexity of designs suitable for advanced technology vehicles. These vehicles are frequently characterized by large regions of steady and unsteady three-dimensional flow separation and ensuing vortical flows. The visualization and interpretation of the complicated fluid motions about isolated vehicle components and complete configurations in a time and cost effective manner in hydrodynamic test facilities is a key element in the development of flow control concepts, and, hence, improved vehicle designs. A historical perspective of the role of water facilities in the vehicle design process is presented. The application of water facilities to specific aerodynamic and hydrodynamic flow problems is discussed, and the strengths and limitations of these important experimental tools are emphasized.

  3. CFD Study of Full-Scale Aerobic Bioreactors: Evaluation of Dynamic O2 Distribution, Gas-Liquid Mass Transfer and Reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbird, David; Sitaraman, Hariswaran; Stickel, Jonathan

    If advanced biofuels are to measurably displace fossil fuels in the near term, they will have to operate at levels of scale, efficiency, and margin unprecedented in the current biotech industry. For aerobically-grown products in particular, scale-up is complex, and the practical size, cost, and operability of extremely large reactors are not well understood. Put simply, the problem of how to attain fuel-class production scales comes down to cost-effective delivery of oxygen at high mass transfer rates and low capital and operating costs. To that end, very large reactor vessels (>500 m3) are proposed in order to achieve favorable economies of scale. Additionally, techno-economic evaluation indicates that bubble-column reactors are more cost-effective than stirred-tank reactors in many low-viscosity cultures. In order to advance the design of extremely large aerobic bioreactors, we have performed computational fluid dynamics (CFD) simulations of bubble-column reactors. A multiphase Euler-Euler model is used to explicitly account for the spatial distribution of air (i.e., gas bubbles) in the reactor. Expanding on the existing bioreactor CFD literature (typically focused on the hydrodynamics of bubbly flows), our simulations include interphase mass transfer of oxygen and a simple phenomenological reaction representing the uptake and consumption of dissolved oxygen by submerged cells. The simulations reproduce the expected flow profiles, with net upward flow in the center of the column and downward flow near the wall. At high simulated oxygen uptake rates (OUR), oxygen-depleted regions can be observed in the reactor. By increasing the gas flow to enhance mixing and eliminate depleted areas, a maximum oxygen transfer rate (OTR) is obtained as a function of superficial velocity. These insights regarding minimum superficial velocity and maximum reactor size are incorporated into NREL's larger techno-economic models to supplement standard reactor design equations.

  4. Outsourcing and scheduling for a two-machine flow shop with release times

    NASA Astrophysics Data System (ADS)

    Ahmadizar, Fardin; Amiri, Zeinab

    2018-03-01

    This article addresses a two-machine flow shop scheduling problem where jobs are released intermittently and outsourcing is allowed. The first operations of outsourced jobs are processed by the first subcontractor, they are transported in batches to the second subcontractor for processing their second operations, and finally they are transported back to the manufacturer. The objective is to select a subset of jobs to be outsourced, to schedule both the in-house and the outsourced jobs, and to determine a transportation plan for the outsourced jobs so as to minimize the sum of the makespan and the outsourcing and transportation costs. Two mathematical models of the problem and several necessary optimality conditions are presented. A solution approach is then proposed by incorporating the dominance properties with an ant colony algorithm. Finally, computational experiments are conducted to evaluate the performance of the models and solution approach.
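    For reference, the classical two-machine flow shop without release times or outsourcing is solved exactly by Johnson's rule; a sketch is shown below as a baseline for the richer problem the article studies. The instance used in the test is illustrative.

```python
def johnson_two_machine(p1, p2):
    """Johnson's rule: a makespan-optimal sequence for the classical
    two-machine flow shop. Jobs faster on machine 1 go first in
    increasing p1 order; the rest go last in decreasing p2 order."""
    jobs = range(len(p1))
    front = sorted((j for j in jobs if p1[j] <= p2[j]), key=lambda j: p1[j])
    back = sorted((j for j in jobs if p1[j] > p2[j]),
                  key=lambda j: p2[j], reverse=True)
    return front + back

def makespan(seq, p1, p2):
    """Completion time of the last job on machine 2 for a sequence."""
    t1 = t2 = 0
    for j in seq:
        t1 += p1[j]                 # machine 1 processes jobs back to back
        t2 = max(t2, t1) + p2[j]    # machine 2 waits for machine 1 if idle
    return t2
```

    The article's problem adds release times, batch transportation, and an outsourcing decision on top of this structure, which is why an exact rule no longer applies and an ant colony algorithm is used.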

  5. An immersed boundary method for modeling dirty geometry data

    NASA Astrophysics Data System (ADS)

    Onishi, Keiji; Tsubokura, Makoto

    2017-11-01

    We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection and by an approximate-domain assumption that satisfies mass conservation around wall-containing cells. The methodology has been verified against analytical theory and wind tunnel experiment data. We then simulate flow around a rotating object and demonstrate the applicability of the methodology to moving-geometry problems. The methodology offers a promising route to obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.

  6. Influence of Hall Effect on Magnetic Control of Stagnation Point Heat Transfer

    NASA Astrophysics Data System (ADS)

    Poggie, Jonathan; Gaitonde, Datta

    2001-11-01

    Electromagnetic control is an appealing possibility for mitigating the thermal loads that occur in hypersonic flight. There was extensive research on this technique in the past (up to about 1970), but enthusiasm waned because of problems of system cost and weight. Renewed interest has arisen recently due to developments in the technology of superconducting magnets and in the understanding of the physics of weakly-ionized, non-equilibrium plasmas. A problem of particular interest is the reduction of stagnation point heating during atmospheric entry by magnetic deceleration of the flow in the shock layer. For the case of hypersonic flow over a sphere, a reduction in heat flux has been observed with the application of a dipole magnetic field (Poggie and Gaitonde, AIAA Paper 2001-0196). The Hall effect has a detrimental influence on this control scheme, tending to rotate the current vector out of the circumferential direction and to reduce the impact of the applied magnetic field on the fluid. In the present work we re-examine this problem by using modern computational methods to simulate flow past a hemispherical-nosed vehicle in which an axially oriented magnetic dipole has been placed. The deleterious effects of the Hall current are characterized, and are observed to diminish when the surface of the vehicle is conducting.

  7. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Li, Weixuan; Zeng, Lingzao

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested with numerical cases of unsaturated flow models. It is shown that RAPCKF is more accurate than EnKF at the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable in strongly nonlinear and high dimensional problems.
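    The stochastic EnKF analysis step that PCKF and RAPCKF are compared against can be sketched as follows. This is a generic textbook update with a linear observation operator, not the paper's hydrological implementation; all names are illustrative.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_var, seed=0):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_ens) prior members; H: (n_obs, n_state)
    linear observation operator; obs: (n_obs,) observation vector;
    obs_var: scalar observation-error variance."""
    rng = np.random.default_rng(seed)
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)             # obs-space anomalies
    P_xy = X @ HXp.T / (n_ens - 1)                        # state-obs covariance
    P_yy = HXp @ HXp.T / (n_ens - 1) + obs_var * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
    # perturbed observations, one realization per member
    Y = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n_ens))
    return ensemble + K @ (Y - HX)
```

    With a precise observation, the updated ensemble mean is pulled close to the observed value, which is the behaviour the sampling-error discussion in the abstract refers to.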

  8. Cell transmission model of dynamic assignment for urban rail transit networks.

    PubMed

    Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian

    2017-01-01

    For an urban rail transit network, the space-time flow distribution can play an important role in evaluating and optimizing the space-time resource allocation. To obtain the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, the cell transmission model is built for urban rail transit networks. The priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. Then an efficient method is designed to solve the shortest path for an urban rail network, which decreases the computing cost of solving the cell transmission model. The instantaneous dynamic user optimal state can be reached with the method of successive averages. Many evaluation indexes of passenger flow can be generated, providing effective support for the optimization of train schedules and the capacity evaluation of urban rail transit networks. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
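    The shortest-path subroutine that dominates the computing cost can be illustrated with a standard Dijkstra implementation on an adjacency-list network; the paper's specialised method is more efficient for rail networks, so this is only the generic baseline, with an illustrative toy network in the test.

```python
import heapq

def shortest_path(adj, source, target):
    """Dijkstra's algorithm on an adjacency dict
    {node: [(neighbour, cost), ...]}. Returns (cost, path)."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == target:                 # reconstruct path back to source
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []
```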

  9. The use of hydrogen for aircraft propulsion in view of the fuel crisis

    NASA Technical Reports Server (NTRS)

    Weiss, S.

    1973-01-01

    Some factors influencing the technical feasibility of operating a liquid hydrogen-fueled airplane are discussed in light of the projected decrease of fossil fuels. Other sources of energy, such as wind, tidal, solar, and geothermal, are briefly mentioned. In view of projected decreases in available petroleum fuels, interest has been generated in exploiting the potential of liquid hydrogen (LH2) as an aircraft fuel. Cost studies of LH2 production show it to be more expensive than presently used fuels. Regardless of cost considerations, LH2 is viewed as an attractive aircraft fuel because of the potential performance benefits it offers. Accompanying these benefits, however, are many new problems associated with aircraft design and operations; for example, problems related to fuel system design and the handling of LH2 during ground servicing. Some of the factors influencing LH2 fuel tank design, pumping, heat exchange, and flow regulation are discussed.

  10. Strategic optimisation of microgrid by evolving a unitised regenerative fuel cell system operational criterion

    NASA Astrophysics Data System (ADS)

    Bhansali, Gaurav; Singh, Bhanu Pratap; Kumar, Rajesh

    2016-09-01

    In this paper, the problem of microgrid optimisation with storage is addressed in a broader way, rather than being confined to loss minimisation. Unitised regenerative fuel cell (URFC) systems have been studied and employed in microgrids to store energy and feed it back into the system when required. A value function dependent on line losses, URFC system operational cost, and stored energy at the end of the day is defined here. The function is highly complex, nonlinear and multidimensional in nature. Therefore, heuristic optimisation techniques in combination with load flow analysis are used to resolve the network and time-domain complexity of the problem. Particle swarm optimisation with the forward/backward sweep algorithm ensures optimal operation of the microgrid, thereby minimising its operational cost. Results are shown and are found to improve consistently with the evolution of the solution strategy.
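    A minimal global-best particle swarm optimiser of the kind referred to can be sketched as follows. The forward/backward sweep load flow that evaluates the value function is not reproduced here, so a simple test function stands in for it; all parameter choices are illustrative defaults.

```python
import random

def pso_minimize(f, bounds, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO over a box. Returns (best_position, best_value)."""
    rng = random.Random(seed)
    d = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * d for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for k in range(d):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][k] = (w * vel[i][k]
                             + c1 * rng.random() * (pbest[i][k] - pos[i][k])
                             + c2 * rng.random() * (gbest[k] - pos[i][k]))
                pos[i][k] = min(max(pos[i][k] + vel[i][k],
                                    bounds[k][0]), bounds[k][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

    In the paper's setting, `f` would wrap a load-flow evaluation of the value function rather than an analytic test function.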

  11. Multichannel quench-flow microreactor chip for parallel reaction monitoring.

    PubMed

    Bula, Wojciech P; Verboom, Willem; Reinhoudt, David N; Gardeniers, Han J G E

    2007-12-01

    This paper describes a multichannel silicon-glass microreactor which has been utilized to investigate the kinetics of a Knoevenagel condensation reaction under different reaction conditions. The reaction is performed on the chip in four parallel channels under identical conditions but with different residence times. A special topology of the reaction coils overcomes the common problem arising from the difference in pressure drop of parallel channels having different length. The parallelization of reaction coils combined with chemical quenching at specific locations results in a considerable reduction in experimental effort and cost. The system was tested and showed good reproducibility in flow properties and reaction kinetic data generation.

  12. A discontinuous control volume finite element method for multi-phase flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Salinas, P.; Pavlidis, D.; Xie, Z.; Osman, H.; Pain, C. C.; Jackson, M. D.

    2018-01-01

    We present a new, high-order, control-volume-finite-element (CVFE) method for multiphase porous media flow with a discontinuous first-order representation for pressure and a discontinuous second-order representation for velocity. The method has been implemented using unstructured tetrahedral meshes to discretize space. The method locally and globally conserves mass. However, unlike conventional CVFE formulations, the method presented here does not require the use of control volumes (CVs) that span the boundaries between domains with differing material properties. We demonstrate that the approach accurately preserves discontinuous saturation changes caused by permeability variations across such boundaries, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at significantly lower computational cost than using conventional CVFE methods. We resolve a long-standing problem associated with the use of classical CVFE methods to model flow in highly heterogeneous porous media.

  13. High-resolution method for evolving complex interface networks

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

    In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method, while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.

  14. A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method

    NASA Astrophysics Data System (ADS)

    Zhan, Lei; Xiong, Juntao; Liu, Feng

    2016-05-01

    The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.

  15. Productivity improvement using industrial engineering tools

    NASA Astrophysics Data System (ADS)

    Salaam, H. A.; How, S. B.; Faisae, M. F.

    2012-09-01

    Minimizing the number of defects is important to any company, since it influences outputs and profits. The aim of this paper is to study the implementation of industrial engineering tools in a company manufacturing recycled paper boxes. The study starts with reading the standard operating procedures and analyzing the process flow to understand how the paper boxes are manufactured. At the same time, observations at the production line were made to identify problems occurring there. Using a check sheet, defect data from each station were collected and analyzed with a Pareto chart. From the chart, the glue workstation shows the highest number of defects. Based on observation at the glue workstation, the existing method used to glue the boxes was inappropriate because the operator used a large amount of glue. Then, using a cause and effect diagram, the root cause of the problem was identified and solutions to overcome the problem were proposed. Three solutions were proposed, and the cost reduction of each was calculated. The best solution is to use three hair driers to dry the sticky glue, which produces only 6.4 defects per hour at a cost of RM 0.0224.
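    The Pareto analysis of the check-sheet data can be sketched as below: rank defect categories by count and report the cumulative share, which is what a Pareto chart plots. The station names and counts in the test are illustrative, not the study's data.

```python
def pareto_analysis(defect_counts):
    """Rank defect categories by count (descending) and compute the
    cumulative percentage of total defects, as in a Pareto chart.
    Returns a list of (category, count, cumulative_percent) tuples."""
    total = sum(defect_counts.values())
    ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
    cum = 0
    rows = []
    for category, count in ranked:
        cum += count
        rows.append((category, count, round(100.0 * cum / total, 1)))
    return rows
```

    The first rows of the output identify the "vital few" stations, such as the glue workstation in this study, on which improvement effort should be focused.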

  16. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    USGS Publications Warehouse

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
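    The core equidistribution idea behind such moving mesh methods can be illustrated in one dimension: relocate nodes so each cell carries an equal integral of a monitor function, concentrating points near a sharp front. This static sketch omits the time-dependent MMPDE coupling described in the paper; the monitor function in the test is an illustrative sharp bump, not one of the paper's.

```python
import numpy as np

def equidistribute(x, monitor):
    """One static equidistribution pass for a 1-D mesh `x`.
    `monitor` maps positions to positive weights; the returned mesh
    places nodes so each cell holds equal monitor 'mass'."""
    M = monitor(0.5 * (x[:-1] + x[1:]))                      # cell midpoints
    I = np.concatenate([[0.0], np.cumsum(M * np.diff(x))])   # cumulative mass
    targets = np.linspace(0.0, I[-1], len(x))                # equal increments
    return np.interp(targets, I, x)                          # invert I(x)
```

    Applying this with a monitor peaked at x = 0.5 pulls nodes toward the front while keeping the endpoints fixed, which is the qualitative behaviour the abstract describes.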

  17. Solving Connected Subgraph Problems in Wildlife Conservation

    NASA Astrophysics Data System (ADS)

    Dilkina, Bistra; Gomes, Carla P.

    We investigate mathematical formulations and solution techniques for a variant of the Connected Subgraph Problem. Given a connected graph with costs and profits associated with the nodes, the goal is to find a connected subgraph that contains a subset of distinguished vertices. In this work we focus on the budget-constrained version, where we maximize the total profit of the nodes in the subgraph subject to a budget constraint on the total cost. We propose several mixed-integer formulations for enforcing the subgraph connectivity requirement, which plays a key role in the combinatorial structure of the problem. We show that a new formulation based on subtour elimination constraints is more effective at capturing the combinatorial structure of the problem, providing significant advantages over the previously considered encoding which was based on a single commodity flow. We test our formulations on synthetic instances as well as on real-world instances of an important problem in environmental conservation concerning the design of wildlife corridors. Our encoding results in a much tighter LP relaxation, and more importantly, it results in finding better integer feasible solutions as well as much better upper bounds on the objective (often proving optimality or within less than 1% of optimality), both when considering the synthetic instances as well as the real-world wildlife corridor instances.
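    The budget-constrained objective can be stated concretely with a brute-force solver for tiny instances; the paper of course solves realistic instances with mixed-integer programming (subtour elimination constraints), so this sketch only fixes the problem definition. All names are illustrative.

```python
from itertools import combinations

def best_connected_subgraph(nodes, edges, profit, cost, budget):
    """Enumerate all node subsets (tiny instances only!) and return the
    connected subset of maximum total profit whose total cost is within
    the budget, as (node_set, profit)."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def connected(S):
        seen, stack = set(), [next(iter(S))]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend((adj[u] & S) - seen)
        return seen == S

    best, best_p = None, float("-inf")
    for r in range(1, len(nodes) + 1):
        for combo in combinations(nodes, r):
            S = set(combo)
            if sum(cost[v] for v in S) <= budget and connected(S):
                p = sum(profit[v] for v in S)
                if p > best_p:
                    best, best_p = S, p
    return best, best_p
```

    The exponential enumeration is exactly what the MIP formulations avoid; the connectivity check here plays the role of the subtour elimination constraints.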

  18. 2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation

    NASA Astrophysics Data System (ADS)

    Proctor, Camron Lisle

    The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard-of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, a verification of the efficacy of the inverse-design tool for the inviscid adjoint equations, as well as possible numerical implementation pitfalls, are discussed. The NACA0012 airfoil is used as the initial airfoil with the NACA16009 surface pressure distribution as the design target, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inverse-design results using the inviscid adjoint equations are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer. The relationship between limiting step-size and convergence in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate for the optimization in viscous problems, at a negligible increase in computational cost, but is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, also a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.

  19. Aeroelastic optimization methodology for viscous and turbulent flows

    NASA Astrophysics Data System (ADS)

    Barcelos Junior, Manuel Nascimento Dias

    2007-12-01

    In recent years, the development of faster computers and parallel processing has allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous and turbulent flows using high-fidelity analysis and sensitivity analysis techniques. Most of the research in aeroelastic optimization, for practical reasons, treats the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases where there is a strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, which is based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the aeroelastic and sensitivity analyses' linearized systems of equations. The methodologies developed in this work are tested and verified on realistic aeroelastic systems.

  20. Development and application of traffic flow information collecting and analysis system based on multi-type video

    NASA Astrophysics Data System (ADS)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

Nowadays, the intelligent transportation system (ITS) has become the new direction of transportation development. Traffic data, as a fundamental part of intelligent transportation systems, plays an increasingly crucial role. In recent years, video observation technology has been widely used in the field of traffic information collecting. Traffic flow information contained in video data has many advantages: it is comprehensive and can be stored for a long time. However, there are still many problems, such as low precision and high cost, in the process of collecting information. Aiming at these problems, this paper proposes a traffic target detection method with broad applicability. Based on three different ways of obtaining video data, namely aerial photography, fixed camera and handheld camera, we develop intelligent analysis software that extracts macroscopic and microscopic traffic flow information from the video, which can be used for traffic analysis and transportation planning. For road intersections, the system uses the frame difference method to extract traffic information; for freeway sections, the system uses the optical flow method to track vehicles. The system was applied in Nanjing, Jiangsu province, and the application shows that the system extracts different types of traffic flow information with high accuracy; it can meet the needs of traffic engineering observations and has good application prospects.
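The frame-difference step used for intersections can be sketched in a few lines. This is a minimal illustration with a synthetic pair of frames; the threshold value and the toy "vehicle" patch are assumptions, not values from the paper.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Detect moving regions by thresholding the absolute difference
    between two consecutive grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 where motion occurred

# Synthetic 8-bit frames: a "vehicle" patch moves right by 2 pixels
prev = np.zeros((6, 8), dtype=np.uint8)
curr = np.zeros((6, 8), dtype=np.uint8)
prev[2:4, 1:3] = 200
curr[2:4, 3:5] = 200
mask = frame_difference(prev, curr)
print(int(mask.sum()))  # 8 pixels flagged: old + new patch locations
```

Real deployments would add background maintenance and morphological filtering on top of this raw motion mask.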

  1. Haemodynamics of giant cerebral aneurysm: A comparison between the rigid-wall, one-way and two-way FSI models

    NASA Astrophysics Data System (ADS)

    Khe, A. K.; Cherevko, A. A.; Chupakhin, A. P.; Bobkova, M. S.; Krivoshapkin, A. L.; Orlov, K. Yu

    2016-06-01

In this paper a computer simulation of the blood flow in cerebral vessels with a giant saccular aneurysm at the bifurcation of the basilar artery is performed. The modelling is based on patient-specific clinical data (both flow domain geometry and boundary conditions for the inlets and outlets). The hydrodynamic and mechanical parameters are calculated in the frameworks of three models: the rigid-wall assumption, the one-way FSI approach, and the full (two-way) hydroelastic model. A comparison of the numerical solutions shows that mutual fluid-solid interaction can result in qualitative changes in the structure of the fluid flow. Other characteristics of the flow (pressure, stress, strain and displacement) qualitatively agree with each other across the different approaches. However, the quantitative comparison shows that accounting for the flow-vessel interaction, in general, decreases the absolute values of these parameters. Solving the full hydroelasticity problem gives a more detailed solution at the cost of greatly increased computational time.

  2. Stability Analysis of Algebraic Reconstruction for Immersed Boundary Methods with Application in Flow and Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yousefzadeh, M.; Battiato, I.

    2017-12-01

Flow and reactive transport problems in porous media often involve complex geometries with stationary or evolving boundaries due to absorption and dissolution processes. Grid-based methods (e.g. finite volume, finite element, etc.) are a vital tool for studying these problems. Yet, implementing these methods requires one first to answer the question of what type of grid is to be used. Among the possible answers, Cartesian grids are one of the most attractive options, as they possess a simple discretization stencil and are usually straightforward to generate at essentially no computational cost. The Immersed Boundary Method (IBM), a Cartesian-grid-based methodology, maintains most of the useful features of structured grids while exhibiting a high level of resilience in dealing with complex geometries. These features make it increasingly attractive for modelling transport in evolving porous media, as the cost of grid generation is greatly reduced. Yet, stability issues and severe time-step restrictions due to explicit-time implementations, combined with limited studies on the implementation of Neumann (constant flux) and linear and non-linear Robin (e.g. reaction) boundary conditions (BCs), have significantly limited the applicability of IBMs to transport in porous media. We have developed an implicit IBM capable of handling all types of BCs and addressed several numerical issues, including unconditional stability criteria, compactness and reduction of spurious oscillations near the immersed boundary. We tested the method for several transport and flow scenarios, including dissolution processes in porous media, and demonstrate its capabilities. Successful validation against both experimental and numerical data has been carried out.

  3. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne E.

    2013-01-01

    We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
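The core idea of the CGP method, solving the Poisson equation on a coarsened grid and interpolating the result back to the fine grid, can be sketched in one dimension. This is a simplified stand-in under assumed Dirichlet boundary conditions; the paper applies the idea to 2D/3D incompressible solvers with black-box Poisson solvers.

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 3-point Laplacian on n interior nodes (Dirichlet BCs)."""
    return (np.diag(np.full(n, -2.0)) +
            np.diag(np.ones(n - 1), 1) +
            np.diag(np.ones(n - 1), -1)) / h**2

def cgp_solve(f_fine, h_fine):
    """Coarse-grid projection sketch: restrict the source term to a
    grid twice as coarse, solve the Poisson problem there, and prolong
    the solution back to the fine nodes by linear interpolation."""
    f_coarse = f_fine[1::2]              # injection onto the coarse nodes
    h_coarse = 2.0 * h_fine
    n_c = f_coarse.size
    phi_c = np.linalg.solve(poisson_matrix(n_c, h_coarse), f_coarse)
    # prolong back, padding with the zero Dirichlet boundary values
    L = (f_fine.size + 1) * h_fine
    x_c = np.concatenate(([0.0], np.arange(1, n_c + 1) * h_coarse, [L]))
    x_f = np.arange(1, f_fine.size + 1) * h_fine
    return np.interp(x_f, x_c, np.concatenate(([0.0], phi_c, [0.0])))

# Verify on phi'' = f with exact solution phi = sin(pi x) on (0, 1)
n_f, h_f = 15, 1.0 / 16
x = np.arange(1, n_f + 1) * h_f
phi = cgp_solve(-(np.pi**2) * np.sin(np.pi * x), h_f)
print(np.max(np.abs(phi - np.sin(np.pi * x))) < 0.05)  # True
```

The fine grid pays only the cost of restriction, a small coarse solve, and interpolation, which is the source of the speedup the abstract reports.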

  4. Parallel processing in a host plus multiple array processor system for radar

    NASA Technical Reports Server (NTRS)

    Barkan, B. Z.

    1983-01-01

    Host plus multiple array processor architecture is demonstrated to yield a modular, fast, and cost-effective system for radar processing. Software methodology for programming such a system is developed. Parallel processing with pipelined data flow among the host, array processors, and discs is implemented. Theoretical analysis of performance is made and experimentally verified. The broad class of problems to which the architecture and methodology can be applied is indicated.

  5. Diablo 2.0: A modern DNS/LES code for the incompressible NSE leveraging new time-stepping and multigrid algorithms

    NASA Astrophysics Data System (ADS)

    Cavaglieri, Daniele; Bewley, Thomas; Mashayek, Ali

    2015-11-01

    We present a new code, Diablo 2.0, for the simulation of the incompressible NSE in channel and duct flows with strong grid stretching near walls. The code leverages the fractional step approach with a few twists. New low-storage IMEX (implicit-explicit) Runge-Kutta time-marching schemes are tested which are superior to the traditional and widely-used CN/RKW3 (Crank-Nicolson/Runge-Kutta-Wray) approach; the new schemes tested are L-stable in their implicit component, and offer improved overall order of accuracy and stability with, remarkably, similar computational cost and storage requirements. For duct flow simulations, our new code also introduces a new smoother for the multigrid solver for the pressure Poisson equation. The classic approach, involving alternating-direction zebra relaxation, is replaced by a new scheme, dubbed tweed relaxation, which achieves the same convergence rate with roughly half the computational cost. The code is then tested on the simulation of a shear flow instability in a duct, a classic problem in fluid mechanics which has been the object of extensive numerical modelling for its role as a canonical pathway to energetic turbulence in several fields of science and engineering.
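The implicit-explicit split underlying IMEX time marching can be illustrated with a first-order sketch: the stiff linear term is advanced implicitly while the nonlinear term stays explicit. The paper's schemes are higher-order low-storage IMEX Runge-Kutta methods; this toy only shows the principle, and the model problem is an assumption.

```python
def imex_euler(u0, lam, nonlin, dt, steps):
    """First-order IMEX step for u' = lam*u + N(u): solve
    u_{n+1} = u_n + dt*(lam*u_{n+1} + N(u_n)) for u_{n+1}."""
    u = u0
    for _ in range(steps):
        u = (u + dt * nonlin(u)) / (1.0 - dt * lam)  # implicit in lam*u
    return u

# Stiff linear decay plus a mild nonlinearity: stable even though
# dt*|lam| = 100 is far beyond the explicit stability limit.
u = imex_euler(1.0, -1000.0, lambda u: 0.1 * u * u, dt=0.1, steps=50)
print(abs(u) < 1e-3)  # True: the solution decays as it should
```

The same split is what lets wall-normal viscous terms on strongly stretched grids be treated implicitly without a crippling time-step restriction.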

  6. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

Optimal design of a water distribution network generally aims to minimize the capital cost of the investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but it may not be a promising solution from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures for the ability of a network to withstand failure scenarios. To improve the resilience of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the second objective. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two existing resilience indices and power efficiency are considered for the optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resilience indices.
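The normalize-then-weight combination of the two objectives can be sketched as follows. The function name, bounds and weight are illustrative assumptions, not the paper's formulation or values.

```python
def weighted_single_objective(cost, resilience, cost_bounds, res_bounds, w=0.5):
    """Combine capital cost (to minimize) and a resilience index (to
    maximize) into one normalized scalar for a single-objective solver
    such as differential evolution. Bounds are (min, max) estimates
    used to map each metric onto [0, 1]."""
    c_lo, c_hi = cost_bounds
    r_lo, r_hi = res_bounds
    cost_n = (cost - c_lo) / (c_hi - c_lo)        # 0 = cheapest design
    res_n = (r_hi - resilience) / (r_hi - r_lo)   # 0 = most resilient
    return w * cost_n + (1 - w) * res_n           # minimize this scalar

# Example: a $3.2M design with resilience index 0.45
score = weighted_single_objective(3.2e6, 0.45, (2e6, 6e6), (0.2, 0.8))
print(round(score, 4))
```

Normalizing both metrics onto [0, 1] before weighting is what makes the weighted sum meaningful when the objectives have distinct units and scales.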

  7. A priori and a posteriori analysis of the flow around a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Cimarelli, A.; Leonforte, A.; Franciolini, M.; De Angelis, E.; Angeli, D.; Crivellini, A.

    2017-11-01

The definition of a correct mesh resolution and modelling approach for the Large Eddy Simulation (LES) of the flow around a rectangular cylinder is recognized to be a rather elusive problem, as shown by the large scatter of LES results present in the literature. In the present work, we aim to assess this issue by performing an a priori analysis of Direct Numerical Simulation (DNS) data of the flow. This approach allows us to measure the ability of the LES field to reproduce the main flow features as a function of the resolution employed. Based on these results, we define a mesh resolution which balances the competing needs of reducing the computational costs and of adequately resolving the flow dynamics. The effectiveness of the proposed resolution criterion is then verified by means of an a posteriori analysis of actual LES data obtained with the implicit LES approach given by the numerical properties of the Discontinuous Galerkin spatial discretization technique. The present work represents a first step towards a best practice for LES of separating and reattaching flows.

  8. A novel anisotropic fast marching method and its application to blood flow computation in phase-contrast MRI.

    PubMed

    Schwenke, M; Hennemuth, A; Fischer, B; Friman, O

    2012-01-01

    Phase-contrast MRI (PC MRI) can be used to assess blood flow dynamics noninvasively inside the human body. The acquired images can be reconstructed into flow vector fields. Traditionally, streamlines can be computed based on the vector fields to visualize flow patterns and particle trajectories. The traditional methods may give a false impression of precision, as they do not consider the measurement uncertainty in the PC MRI images. In our prior work, we incorporated the uncertainty of the measurement into the computation of particle trajectories. As a major part of the contribution, a novel numerical scheme for solving the anisotropic Fast Marching problem is presented. A computing time comparison to state-of-the-art methods is conducted on artificial tensor fields. A visual comparison of healthy to pathological blood flow patterns is given. The comparison shows that the novel anisotropic Fast Marching solver outperforms previous schemes in terms of computing time. The visual comparison of flow patterns directly visualizes large deviations of pathological flow from healthy flow. The novel anisotropic Fast Marching solver efficiently resolves even strongly anisotropic path costs. The visualization method enables the user to assess the uncertainty of particle trajectories derived from PC MRI images.
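The notion of a direction-dependent (anisotropic) path cost can be conveyed with a discrete stand-in: Dijkstra's algorithm on a grid where the cost of each step is induced by a per-cell metric tensor. The paper's contribution is a true anisotropic Fast Marching scheme, which this sketch does not reproduce; the tensor field below is an invented example.

```python
import heapq
import numpy as np

def anisotropic_dijkstra(M, start):
    """Shortest arrival 'times' on a grid graph where stepping along the
    unit vector e from cell (i, j) costs sqrt(e^T M[i, j] e), with M a
    per-cell 2x2 metric tensor (e.g. derived from flow measurements)."""
    ny, nx = M.shape[:2]
    dist = np.full((ny, nx), np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue  # stale queue entry
        for di, dj in steps:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                e = np.array([di, dj], dtype=float)
                nd = d + np.sqrt(e @ M[i, j] @ e)
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist

# Metric that makes horizontal motion 4x cheaper than vertical motion
M = np.tile(np.diag([16.0, 1.0]), (5, 5, 1, 1))
d = anisotropic_dijkstra(M, (0, 0))
print(d[0, 4], d[4, 0])  # 4.0 horizontally vs 16.0 vertically
```

A Fast Marching solver replaces the graph-edge costs with a consistent discretization of the anisotropic eikonal equation, which avoids the grid-direction bias this stand-in exhibits.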

  9. End-to-End Information System design at the NASA Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Hooke, A. J.

    1978-01-01

    Recognizing a pressing need of the 1980s to optimize the two-way flow of information between a ground-based user and a remote space-based sensor, an end-to-end approach to the design of information systems has been adopted at the Jet Propulsion Laboratory. The objectives of this effort are to ensure that all flight projects adequately cope with information flow problems at an early stage of system design, and that cost-effective, multi-mission capabilities are developed when capital investments are made in supporting elements. The paper reviews the End-to-End Information System (EEIS) activity at the Laboratory, and notes the ties to the NASA End-to-End Data System program.

  10. CFD - Mature Technology?

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2005-01-01

    Over the past 30 years, numerical methods and simulation tools for fluid dynamic problems have advanced as a new discipline, namely, computational fluid dynamics (CFD). Although a wide spectrum of flow regimes are encountered in many areas of science and engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to a large demand for predicting the aerodynamic performance characteristics of flight vehicles, such as commercial, military, and space vehicles. As flow analysis is required to be more accurate and computationally efficient for both commercial and mission-oriented applications (such as those encountered in meteorology, aerospace vehicle development, general fluid engineering and biofluid analysis) CFD tools for engineering become increasingly important for predicting safety, performance and cost. This paper presents the author's perspective on the maturity of CFD, especially from an aerospace engineering point of view.

  11. Optimal Power Flow Pursuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea

This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable on low-cost microcontrollers that accompany the power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method makes it possible to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.

  12. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centred at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error introduced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free-surface flows, and one for the post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that using the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
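The square-pattern splitting can be sketched as follows. The separation and smoothing-length ratios here are illustrative placeholders; the paper derives the optimal values by minimizing the kernel-gradient refinement error.

```python
import numpy as np

def refine_particle(pos, mass, h, alpha=0.5, eps=0.4):
    """Replace one SPH particle by four daughters on a square pattern
    centred at the parent position. `eps` scales the daughter separation
    and `alpha` the daughter smoothing length, both relative to the
    parent smoothing length h; mass is split equally so that total mass
    (and the centre of mass) is conserved."""
    offsets = eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
    daughters_pos = pos + offsets
    daughters_mass = np.full(4, mass / 4.0)
    daughters_h = alpha * h
    return daughters_pos, daughters_mass, daughters_h

pos_d, m_d, h_d = refine_particle(np.array([0.0, 0.0]), 1.0, 0.1)
print(m_d.sum())  # 1.0: total mass conserved
```

In a full implementation the refinement is triggered locally (e.g. near a free surface or a failure zone), which is what makes the scheme "dynamic".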

  13. Mask cost of ownership for advanced lithography

    NASA Astrophysics Data System (ADS)

    Muzio, Edward G.; Seidel, Philip K.

    2000-07-01

As technology advances, becoming more difficult and more expensive, the cost of ownership (CoO) metric becomes increasingly important in evaluating technical strategies. The International SEMATECH CoO analysis has steadily gained visibility over the past year, as it attempts to level the playing field between technology choices and create a fair relative comparison. In order to predict mask costs for advanced lithography, mask process flows are modeled using best-known processing strategies, equipment costs, and yields. Using a newly revised yield model and updated mask manufacturing flows, representative mask flows can be built. These flows are then used to calculate mask costs for advanced lithography down to the 50 nm node. It is never the goal of this type of work to provide absolute cost estimates for business planning purposes. However, the combination of a quantifiable yield model with a clearly defined set of mask processing flows, and a cost model based upon them, serves as an excellent starting point for cost driver analysis and process flow discussion.
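The basic arithmetic of a process-flow cost roll-up can be sketched as a toy: each step adds cost, and the cumulative yield discounts the number of good masks produced. The step costs and yields below are invented for illustration, not SEMATECH data.

```python
def mask_cost(step_costs, step_yields):
    """Toy cost-of-ownership roll-up for a mask process flow: total
    spending divided by the fraction of masks that survive every step,
    i.e. the cost per *good* mask."""
    total_cost, cum_yield = 0.0, 1.0
    for cost, y in zip(step_costs, step_yields):
        total_cost += cost
        cum_yield *= y          # yields compound multiplicatively
    return total_cost / cum_yield

# Three hypothetical steps (write, etch, inspect/repair)
cpm = mask_cost([5000.0, 12000.0, 8000.0], [0.98, 0.90, 0.95])
print(round(cpm, 2))  # noticeably above the raw $25,000 spend
```

The multiplicative yield term is why a small per-step yield loss at an advanced node dominates the cost drivers the abstract refers to.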

  14. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
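The telescoping-sum structure shared by MLMC-type estimators can be sketched on a toy problem: many cheap low-resolution samples plus a few samples of the small-variance level-to-level corrections. The rMLMC rescaling itself is not reproduced here, and the toy quantity of interest is an assumption.

```python
import numpy as np

def mlmc_estimate(sampler, n_samples, rng):
    """Multilevel Monte Carlo telescoping sum
    E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}].
    `sampler(level, rng)` returns the pair (Q_level, Q_{level-1})
    evaluated on the SAME random input, so the corrections have small
    variance and need few samples."""
    est = 0.0
    for level, n in enumerate(n_samples):
        corr = [f - c for f, c in (sampler(level, rng) for _ in range(n))]
        est += float(np.mean(corr))
    return est

# Toy problem: estimate E[u^2] for u ~ U(0,1) (exact value 1/3), where
# "resolution" at level l is a piecewise-constant quantization of u on
# 2**(l+1) cells -- a stand-in for a solver at mesh level l.
def sampler(level, rng):
    u = rng.random()
    q = lambda l: (np.floor(u * 2 ** (l + 1)) / 2 ** (l + 1)) ** 2
    return q(level), (q(level - 1) if level > 0 else 0.0)

rng = np.random.default_rng(1)
est = mlmc_estimate(sampler, [4000, 2000, 1000, 500, 250, 120], rng)
print(round(est, 3))  # close to 1/3
```

Note how the sample counts shrink with level: almost all of the work is done at the cheap low-fidelity level, which is the source of the cost reduction over standard Monte Carlo.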

  15. Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System

    NASA Astrophysics Data System (ADS)

    Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng

    2013-11-01

Focusing on the problems in the process of simulation and experiment on a parafoil nonlinear dynamic system, such as limited methods, high cost and low efficiency, we present a semi-physical simulation platform. It is designed by connecting parts of the physical objects to a computer, and remedies the drawback that a pure computer simulation is entirely divorced from the real environment. The main components of the platform and its functions, as well as the simulation flow, are introduced. The feasibility and validity are verified through a simulation experiment. The experimental results show that the platform is significant for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle and saving costs.

  16. Development and validation of chemistry agnostic flow battery cost performance model and application to nonaqueous electrolyte systems: Chemistry agnostic flow battery cost performance model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Alasdair; Thomsen, Edwin; Reed, David

    2016-04-20

A chemistry-agnostic cost performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate, using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using 4 kW stack data at various current densities and flow rates. This model is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, and flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify the components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100/kWh for the storage system is identified.

  17. Development of a High-Order Space-Time Matrix-Free Adjoint Solver

    NASA Technical Reports Server (NTRS)

    Ceze, Marco A.; Diosady, Laslo T.; Murman, Scott M.

    2016-01-01

The growth in computational power and algorithm development in the past few decades has granted the science and engineering community the ability to simulate flows over complex geometries, thus making Computational Fluid Dynamics (CFD) tools indispensable in analysis and design. Currently, one of the pacing items limiting the utility of CFD for general problems is the prediction of unsteady turbulent flows. Reynolds-averaged Navier-Stokes (RANS) methods, which predict a time-invariant mean flowfield, struggle to provide consistent predictions when encountering even mild separation, such as the side-of-body separation at a wing-body junction. NASA's Transformative Tools and Technologies project is developing both numerical methods and physical modeling approaches to improve the prediction of separated flows. A major focus of this effort is efficient methods for resolving the unsteady fluctuations occurring in these flows to provide valuable engineering data of the time-accurate flow field for buffet analysis, vortex shedding, etc. This approach encompasses unsteady RANS (URANS), large-eddy simulations (LES), and hybrid LES-RANS approaches such as Detached Eddy Simulations (DES). These unsteady approaches are inherently more expensive than traditional engineering RANS approaches, hence every effort to mitigate this cost must be leveraged. Arguably, the most cost-effective approach to improve the efficiency of unsteady methods is the optimal placement of the spatial and temporal degrees of freedom (DOF) using solution-adaptive methods.

  18. Acoustic Communications Considerations for Collaborative Simultaneous Localization and Mapping

    DTIC Science & Technology

    2014-12-01

    combat. The United States experienced this problem first hand on April 14, 1988, when the USS Samuel B. Roberts (FFG 58) struck a mine in the Persian...Strait of Hormuz where the USS Samuel B. Roberts was operating. The low cost and advancing technology of naval mines makes them particularly well...access and ensure the free flow of maritime trade in the global commons. The present means of mine countermeasures largely reside on surface ships and

  19. Stream habitat analysis using the instream flow incremental methodology

    USGS Publications Warehouse

    Bovee, Ken D.; Lamb, Berton L.; Bartholow, John M.; Stalnaker, Clair B.; Taylor, Jonathan; Henriksen, Jim

    1998-01-01

This document describes the Instream Flow Incremental Methodology (IFIM) in its entirety. It also serves as a comprehensive introductory textbook on IFIM for training courses, as it contains the most complete and comprehensive description of IFIM in existence today. It should also serve as the official published guide to IFIM, counteracting the misconceptions about the methodology that have pervaded the professional literature since the mid-1980's, as it describes IFIM as envisioned by its developers. The document is aimed at the decisionmakers in the management and allocation of natural resources, providing them an overview, and at those who design and implement studies to inform the decisionmakers. There is enough background on model concepts, data requirements, calibration techniques, and quality assurance to help the technical user design and implement a cost-effective application of IFIM that will provide policy-relevant information. Individual chapters cover the basic organization of IFIM and the procedural sequence of applying IFIM, starting with problem identification, then study planning and implementation, and finally problem resolution.

  20. Large-eddy simulations of turbulent flow for grid-to-rod fretting in nuclear reactors

    DOE PAGES

    Bakosi, J.; Christon, M. A.; Lowrie, R. B.; ...

    2013-07-12

The grid-to-rod fretting (GTRF) problem in pressurized water reactors is a flow-induced vibration problem that results in wear and failure of the fuel rods in nuclear assemblies. In order to understand the fluid dynamics of GTRF and to build an archival database of turbulence statistics for various configurations, implicit large-eddy simulations of time-dependent single-phase turbulent flow have been performed in 3 × 3 and 5 × 5 rod bundles with a single grid spacer. To assess the computational mesh and resolution requirements, a method for quantitative assessment of unstructured meshes with no-slip walls is described. The calculations have been carried out using Hydra-TH, a thermal-hydraulics code developed at Los Alamos for the Consortium for Advanced Simulation of Light Water Reactors, a United States Department of Energy Innovation Hub. Hydra-TH uses a second-order implicit incremental projection method to solve the single-phase incompressible Navier-Stokes equations. The simulations explicitly resolve the large-scale motions of the turbulent flow field using first principles and rely on a monotonicity-preserving numerical technique to represent the unresolved scales. Each series of simulations for the 3 × 3 and 5 × 5 rod-bundle geometries is an analysis of the flow field statistics combined with a mesh-refinement study and validation with available experimental data. Our primary focus is the time history and statistics of the forces loading the fuel rods. These hydrodynamic forces are believed to be the key player in rod vibration and GTRF wear, one of the leading causes of leaking nuclear fuel, which costs power utilities millions of dollars in preventive measures. As a result, we demonstrate that implicit large-eddy simulation of rod-bundle flows is a viable way to calculate the excitation forces for the GTRF problem.

  1. Cost and performance model for redox flow batteries

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vilayanur; Crawford, Alasdair; Stephenson, David; Kim, Soowhan; Wang, Wei; Li, Bin; Coffey, Greg; Thomsen, Ed; Graff, Gordon; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2014-02-01

A cost model is developed for all-vanadium and iron-vanadium redox flow batteries. Electrochemical performance modeling is done to estimate stack performance at various power densities as a function of state of charge and operating conditions. This is supplemented with a shunt current model and a pumping loss model to estimate actual system efficiency. The operating parameters such as power density and flow rates, and design parameters such as electrode aspect ratio and flow frame channel dimensions, are adjusted to maximize efficiency and minimize capital costs. Detailed cost estimates are obtained from various vendors to calculate cost estimates for present, near-term and optimistic scenarios. The most cost-effective chemistries with optimum operating conditions for power- or energy-intensive applications are determined, providing a roadmap for battery management systems development for redox flow batteries. The main drivers for cost reduction for various chemistries are identified as a function of the energy to power ratio of the storage system. Levelized cost analysis further guides the suitability of various chemistries for different applications.
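A levelized-cost calculation of the kind used to compare storage chemistries can be sketched as follows: discounted capital cost divided by discounted lifetime energy throughput. All parameter values below are illustrative assumptions, not the paper's estimates.

```python
def levelized_cost_of_storage(capital_per_kwh, energy_kwh, cycles_per_year,
                              years, round_trip_eff, discount_rate=0.08):
    """Rough levelized cost per kWh delivered over the system life:
    up-front capital divided by the discounted sum of annual delivered
    energy (throughput reduced by round-trip efficiency)."""
    capital = capital_per_kwh * energy_kwh
    delivered = 0.0
    for y in range(1, years + 1):
        delivered += (energy_kwh * cycles_per_year * round_trip_eff
                      / (1 + discount_rate) ** y)
    return capital / delivered

# Hypothetical system: $400/kWh capital, 1 MWh, 300 cycles/yr, 15 yr, 75% RTE
lcos = levelized_cost_of_storage(400.0, 1000.0, 300, 15, 0.75)
print(round(lcos, 3))  # $ per kWh delivered
```

A real levelized-cost model would also include pumping and shunt losses, replacement schedules, and O&M, which is where the chemistry-specific cost drivers enter.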

  2. Scalability and performance of data-parallel pressure-based multigrid methods for viscous flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blosch, E.L.; Shyy, W.

    1996-05-01

A full-approximation storage multigrid method for solving the steady-state 2-d incompressible Navier-Stokes equations on staggered grids has been implemented in Fortran on the CM-5, using the array aliasing feature in CM-Fortran to avoid declaring fine-grid-sized arrays on all levels while still allowing a variable number of grid levels. Thus, the storage cost scales with the number of unknowns, allowing us to consider significantly larger problems than would otherwise be possible. Timings over a range of problem sizes and numbers of processors, up to 4096 x 4096 on 512 nodes, show that the smoothing procedure, a pressure-correction technique, is scalable and that the restriction and prolongation steps are nearly so. The performance obtained for the multigrid method is 333 Mflops out of the theoretical peak 4 Gflops on a 32-node CM-5. In comparison, a single-grid computation obtained 420 Mflops. The decrease is due to the inefficiency of the smoothing iterations on the coarse grid levels. W cycles cost much more and are much less efficient than V cycles, due to the increased contribution from the coarse grids. The convergence rate characteristics of the pressure-correction multigrid method are investigated in a Re = 5000 lid-driven cavity flow and a Re = 300 symmetric backward-facing step flow, using either a defect-correction scheme or a second-order upwind scheme. A heuristic technique relating the convergence tolerances for the coarse grids to the truncation error of the discretization has been found effective and robust. With second-order upwinding on all grid levels, a 5-level 320 x 80 step flow solution was obtained in 20 V cycles, which corresponds to a smoothing rate of 0.7, and required 25 s on a 32-node CM-5. Overall, the convergence rates obtained in the present work are comparable to the most competitive findings reported in the literature. 62 refs., 13 figs.

  3. Scalability and Performance of Data-Parallel Pressure-Based Multigrid Methods for Viscous Flows

    NASA Astrophysics Data System (ADS)

    Blosch, Edwin L.; Shyy, Wei

    1996-05-01

    A full-approximation storage multigrid method for solving the steady-state 2-d incompressible Navier-Stokes equations on staggered grids has been implemented in Fortran on the CM-5, using the array aliasing feature in CM-Fortran to avoid declaring fine-grid-sized arrays on all levels while still allowing a variable number of grid levels. Thus, the storage cost scales with the number of unknowns, allowing us to consider significantly larger problems than would otherwise be possible. Timings over a range of problem sizes and numbers of processors, up to 4096 × 4096 on 512 nodes, show that the smoothing procedure, a pressure-correction technique, is scalable and that the restriction and prolongation steps are nearly so. The performance obtained for the multigrid method is 333 Mflops out of the theoretical peak 4 Gflops on a 32-node CM-5. In comparison, a single-grid computation obtained 420 Mflops. The decrease is due to the inefficiency of the smoothing iterations on the coarse grid levels. W cycles cost much more and are much less efficient than V cycles, due to the increased contribution from the coarse grids. The convergence rate characteristics of the pressure-correction multigrid method are investigated in a Re = 5000 lid-driven cavity flow and a Re = 300 symmetric backward-facing step flow, using either a defect-correction scheme or a second-order upwind scheme. A heuristic technique relating the convergence tolerances for the coarse grids to the truncation error of the discretization has been found effective and robust. With second-order upwinding on all grid levels, a 5-level 320 × 80 step flow solution was obtained in 20 V cycles, which corresponds to a smoothing rate of 0.7, and required 25 s on a 32-node CM-5. Overall, the convergence rates obtained in the present work are comparable to the most competitive findings reported in the literature.
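
    The V-cycle structure described above (smooth, restrict the residual, correct from the coarse grid, smooth again) can be sketched for a 1-D Poisson model problem. This is an illustrative geometric-multigrid skeleton for a linear problem, not the paper's 2-D full-approximation storage scheme with pressure-correction smoothing:

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0/3.0):
    # Weighted-Jacobi sweeps on -u'' = f with homogeneous Dirichlet BCs.
    for _ in range(sweeps):
        u[1:-1] += omega * (0.5*(u[:-2] + u[2:] + h*h*f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def restrict(r):
    # Full weighting: fine grid (n points) -> coarse grid ((n-1)//2 + 1 points).
    rc = np.zeros((len(r) - 1)//2 + 1)
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    return rc

def prolong(e, n):
    # Linear interpolation of the coarse-grid correction to the fine grid.
    u = np.zeros(n)
    u[::2] = e
    u[1::2] = 0.5*(e[:-1] + e[1:])
    return u

def v_cycle(u, f, h):
    if len(u) <= 3:                              # coarsest grid: direct solve
        u[1] = 0.5*(h*h*f[1] + u[0] + u[2])
        return u
    u = smooth(u, f, h)                          # pre-smoothing
    ec = v_cycle(np.zeros((len(u)-1)//2 + 1),
                 restrict(residual(u, f, h)), 2.0*h)
    u += prolong(ec, len(u))                     # coarse-grid correction
    return smooth(u, f, h)                       # post-smoothing

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi*x)                   # exact solution: sin(pi*x)
u = np.zeros(n)
r0 = np.linalg.norm(residual(u, f, h))
for _ in range(10):
    u = v_cycle(u, f, h)
rate = (np.linalg.norm(residual(u, f, h)) / r0) ** 0.1   # per-cycle factor
```

As in the paper, the per-cycle convergence factor `rate` is the quantity of interest; for this smooth model problem it is far below the 0.7 reported for the second-order upwind step flow.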

  4. Flow and Noise Control: Review and Assessment of Future Directions

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Choudhari, Meelan M.; Joslin, Ronald D.

    2002-01-01

    Technologies for developing radically new aerovehicles that would combine quantum leaps in cost, safety, and performance benefits with environmental friendliness have appeared on the horizon. This report provides both an assessment of the current state-of-the-art in flow and noise control and a vision for the potential gains to be made, in terms of performance benefit for civil and military aircraft and a unique potential for noise reduction, via future advances in flow and noise technologies. This report outlines specific areas of research that will enable the breakthroughs necessary to bring this vision to reality. Recent developments in many topics within flow and noise control are reviewed. The flow control overview provides succinct summaries of various approaches for drag reduction and improved maneuvering. Both exterior and interior noise problems are examined, including dominant noise sources, physics of noise generation and propagation, and both established and proposed concepts for noise reduction. Synergy between flow and noise control is a focus, as is, more broadly, the need to pursue research in a more concurrent approach involving multiple disciplines. Also discussed are emerging technologies such as nanotechnology that may have a significant impact on the progress of flow and noise control.

  5. Reduced-order modelling for high-pressure transient flow of hydrogen-natural gas mixture

    NASA Astrophysics Data System (ADS)

    Agaie, Baba G.; Khan, Ilyas; Alshomrani, Ali Saleh; Alqahtani, Aisha M.

    2017-05-01

    In this paper the transient flow of a hydrogen compressed-natural gas (HCNG) mixture, also referred to as a hydrogen-natural gas mixture, in a pipeline is computed numerically using a reduced-order modelling technique. The study of transient conditions is important because pipeline flows are normally unsteady owing to the sudden opening and closing of control valves, yet most existing studies analyse only steady-state conditions. The mathematical model consists of a set of non-linear partial differential equations in conservation form. The objective of this paper is to improve the accuracy of predicted HCNG transient flow parameters using Reduced-Order Modelling (ROM). The ROM technique has been used successfully in single-gas and aerodynamic flow problems, but it has not previously been applied to gas mixtures. The study is based on the velocity change created by the operation of valves upstream and downstream of the pipeline. Results for the flow characteristics, namely the pressure, density, celerity and mass flux, are presented for variations of the mixing ratio and valve reaction and actuation times; the computational time advantage of the ROM is also demonstrated.
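
    The compression step at the heart of a POD-based reduced-order model of this kind can be sketched in a few lines: snapshots of the field are factored with the SVD and the flow is projected onto a handful of energetic modes. The snapshot data below is a synthetic two-mode field chosen for illustration, not the HCNG pipeline model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)                   # "spatial" grid
t = np.linspace(0.0, 1.0, 50)                    # snapshot times
# Synthetic snapshot matrix: two coherent modes plus small noise.
snapshots = (np.outer(np.sin(2*np.pi*x), np.cos(2*np.pi*t))
             + 0.3*np.outer(np.sin(4*np.pi*x), np.sin(6*np.pi*t))
             + 1e-3*rng.standard_normal((x.size, t.size)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                            # retain two energetic modes
basis = U[:, :r]                                 # spatial POD modes
coeffs = basis.T @ snapshots                     # temporal coefficients
reconstruction = basis @ coeffs

rel_err = (np.linalg.norm(snapshots - reconstruction)
           / np.linalg.norm(snapshots))
```

In a full ROM the governing equations are then Galerkin-projected onto `basis`, so time integration advances only `r` coefficients instead of the full state, which is the source of the computational-time advantage the abstract mentions.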

  6. A Finite Layer Formulation for Groundwater Flow to Horizontal Wells.

    PubMed

    Xu, Jin; Wang, Xudong

    2016-09-01

    A finite layer approach for the general problem of three-dimensional (3D) flow to horizontal wells in multilayered aquifer systems is presented, in which unconfined flow can be taken into account. The flow is approximated by combining a standard finite element discretization in the vertical direction with analytical techniques in the other spatial directions. Because only the vertical discretization is involved, the horizontal wells can be completely contained in one specific nodal plane without discretization. Moreover, due to the analytical eigenfunctions introduced in the formulation, the weighted residual equations can be decoupled, and the formulas for the global matrices and the flow vector corresponding to horizontal wells can be obtained explicitly. Consequently, the bandwidth of the global matrices and the computational cost arising from 3D analysis can be significantly reduced. Two comparisons to existing solutions are made to verify the validity of the formulation, including transient flow to horizontal wells in confined and unconfined aquifers. Furthermore, an additional numerical application to horizontal wells in three-layered systems is presented to demonstrate the applicability of the present method in modeling flow in more complex aquifer systems. © 2016, National Ground Water Association.

  7. Operationally Efficient Propulsion System Study (OEPSS) data book. Volume 1: Generic ground operations data

    NASA Technical Reports Server (NTRS)

    Byrd, Raymond J.

    1990-01-01

    This study was initiated to identify operations problems and cost drivers for current propulsion systems and to identify technology and design approaches to increase the operational efficiency and reduce operations costs for future propulsion systems. To provide readily usable data for the Advanced Launch System (ALS) program, the results of the Operationally Efficient Propulsion System Study (OEPSS) were organized into a series of OEPSS Data Books as follows: Volume 1, Generic Ground Operations Data; Volume 2, Ground Operations Problems; Volume 3, Operations Technology; Volume 4, OEPSS Design Concepts; and Volume 5, OEPSS Final Review Briefing, which summarizes the activities and results of the study. This volume presents ground processing data for a generic LOX/LH2 booster and core propulsion system based on current STS experience. The data presented include: top logic diagram, process flow, activities bar chart, loaded timelines, manpower requirements in terms of duration, headcount and skill mix per operations and maintenance instruction (OMI), and critical path tasks and durations.

  8. Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which accounts for two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market using stochastic optimization. Historical data of day-ahead hourly electric power consumption are used to provide forecast results with a forecasting error, which is represented by a chance constraint and formulated into a deterministic form with a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.
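
    The ADMM idea used in the second step — split the problem, alternate cheap local minimizations, and coordinate them through a scaled dual update — can be illustrated on a small lasso problem. This is a generic ADMM sketch with made-up data, not the paper's SOCP-relaxed optimal power flow:

```python
import numpy as np

def soft_threshold(v, k):
    # Proximal operator of k*||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # Solves min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z.
    n = A.shape[1]
    x, z, u = (np.zeros(n) for _ in range(3))
    AtA = A.T @ A + rho*np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho*(z - u))  # x-minimization
        z = soft_threshold(x + u, lam/rho)           # z-minimization
        u = u + x - z                                # scaled dual update
    return z

A = np.eye(5)
b = np.array([3.0, -2.0, 0.5, 0.0, 1.0])
z = admm_lasso(A, b, lam=1.0)
# For A = I the lasso solution is the shrinkage of b toward zero by lam.
```

The x- and z-steps only communicate through the shared variables, which is what makes the method distributable across feeders or agents.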

  9. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    NASA Astrophysics Data System (ADS)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions.
It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
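
    For intuition, the sample-based L2 optimal transport problem between two equal-size sample sets with uniform weights reduces to a linear assignment problem, which a generic solver handles directly. The sketch below (SciPy solver, synthetic translated point cloud) is this brute-force baseline, not the feature-function algorithms of the thesis:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 2))          # source sample set
Y = X + np.array([5.0, 0.0])              # target: source shifted by (5, 0)

# Pairwise squared-distance cost matrix and the optimal assignment
# (the discrete Monge problem between two equal-size sample sets).
C = ((X[:, None, :] - Y[None, :, :])**2).sum(axis=2)
rows, cols = linear_sum_assignment(C)
total_cost = C[rows, cols].sum()
```

For a pure translation the optimal map is the translation itself, so each point is matched to its own shifted copy and the total cost is the number of samples times the squared shift length.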

  10. The assembly and use of continuous flow systems for chemical synthesis.

    PubMed

    Britton, Joshua; Jamison, Timothy F

    2017-11-01

    The adoption of and opportunities in continuous flow synthesis ('flow chemistry') have increased significantly over the past several years. Continuous flow systems provide improved reaction safety and accelerated reaction kinetics, and have synthesised several active pharmaceutical ingredients in automated reconfigurable systems. Although continuous flow platforms are commercially available, systems constructed 'in-lab' provide researchers with a flexible, versatile, and cost-effective alternative. Herein, we describe the assembly and use of a modular continuous flow apparatus from readily available and affordable parts in as little as 30 min. Once assembled, the synthesis of a sulfonamide by reacting 4-chlorobenzenesulfonyl chloride with dibenzylamine in a single reactor coil with an in-line quench is presented. This example reaction offers the opportunity to learn several important skills including reactor construction, charging of a back-pressure regulator, assembly of stainless-steel syringes, assembly of a continuous flow system with multiple junctions, and yield determination. From our extensive experience of single-step and multistep continuous flow synthesis, we also describe solutions to commonly encountered technical problems such as precipitation of solids ('clogging') and reactor failure. Following this protocol, a nonspecialist can assemble a continuous flow system from reactor coils, syringes, pumps, in-line liquid-liquid separators, drying columns, back-pressure regulators, static mixers, and packed-bed reactors.
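
    When assembling such a system, the residence time in a reactor coil — coil volume divided by the total volumetric flow rate — sets the reaction time. A minimal helper, with illustrative tubing dimensions and flow rates rather than the protocol's exact values:

```python
import math

def coil_volume_mL(inner_diameter_mm, length_m):
    # Internal volume of round tubing; work in cm so cm^3 = mL.
    radius_cm = inner_diameter_mm / 20.0          # mm diameter -> cm radius
    return math.pi * radius_cm**2 * (length_m * 100.0)

def residence_time_min(coil_volume_mL, total_flow_mL_min):
    # Residence time = reactor volume / total volumetric flow rate.
    return coil_volume_mL / total_flow_mL_min

v_mL = coil_volume_mL(0.8, 5.0)       # e.g. 0.8 mm i.d., 5 m of tubing
t_min = residence_time_min(v_mL, 0.5)  # e.g. two pumps at 0.25 mL/min each
```

Doubling the combined pump rate halves the residence time, which is the usual first knob when a reaction over- or under-converts in flow.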

  11. Low-complexity stochastic modeling of wall-bounded shear flows

    NASA Astrophysics Data System (ADS)

    Zare, Armin

    Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles compromising their fuel-efficiency and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. 
We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation as well as capturing the effect of the slowly-varying base flow on streamwise streaks and Tollmien-Schlichting waves. In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds number independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
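
    A basic consistency condition behind such stochastically forced linear models: for a stable linear system x' = Ax + Bw driven by white noise w, the steady-state covariance X satisfies the Lyapunov equation A X + X Aᵀ + B Bᵀ = 0 — the relation the covariance-completion problems above must reconcile with data. A toy example, solved by Kronecker vectorization in plain NumPy:

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])           # stable: eigenvalues -1, -3
B = np.array([[1.0],
              [0.5]])                  # white-noise input channel
Q = B @ B.T

# Solve A X + X A^T = -Q via row-major vectorization:
# vec(A X) = (A kron I) vec(X),  vec(X A^T) = (I kron A) vec(X).
n = A.shape[0]
I = np.eye(n)
M = np.kron(A, I) + np.kron(I, A)
X = np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

res = A @ X + X @ A.T + Q              # should vanish at steady state
```

In the modeling framework described above, measured second-order statistics stand in for X, and the optimization searches over forcing models (the B BᵀQ term, colored in time) that make this equation hold.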

  12. Multi-agent systems design for aerospace applications

    NASA Astrophysics Data System (ADS)

    Waslander, Steven L.

    2007-12-01

    Engineering systems with independent decision makers are becoming increasingly prevalent and present many challenges in coordinating actions to achieve systems goals. In particular, this work investigates the applications of air traffic flow control and autonomous vehicles as motivation to define algorithms that allow agents to agree to safe, efficient and equitable solutions in a distributed manner. To ensure system requirements will be satisfied in practice, each method is evaluated for a specific model of agent behavior, be it cooperative or non-cooperative. The air traffic flow control problem is investigated from the point of view of the airlines, whose costs are directly affected by resource allocation decisions made by the Federal Aviation Administration in order to mitigate traffic disruptions caused by weather. Airlines are first modeled as cooperative, and a distributed algorithm is presented with various global cost metrics which balance efficient and equitable use of resources differently. Next, a competitive airline model is assumed and two market mechanisms are developed for allocating contested airspace resources. The resource market mechanism provides a solution for which convergence to an efficient solution can be guaranteed, and each airline will improve on the solution that would occur without its inclusion in the decision process. A lump-sum market is then introduced as an alternative mechanism, for which efficiency loss bounds exist if airlines attempt to manipulate prices. Initial convergence results for lump-sum markets are presented for simplified problems with a single resource. To validate these algorithms, two air traffic flow models are developed which extend previous techniques, the first a convenient convex model made possible by assuming constant velocity flow, and the second a more complex flow model with full inflow, velocity and rerouting control. 
Autonomous vehicle teams are envisaged for many applications including mobile sensing and search and rescue. To enable these high-level applications, multi-vehicle collision avoidance is solved using a cooperative, decentralized algorithm. For the development of coordination algorithms for autonomous vehicles, the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) is presented. This testbed provides significant advantages over other aerial testbeds due to its small size and low maintenance requirements.

  13. An efficient immersed boundary-lattice Boltzmann method for the hydrodynamic interaction of elastic filaments

    PubMed Central

    Tian, Fang-Bao; Luo, Haoxiang; Zhu, Luoding; Liao, James C.; Lu, Xi-Yun

    2012-01-01

    We have introduced a modified penalty approach into the flow-structure interaction solver that combines an immersed boundary method (IBM) and a multi-block lattice Boltzmann method (LBM) to model an incompressible flow and elastic boundaries with finite mass. The effect of the solid structure is handled by the IBM in which the stress exerted by the structure on the fluid is spread onto the collocated grid points near the boundary. The fluid motion is obtained by solving the discrete lattice Boltzmann equation. The inertial force of the thin solid structure is incorporated by connecting this structure through virtual springs to a ghost structure with the equivalent mass. This treatment ameliorates the numerical instability issue encountered in this type of problem. Thanks to the superior efficiency of the IBM and LBM, the overall method is extremely fast for a class of flow-structure interaction problems where details of flow patterns need to be resolved. Numerical examples, including those involving multiple solid bodies, are presented to verify the method and illustrate its efficiency. As an application of the present method, an elastic filament flapping in the Kármán gait and the entrainment regions near a cylinder is studied to model fish swimming in these regions. Significant drag reduction is found for the filament, and the result is consistent with the metabolic cost measured experimentally for the live fish. PMID:23564971

  14. An efficient immersed boundary-lattice Boltzmann method for the hydrodynamic interaction of elastic filaments

    NASA Astrophysics Data System (ADS)

    Tian, Fang-Bao; Luo, Haoxiang; Zhu, Luoding; Liao, James C.; Lu, Xi-Yun

    2011-08-01

    We have introduced a modified penalty approach into the flow-structure interaction solver that combines an immersed boundary method (IBM) and a multi-block lattice Boltzmann method (LBM) to model an incompressible flow and elastic boundaries with finite mass. The effect of the solid structure is handled by the IBM in which the stress exerted by the structure on the fluid is spread onto the collocated grid points near the boundary. The fluid motion is obtained by solving the discrete lattice Boltzmann equation. The inertial force of the thin solid structure is incorporated by connecting this structure through virtual springs to a ghost structure with the equivalent mass. This treatment ameliorates the numerical instability issue encountered in this type of problem. Thanks to the superior efficiency of the IBM and LBM, the overall method is extremely fast for a class of flow-structure interaction problems where details of flow patterns need to be resolved. Numerical examples, including those involving multiple solid bodies, are presented to verify the method and illustrate its efficiency. As an application of the present method, an elastic filament flapping in the Kármán gait and the entrainment regions near a cylinder is studied to model fish swimming in these regions. Significant drag reduction is found for the filament, and the result is consistent with the metabolic cost measured experimentally for the live fish.

  15. Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2016-11-01

    Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme into multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge of atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost relative to non-intrusive approaches such as Monte Carlo sampling and stochastic collocation. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
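
    The intrusive (Galerkin) polynomial chaos idea can be shown on a scalar model problem of our choosing, not the solver's stochastic Navier-Stokes system: for du/dt = -k u with k = k0 + k1·ξ and ξ a standard normal, expanding u in probabilists' Hermite polynomials and projecting yields a small coupled ODE system for the modes, whose first coefficient is the mean:

```python
import numpy as np
from math import factorial, exp

P = 6                                          # PCE modes He_0 .. He_5
nodes, w = np.polynomial.hermite_e.hermegauss(30)
w = w / np.sqrt(2.0*np.pi)                     # expectation under N(0,1)
psi = np.array([np.polynomial.hermite_e.hermeval(nodes, np.eye(P)[i])
                for i in range(P)])            # He_i at quadrature nodes
norms = np.array([factorial(i) for i in range(P)], dtype=float)  # E[He_i^2]

# Galerkin coupling of the xi*u term: L[i, j] = E[xi He_j He_i] / E[He_i^2]
L = np.array([[(w * nodes * psi[j] * psi[i]).sum() / norms[i]
               for j in range(P)] for i in range(P)])

k0, k1 = 1.0, 0.1
def rhs(u):                                    # coupled mode equations
    return -k0*u - k1*(L @ u)

u = np.zeros(P)
u[0] = 1.0                                     # deterministic IC: u(0, xi) = 1
dt, nsteps = 1e-3, 1000                        # integrate to t = 1 with RK4
for _ in range(nsteps):
    s1 = rhs(u); s2 = rhs(u + 0.5*dt*s1)
    s3 = rhs(u + 0.5*dt*s2); s4 = rhs(u + dt*s3)
    u = u + (dt/6.0)*(s1 + 2*s2 + 2*s3 + s4)

mean = u[0]                                    # exact: exp(-k0 + k1**2/2)
var = (u[1:]**2 * norms[1:]).sum()
```

One deterministic solve of the coupled system replaces the many independent samples a Monte Carlo estimate would need, which is the cost advantage the abstract refers to.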

  16. Field-sensitivity To Rheological Parameters

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2017-11-01

    We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid's intrinsic time scale λ, limiting viscosities η0 and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.

  17. Design and setup of intermittent-flow respirometry system for aquatic organisms.

    PubMed

    Svendsen, M B S; Bushnell, P G; Steffensen, J F

    2016-01-01

    Intermittent-flow respirometry is an experimental protocol for measuring oxygen consumption in aquatic organisms that utilizes the best features of closed (stop-flow) and flow-through respirometry while eliminating (or at least reducing) some of their inherent problems. By interspersing short periods of closed-chamber oxygen consumption measurements with regular flush periods, accurate oxygen uptake rate measurements can be made without the accumulation of waste products, particularly carbon dioxide, which may confound results. Automating the procedure with easily available hardware and software further reduces error by allowing many measurements to be made over long periods thereby minimizing animal stress due to acclimation issues. This paper describes some of the fundamental principles that need to be considered when designing and carrying out automated intermittent-flow respirometry (e.g. chamber size, flush rate, flush time, chamber mixing, measurement periods and temperature control). Finally, recent advances in oxygen probe technology and open source automation software will be discussed in the context of assembling relatively low cost and reliable measurement systems. © 2015 The Fisheries Society of the British Isles.
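
    The measurement step described above reduces to a simple calculation: during each closed phase, oxygen uptake is the slope of the O2 decline scaled by the effective respirometer volume (chamber minus animal) and normalized by body mass. The numbers below are synthetic and illustrative, not from the paper:

```python
import numpy as np

def mo2(time_h, o2_mg_per_L, chamber_L, animal_L, mass_kg):
    # Oxygen uptake rate (mg O2 / kg / h) from a closed-phase O2 decline.
    slope = np.polyfit(time_h, o2_mg_per_L, 1)[0]        # mg O2 / L / h
    return -slope * (chamber_L - animal_L) / mass_kg     # effective volume

t_h = np.linspace(0.0, 0.25, 16)                 # 15-min closed measurement
o2 = 8.0 - 1.2*t_h                               # synthetic linear decline
mo2_rate = mo2(t_h, o2, chamber_L=2.0, animal_L=0.1, mass_kg=0.1)
```

Automating this over many flush/measure cycles, and checking that each regression is cleanly linear, is exactly what the intermittent-flow software loop does.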

  18. The design of water markets when instream flows have value.

    PubMed

    Murphy, James J; Dinar, Ariel; Howitt, Richard E; Rassenti, Stephen J; Smith, Vernon L; Weinberg, Marca

    2009-02-01

    The main objective of this paper is to design and test a decentralized exchange mechanism that generates the location-specific pricing necessary to achieve efficient allocations in the presence of instream flow values. Although a market-oriented approach has the potential to improve upon traditional command and control regulations, questions remain about how these rights-based institutions can be implemented such that the potential gains from liberalized trade can be realized. This article uses laboratory experiments to test three different water market institutions designed to incorporate instream flow values into the allocation mechanism through active participation of an environmental trader. The smart, computer-coordinated market described herein offers the potential to significantly reduce coordination problems and transaction costs associated with finding mutually beneficial trades that satisfy environmental constraints. We find that direct environmental participation in the market can achieve highly efficient and stable outcomes, although the potential does exist for the environmental agent to influence outcomes.

  19. Discrete adjoint of fractional step Navier-Stokes solver in generalized coordinates

    NASA Astrophysics Data System (ADS)

    Wang, Mengze; Mons, Vincent; Zaki, Tamer

    2017-11-01

    Optimization and control in transitional and turbulent flows require evaluation of gradients of the flow state with respect to the problem parameters. Using adjoint approaches, these high-dimensional gradients can be evaluated with a similar computational cost as the forward Navier-Stokes simulations. The adjoint algorithm can be obtained by discretizing the continuous adjoint Navier-Stokes equations or by deriving the adjoint to the discretized Navier-Stokes equations directly. The latter algorithm is necessary when the forward-adjoint relations must be satisfied to machine precision. In this work, our forward model is the fractional step solution to the Navier-Stokes equations in generalized coordinates, proposed by Rosenfeld, Kwak & Vinokur. We derive the corresponding discrete adjoint equations. We also demonstrate the accuracy of the combined forward-adjoint model, and its application to unsteady wall-bounded flows. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542).
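
    The forward-adjoint consistency the abstract refers to can be seen on a toy steady linear model A u = f(θ) with output Q = gᵀu: the tangent gradient gᵀA⁻¹(df/dθ) and the adjoint gradient (A⁻ᵀg)ᵀ(df/dθ) agree to machine precision when the adjoint is derived from the discretized operator itself. The matrices here are random stand-ins, not a Navier-Stokes discretization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1*rng.standard_normal((n, n))   # nonsingular "Jacobian"
g = rng.standard_normal(n)                        # output weights, Q = g.u
dfdtheta = rng.standard_normal(n)                 # forcing sensitivity df/dtheta

grad_tangent = g @ np.linalg.solve(A, dfdtheta)   # one tangent (forward) solve
lam = np.linalg.solve(A.T, g)                     # one adjoint solve
grad_adjoint = lam @ dfdtheta
```

The payoff of the adjoint route is that one solve with Aᵀ yields the gradient with respect to arbitrarily many parameters, whereas the tangent route needs one solve per parameter.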

  20. A quiet flow Ludwieg tube for study of transition in compressible boundary layers: Design and feasibility

    NASA Technical Reports Server (NTRS)

    Schneider, Steven P.

    1991-01-01

    Laminar-turbulent transition in high speed boundary layers is a complicated problem which is still poorly understood, partly because of experimental ambiguities caused by operating in noisy wind tunnels. The NASA Langley experience with quiet tunnel design has been used to design a quiet flow tunnel which can be constructed less expensively. Fabrication techniques have been investigated, and inviscid, boundary layer, and stability computer codes have been adapted for use in the nozzle design. Construction of such a facility seems feasible, at a reasonable cost. Two facilities have been proposed: a large one, with a quiet flow region large enough to study the end of transition, and a smaller and less expensive one, capable of studying low Reynolds number issues such as receptivity. Funding for either facility remains to be obtained, although key facility elements have been obtained and are being integrated into the existing Purdue supersonic facilities.

  1. Toward Automatic Verification of Goal-Oriented Flow Simulations

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2014-01-01

    We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.

  2. Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Constantinescu, Emil M.

    2016-06-23

    Here, this paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
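    The implicit/explicit partitioning idea can be sketched on a toy linear ODE y' = λ_fast·y + λ_slow·y: treat the stiff ("acoustic-like") term implicitly and the nonstiff ("advective-like") term explicitly, so the step size is limited only by the slow scale. This is a first-order illustration of the principle, not the paper's high-order additive Runge-Kutta scheme or its characteristic-space flux splitting.

```python
import math

def imex_euler(y0, lam_fast, lam_slow, dt, n_steps):
    y = y0
    for _ in range(n_steps):
        # implicit Euler on the fast term, explicit Euler on the slow term:
        #   y_new = y + dt*(lam_fast*y_new + lam_slow*y)
        y = y * (1.0 + dt * lam_slow) / (1.0 - dt * lam_fast)
    return y

lam_fast, lam_slow = -1000.0, -1.0   # widely separated time scales
dt, T = 0.01, 1.0                    # dt >> 1/|lam_fast|: fully explicit Euler would blow up
y_num = imex_euler(1.0, lam_fast, lam_slow, dt, int(T / dt))
y_exact = math.exp((lam_fast + lam_slow) * T)
print(y_num, y_exact)   # both essentially zero; the partitioned scheme stays stable
```

With dt·|λ_fast| = 10, a fully explicit step would amplify the solution by a factor of about 9 per step, while the partitioned update damps it, which is the cost argument made in the abstract.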

  3. Improved Analytical Sensitivity of Lateral Flow Assay using Sponge for HBV Nucleic Acid Detection.

    PubMed

    Tang, Ruihua; Yang, Hui; Gong, Yan; Liu, Zhi; Li, XiuJun; Wen, Ting; Qu, ZhiGuo; Zhang, Sufeng; Mei, Qibing; Xu, Feng

    2017-05-02

    Hepatitis B virus (HBV) infection is a serious public health problem, which can be transmitted through various routes (e.g., blood donation) and cause hepatitis, liver cirrhosis and liver cancer. Hence, it is necessary to do diagnostic screening for high-risk HBV patients in these transmission routes. Nowadays, protein-based technologies have been used for HBV testing, which however involve the issues of large sample volume, antibody instability and poor specificity. Nucleic acid hybridization-based lateral flow assay (LFA) holds great potential to address these limitations due to its low-cost, rapid, and simple features, but the poor analytical sensitivity of LFA restricts its application. In this study, we developed a low-cost, simple and easy-to-use method to improve analytical sensitivity by integrating sponge shunt into LFA to decrease the fluid flow rate. The thickness, length and hydrophobicity of the sponge shunt were sequentially optimized, and achieved 10-fold signal enhancement in nucleic acid testing of HBV as compared to the unmodified LFA. The enhancement was further confirmed by using HBV clinical samples, where we achieved the detection limit of 10 3 copies/ml as compared to 10 4 copies/ml in unmodified LFA. The improved LFA holds great potential for diseases diagnostics, food safety control and environment monitoring at point-of-care.

  4. Chexal-Horowitz flow-accelerated corrosion model -- Parameters and influences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chexal, V.K.; Horowitz, J.S.

    1995-12-01

    Flow-accelerated corrosion (FAC) continues to cause problems in nuclear and fossil power plants. Thinning caused by FAC has led to many leaks and complete ruptures. These failures have required costly repairs and occasionally have caused lengthy shutdowns. To deal with FAC, utilities have instituted costly inspection and piping replacement programs. Typically, a nuclear unit will inspect about 100 large-bore piping components plus additional small-bore components during every refueling outage. To cope with FAC, a great deal of research and development has been performed to obtain a greater understanding of the phenomenon. Currently, there is general agreement on the mechanism of FAC. This understanding has led to the development of computer-based tools to assist utility engineers in dealing with this issue. In the United States, the most commonly used computer program to predict and control FAC is CHECWORKS™. This paper presents a description of the mechanism of FAC, and introduces the predictive algorithms used in CHECWORKS™. The parametric effects of water chemistry, materials, flow and geometry as predicted by CHECWORKS™ will then be discussed. These trends will be described and explained by reference to the corrosion mechanism. The remedial actions possible to reduce the rate of damage caused by FAC will also be discussed.

  5. Demonstration of a tool for automatic learning and re-use of knowledge in the activated sludge process.

    PubMed

    Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.

  6. A cost-effective methodology for the design of massively-parallel VLSI functional units

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Sriram, G.; Desouza, J.

    1993-01-01

    In this paper we propose a generalized methodology for the design of cost-effective massively-parallel VLSI Functional Units. This methodology is based on a technique of generating and reducing a massive bit-array on the mask-programmable PAcube VLSI array. This methodology unifies (maintains identical data flow and control) the execution of complex arithmetic functions on PAcube arrays. It is highly regular, expandable and uniform with respect to problem-size and wordlength, thereby reducing the communication complexity. The memory-functional unit interface is regular and expandable. Using this technique functional units of dedicated processors can be mask-programmed on the naked PAcube arrays, reducing the turn-around time. The production cost of such dedicated processors can be drastically reduced since the naked PAcube arrays can be mass-produced. Analysis of the performance of functional units designed by our method yields promising results.

  7. Multiobjective evolutionary optimization of water distribution systems: Exploiting diversity with infeasible solutions.

    PubMed

    Tanyimboh, Tiku T; Seyoum, Alemtsehay G

    2016-12-01

    This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is also investigated. The results reveal the optimization algorithm to be efficient, stable and robust. It found optimal and near-optimal solutions reliably and efficiently. The optimization problem, based on a real-world system, involved multiple variable-head supply nodes, 29 fire-fighting flows, extended-period simulation and multiple demand categories including water loss. The least-cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2% based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
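    The Pareto-dominance test that underlies this boundary-search idea is easy to sketch: infeasible designs stay in the population and are compared on (cost, constraint violation), so cheap-but-slightly-infeasible solutions survive near the feasibility boundary. The objective names below are illustrative, not the paper's water-network formulation.

```python
def dominates(a, b):
    """True if solution a = (cost, violation) Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(pop):
    """Keep only solutions not dominated by any other member of pop."""
    return [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]

# (pipe cost, pressure-deficit violation); violation == 0.0 means feasible
pop = [(100, 0.0), (80, 0.0), (60, 2.0), (70, 1.0), (90, 3.0)]
front = non_dominated(pop)
print(front)  # the cheapest feasible design plus non-dominated infeasible ones
```

Note that (60, 2.0) and (70, 1.0) are infeasible yet non-dominated; retaining them is what lets the search approach the least-cost feasible design from both sides of the constraint boundary.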

  8. A Discounted Cash Flow variant to detect the optimal amount of additional burdens in Public-Private Partnership transactions.

    PubMed

    Copiello, Sergio

    2016-01-01

    The Discounted Cash Flow method is a long-established tool for assessing the feasibility of investment projects, serving as the background that shapes a broad range of techniques, from Cost-Benefit Analysis to Life-Cycle Cost Analysis. Its rationale lies in the comparison of deferred values, only once they have been discounted back to the present. The DCF variant proposed here fits into a specific application field: it is well-suited to the evaluations required in order to structure equitable transactions under the umbrella of Public-Private Partnership.
    • The discount rate relies on the concept of expected return on equity, rather than on the weighted average cost of capital, although the latter is the most common reference within the scope of real estate investment valuation.
    • Given a feasible project, whose Net Present Value is more than satisfactory, we aim to identify the amount of additional burdens that could be charged to the project while keeping it economically viable.
    • The DCF variant essentially deals with an optimization problem, which can be solved by means of simple one-shot equations derived from financial mathematics, or through iterative calculations if additional constraints must be considered.

  9. Joint brain connectivity estimation from diffusion and functional MRI data

    NASA Astrophysics Data System (ADS)

    Chu, Shu-Hsien; Lenglet, Christophe; Parhi, Keshab K.

    2015-03-01

    Estimating brain wiring patterns is critical to better understand the brain organization and function. Anatomical brain connectivity models axonal pathways, while the functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of blood-oxygen-level dependent (BOLD) signal from functional MRI (fMRI) and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations. We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained for instance by independent component analysis (ICA) of fMRI data. 
The concept of information flow is introduced and used to model the propagation of information between GM areas through WM fiber bundles. The link capacity, i.e., ability to transfer information, is characterized by the relative strength of fiber bundles, e.g., fiber count gathered from the tractography of dMRI data. The node information demand is considered to be proportional to the correlation between neural activity at various cortical areas involved in a particular functional mode (e.g. visual, motor, etc.). These two properties lead to the link capacity and node demand constraints in the proposed model. Moreover, the information flow of a link cannot exceed the demand from either end node. This is captured by the feasibility constraints. Two different cost functions are considered in the optimization formulation in this paper. The first cost function, the reciprocal of fiber strength, represents the unit cost for information passing through the link. In the second cost function, a min-max (minimizing the maximal link load) approach is used to balance the usage of each link. Optimizing the first cost function selects the pathway with strongest fiber strength for information propagation. In the second case, the optimization procedure finds all the possible propagation pathways and allocates the flow proportionally to their strength. Additionally, a penalty term is incorporated with both the cost functions to capture the possible missing and weak anatomical connections. With this set of constraints and the proposed cost functions, solving the network optimization problem recovers missing and weak anatomical connections supported by the functional information and provides the functional-associated anatomical subnetworks. Feasibility is demonstrated using realistic diffusion and functional MRI phantom data. 
It is shown that the proposed model recovers the maximum number of true connections, with the fewest false connections, when compared with the connectivity derived from a joint probabilistic model using the expectation-maximization (EM) algorithm presented in a prior work. We also apply the proposed method to data provided by the Human Connectome Project (HCP).
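    The min-cost-flow analogy used above can be made concrete on a toy graph: edges are fiber bundles with a capacity (fiber strength) and a unit cost (its reciprocal), and "information" is routed between two grey-matter regions at least total cost. The following is a textbook successive-shortest-path algorithm, not the authors' multi-commodity formulation.

```python
def min_cost_flow(n, edges, s, t, demand):
    """edges: list of (u, v, capacity, cost). Returns the total cost of routing
    `demand` units from s to t (assumes the demand is routable)."""
    # residual adjacency lists: [to, residual_capacity, cost, index_of_reverse_edge]
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    total = 0
    while demand > 0:
        # Bellman-Ford shortest path in the residual graph (handles negative costs)
        dist = [float("inf")] * n
        dist[s], parent = 0, [None] * n
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], parent[v] = dist[u] + cost, (u, i)
        if dist[t] == float("inf"):
            raise ValueError("demand not routable")
        # push as much as possible along the cheapest augmenting path
        push, v = demand, t
        while v != s:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        total += push * dist[t]
        demand -= push
    return total

# 0 -> 1 -> 3 (strong, cheap bundle) and 0 -> 2 -> 3 (weaker, pricier bundle)
edges = [(0, 1, 2, 1), (1, 3, 2, 1), (0, 2, 2, 3), (2, 3, 2, 3)]
total_cost = min_cost_flow(4, edges, 0, 3, 3)
print(total_cost)  # 2 units via the cheap path (cost 4) + 1 via the other (cost 6) = 10
```

The capacity constraint forces the third unit of demand onto the weaker pathway, mirroring how the paper's model distributes functional "information" across parallel fiber bundles in proportion to their strength.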

  10. Dual-Use Partnership Addresses Performance Problems with "Y" Pattern Control Valves

    NASA Technical Reports Server (NTRS)

    Bailey, John W.

    2004-01-01

    A Dual-Use Cooperative Agreement between the Propulsion Test Directorate (PTD) at Stennis Space Center (SSC) and Oceaneering Reflange, Inc. of Houston, TX has produced an improved 'Y' pattern split-body control valve for use in the propulsion test facilities at Stennis Space Center. The split-body, or clamped bonnet technology, provides for a 'cleaner' valve design featuring enhanced performance and increased flow capacity with extended life expectancy. Other points addressed by the partnership include size, weight and costs. Overall size and weight of each valve will be reduced by 50% compared to valves currently in use at SSC. An initial procurement of two 10-inch valves will result in an overall cost reduction of 15%, or approximately $50,000 per valve.

  11. Construction and Application of a Refined Hospital Management Chain.

    PubMed

    Lihua, Yi

    2016-01-01

    Large-scale development was quite common in the later period of hospital industrialization in China. Today, Chinese hospital management faces such problems as service inefficiency, high human-resource costs, and a low rate of capital use. This study analyzes the refined management chain of Wuxi No.2 People's Hospital, which consists of six gears, namely "organizational structure, clinical practice, outpatient service, medical technology, and nursing care and logistics." The gears are based on "flat management system targets, chief of medical staff, centralized outpatient service, intensified medical examinations, vertical nursing management and socialized logistics." The core concepts of refined hospital management are optimizing the flow process, reducing waste, improving efficiency, saving costs, and treating good patient care as most important. Keywords: Hospital, Refined, Management chain

  12. Finite element analysis in fluids; Proceedings of the Seventh International Conference on Finite Element Methods in Flow Problems, University of Alabama, Huntsville, Apr. 3-7, 1989

    NASA Technical Reports Server (NTRS)

    Chung, T. J. (Editor); Karr, Gerald R. (Editor)

    1989-01-01

    Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.

  13. Comparative analysis for various redox flow batteries chemistries using a cost performance model

    NASA Astrophysics Data System (ADS)

    Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2015-10-01

    The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. Systems aspects such as shunt current losses, pumping losses and various flow patterns through electrodes are accounted for. The system cost minimizing objective function determines stack design by optimizing the state of charge operating range, along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of < $350 kWh⁻¹ for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to $160 kWh⁻¹ for a 4-h application, and to $100 kWh⁻¹ for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and guide future direction.

  14. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton’s method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. 
Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  15. Economic considerations in the use of inhaled anesthetic agents.

    PubMed

    Golembiewski, Julie

    2010-04-15

    To describe the components of and factors contributing to the costs of inhaled anesthesia, basis for quantifying and comparing these costs, and practical strategies for performing pharmacoeconomic analyses and reducing the costs of inhaled anesthetic agents. Inhaled anesthesia can be costly, and some of the variable costs, including fresh gas flow rates and vaporizer settings, are potential targets for cost savings. The use of a low fresh gas flow rate maximizes rebreathing of exhaled anesthetic gas and is less costly than a high flow rate, but it provides less control of the level of anesthesia. The minimum alveolar concentration (MAC) hour is a measure that can be used to compare the cost of inhaled anesthetic agents at various fresh gas flow rates. Anesthesia records provide a sense of patterns of inhaled anesthetic agent use, but the amount of detail can be limited. Cost savings have resulted from efforts to reduce the direct costs of inhaled anesthetic agents, but reductions in indirect costs through shortened times to patient recovery and discharge following the judicious use of these agents are more difficult to demonstrate. The patient case mix, fresh gas flow rates typically used during inhaled anesthesia, availability and location of vaporizers, and anesthesia care provider preferences and practices should be taken into consideration in pharmacoeconomic evaluations and recommendations for controlling the costs of inhaled anesthesia. Understanding factors that contribute to the costs of inhaled anesthesia and considering those factors in pharmacoeconomic analyses and recommendations for use of these agents can result in cost savings.
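    The fresh-gas-flow lever described above can be put into back-of-envelope numbers using the commonly cited approximation that liquid agent consumption (mL/h) ≈ 3 × delivered concentration (vol%) × fresh gas flow (L/min). The price and example settings below are illustrative assumptions, not figures from the article.

```python
def agent_cost_per_hour(conc_pct, fgf_l_min, price_per_ml):
    """Approximate hourly cost of an inhaled agent.
    Uses the rule of thumb: liquid mL/h ~= 3 * vol% * fresh gas flow (L/min)."""
    ml_per_hour = 3.0 * conc_pct * fgf_l_min
    return ml_per_hour * price_per_ml

price = 0.60                                   # assumed $/mL of liquid agent
high = agent_cost_per_hour(2.0, 6.0, price)    # 2% agent at a 6 L/min fresh gas flow
low = agent_cost_per_hour(2.0, 1.0, price)     # same 2% agent at a 1 L/min low flow
print(high, low)   # 21.6 vs 3.6 $/h: a sixfold saving from reducing fresh gas flow
```

The same delivered concentration costs six times less at low flow because exhaled agent is rebreathed rather than vented, which is the direct-cost trade-off (against reduced control of anesthetic depth) the abstract describes.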

  16. Optimizing basin-scale coupled water quantity and water quality management with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Engelund Holm, Peter; Trapp, Stefan; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2015-04-01

    Few studies address water quality in hydro-economic models, which often focus primarily on optimal allocation of water quantities. Water quality and water quantity are closely coupled, and optimal management with focus solely on either quantity or quality may cause large costs in terms of the other component. In this study, we couple water quality and water quantity in a joint hydro-economic catchment-scale optimization problem. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from water allocation, water curtailment and water treatment. The simple water quality module can handle conservative pollutants, first order depletion and non-linear reactions. For demonstration purposes, we model pollutant releases as biochemical oxygen demand (BOD) and use the Streeter-Phelps equation for oxygen deficit to compute the resulting minimum dissolved oxygen concentrations. Inelastic water demands, fixed water allocation curtailment costs and fixed wastewater treatment costs (before and after use) are estimated for the water users (agriculture, industry and domestic). If the BOD concentration exceeds a given user pollution threshold, the user will need to pay for pre-treatment of the water before use. Similarly, treatment of the return flow can reduce the BOD load to the river. A traditional SDP approach is used to solve one-step-ahead sub-problems for all combinations of discrete reservoir storage, Markov Chain inflow classes and monthly time steps. Pollution concentration nodes are introduced for each user group, and untreated return flow from the users contributes to increased BOD concentrations in the river. The pollutant concentrations in each node depend on multiple decision variables (allocation and wastewater treatment), rendering the objective function non-linear. Therefore, the pollution concentration decisions are outsourced to a genetic algorithm, which calls a linear program to determine the remainder of the decision variables. This hybrid formulation keeps the optimization problem computationally feasible and represents a flexible and customizable method. The method has been applied to the Ziya River basin, an economic hotspot located on the North China Plain in Northern China. The basin is subject to severe water scarcity, and the rivers are heavily polluted with wastewater and nutrients from diffuse sources. The coupled hydro-economic optimization model can be used to assess the costs of meeting additional constraints such as minimum water quality, or to prioritize investments in wastewater treatment facilities based on economic criteria.
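    The Streeter-Phelps computation used above to turn a BOD load into a minimum dissolved-oxygen concentration can be sketched directly. The rate constants, BOD load and saturation value below are illustrative assumptions, not the paper's calibrated Ziya River parameters.

```python
import math

def streeter_phelps_deficit(t, L0, D0, kd, ka):
    """Oxygen deficit (mg/L) at travel time t (days) for initial BOD L0 (mg/L),
    initial deficit D0, deoxygenation rate kd and reaeration rate ka (1/day)."""
    return ((kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t))
            + D0 * math.exp(-ka * t))

L0, D0, kd, ka, do_sat = 20.0, 1.0, 0.3, 0.7, 9.0   # assumed values
# critical (minimum-DO) travel time, from setting dD/dt = 0
tc = (1.0 / (ka - kd)) * math.log((ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0)))
min_do = do_sat - streeter_phelps_deficit(tc, L0, D0, kd, ka)
print(tc, min_do)   # location of the oxygen sag and the minimum dissolved oxygen
```

In the paper's setting, this minimum dissolved-oxygen value for each river reach is what a water-quality constraint would be checked against when pricing allocation and treatment decisions.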

  17. Quantifying uncertainty and computational complexity for pore-scale simulations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.

    2016-12-01

    Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. In practice, however, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely-resolved spatio-temporal scales, which further limits our data/sample collection. To address those challenges, we propose a novel framework based on generalized polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
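    A one-variable gPC surrogate can be sketched as follows: project the model onto probabilists' Hermite polynomials with Gauss-Hermite quadrature, then read the mean and variance directly off the coefficients instead of sampling. The "model" here is a toy f(x) = exp(x) with a standard-normal input, standing in for an expensive pore-scale simulator; it is an illustration of the gPC machinery, not the authors' multi-dimensional framework.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

model = np.exp                       # stand-in for the expensive simulator
order, nquad = 8, 40
x, w = H.hermegauss(nquad)           # nodes/weights for weight exp(-x^2/2)
w = w / w.sum()                      # normalize to the standard normal measure

# spectral projection: c_k = E[f(X) He_k(X)] / k!   (He_k has squared norm k!)
coeffs = np.array([
    np.sum(w * model(x) * H.hermeval(x, [0] * k + [1])) / math.factorial(k)
    for k in range(order + 1)
])

mean_pc = coeffs[0]                                  # E[f(X)] from the surrogate
var_pc = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
mean_exact = math.exp(0.5)                           # E[e^X] for X ~ N(0,1)
var_exact = math.exp(2.0) - math.exp(1.0)
print(mean_pc, var_pc)   # close to the exact lognormal moments
```

Nine coefficients (order 8) reproduce the exact moments to several digits, whereas a Monte Carlo estimate of comparable accuracy would need far more model evaluations, which is the cost argument the abstract makes.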

  18. Medical-device risk management and public safety: using cost-benefit as a measurement of effectiveness

    NASA Astrophysics Data System (ADS)

    Hughes, Allen A.

    1994-12-01

    Public safety can be enhanced through the development of a comprehensive medical-device risk management system. This can be accomplished through case studies using a framework that incorporates cost-benefit analysis in the evaluation of risk management attributes. This paper presents a framework for evaluating the risk management system for regulatory Class III medical devices. The framework consists of the following sixteen attributes of a comprehensive medical device risk management system: fault/failure analysis, premarket testing/clinical trials, post-approval studies, manufacturer sponsored hospital studies, product labeling, establishment inspections, problem reporting program, mandatory hospital reporting, medical literature surveillance, device/patient registries, device performance monitoring, returned product analysis, autopsy program, emergency treatment funds/interim compensation, product liability, and alternative compensation mechanisms. Review of performance histories for several medical devices can reveal the value of information for many attributes, and also the inter-dependencies of the attributes in generating risk information flow. Such an information flow network is presented as a starting point for enhancing medical device risk management by focusing on attributes with high net benefit values and potential to spur information dissemination.

  19. A numerical method for simulating the dynamics of 3D axisymmetric vesicles suspended in viscous flows

    NASA Astrophysics Data System (ADS)

    Veerapaneni, Shravan K.; Gueyffier, Denis; Biros, George; Zorin, Denis

    2009-10-01

    We extend [Shravan K. Veerapaneni, Denis Gueyffier, Denis Zorin, George Biros, A boundary integral method for simulating the dynamics of inextensible vesicles suspended in a viscous fluid in 2D, Journal of Computational Physics 228(7) (2009) 2334-2353] to the case of three-dimensional axisymmetric vesicles of spherical or toroidal topology immersed in viscous flows. Although the main components of the algorithm are similar in spirit to the 2D case—spectral approximation in space, semi-implicit time-stepping scheme—the main differences are that the bending and viscous force require new analysis, the linearization for the semi-implicit schemes must be rederived, a fully implicit scheme must be used for the toroidal topology to eliminate a CFL-type restriction, and a novel numerical scheme for the evaluation of the 3D Stokes single layer potential on an axisymmetric surface is necessary to speed up the calculations. By introducing these novel components, we obtain a time-stepping scheme that is experimentally unconditionally stable, has low cost per time step, and is third-order accurate in time. We present numerical results to analyze the cost and convergence rates of the scheme. To verify the solver, we compare it to a constrained variational approach to compute equilibrium shapes that does not involve interactions with a viscous fluid. To illustrate the applicability of the method, we consider a few vesicle-flow interaction problems: the sedimentation of a vesicle, and interactions of one and three vesicles with a background Poiseuille flow.

  20. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. 
Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
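    The randomized trace estimation mentioned above can be sketched with a Hutchinson estimator, which needs only matrix-vector products with the covariance operator; the matrix below is an illustrative stand-in, not the paper's PDE-based covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SPD matrix standing in for the covariance operator of the
# Gaussian approximation to the posterior at the MAP point.
n = 200
A = rng.standard_normal((n, n))
C = A @ A.T / n + np.eye(n)

def hutchinson_trace(matvec, n, n_samples=100, rng=rng):
    """Randomized trace estimate: E[z^T C z] = tr(C) for Rademacher z."""
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)   # only matrix-vector products are needed
    return total / n_samples

est = hutchinson_trace(lambda v: C @ v, n)
exact = np.trace(C)
print(abs(est - exact) / exact)  # relative error, typically a few percent
```

    In the PDE setting, each `matvec` corresponds to a pair of forward/adjoint solves, which is why the cost is measured in PDE solves rather than matrix entries.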

  1. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two-dimensional thin-layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy based on the finite element method and an elastic membrane representation of the computational domain is successfully tested; it circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double-throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.
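    The cost contrast behind the "brute force" remark can be illustrated on a toy response function (a stand-in for an aerodynamic response, not the paper's flow solver): finite-difference sensitivities need extra analyses per design variable, while analytic derivatives do not:

```python
# Toy contrast between finite-difference and analytic sensitivities.
def response(x):                 # stand-in for an aerodynamic response R(x)
    return x[0]**2 * x[1] + 3 * x[1]

def analytic_grad(x):            # hand-derived dR/dx (no extra analyses)
    return [2 * x[0] * x[1], x[0]**2 + 3]

def fd_grad(x, h=1e-6):          # central differences: 2 analyses per variable
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((response(xp) - response(xm)) / (2 * h))
    return g

x = [2.0, 5.0]
print(analytic_grad(x), fd_grad(x))  # the two gradients agree closely
```

    For a CFD solver, each `response` evaluation is a full flow solution, so the 2n solves of the finite-difference route quickly dominate the cost as the number of design variables n grows.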

  2. A Sarsa(λ)-based control model for real-time traffic light coordination.

    PubMed

    Zhou, Xiaoke; Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei

    2014-01-01

    Traffic problems often occur when traffic demand exceeds road capacity. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction seeks to obtain a larger traffic flow; in the process, junctions form coordination policies, as well as constraints on adjacent junctions, to maximize their own interests. A good traffic signal timing policy helps to solve the problem. However, because so many factors can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience prevents them from adapting to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a reinforcement learning based traffic control approach can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model obtains the traffic cost of each vehicle, which accounts for delay time, the number of waiting vehicles, and the integrated saturation, and learns from experience to determine optimal actions. The experimental results show an inspiring improvement in traffic control, indicating that the proposed model is capable of facilitating real-time dynamic traffic control.
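    The Sarsa(λ) update with eligibility traces can be sketched on a toy 5-state corridor MDP; the environment, reward, and all parameters below are illustrative stand-ins, not the paper's traffic model:

```python
import random

# Minimal Sarsa(lambda): actions 0 = left, 1 = right; reward 1 for
# reaching the rightmost state. Replacing traces are used for stability.
random.seed(1)
N, ACTIONS = 5, (0, 1)
alpha, gamma, lam, eps = 0.3, 0.9, 0.8, 0.2
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

def policy(s):
    if random.random() < eps:                       # epsilon-greedy
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(500):                                # episodes, random starts
    e = dict.fromkeys(Q, 0.0)                       # eligibility traces
    s = random.randrange(N - 1)
    a = policy(s)
    for _t in range(50):                            # cap episode length
        s2, r, done = step(s, a)
        a2 = policy(s2)
        delta = r + (0.0 if done else gamma * Q[(s2, a2)]) - Q[(s, a)]
        e[(s, a)] = 1.0                             # replacing trace
        for k in Q:                                 # spread TD error back
            Q[k] += alpha * delta * e[k]
            e[k] *= gamma * lam
        if done:
            break
        s, a = s2, a2

print(round(Q[(3, 1)], 2))  # value of moving toward the goal from state 3
```

    The trace decay γλ is what lets a single reward update every state-action pair along the recent trajectory, which is the mechanism the model relies on to propagate delayed traffic costs.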

  3. Component-cost and performance based comparison of flow and static batteries

    NASA Astrophysics Data System (ADS)

    Hopkins, Brandon J.; Smith, Kyle C.; Slocum, Alexander H.; Chiang, Yet-Ming

    2015-10-01

    Flow batteries are a promising grid-storage technology that is scalable, inherently flexible in power/energy ratio, and potentially low cost in comparison to conventional or "static" battery architectures. Recent advances in flow chemistries are enabling significantly higher energy density flow electrodes. When the same battery chemistry can arguably be used in either a flow or static electrode design, the relative merits of either design choice become of interest. Here, we analyze the costs of the electrochemically active stack for both architectures under the constraint of constant energy efficiency and charge and discharge rates, using as case studies the aqueous vanadium-redox chemistry, widely used in conventional flow batteries, and aqueous lithium-iron-phosphate (LFP)/lithium-titanium-phosphate (LTP) suspensions, an example of a higher energy density suspension-based electrode. It is found that although flow batteries always have a cost advantage (per kWh) at the stack level modeled, the advantage is a strong function of flow electrode energy density. For the LFP/LTP case, the cost advantage decreases from ∼50% to ∼10% over experimentally reasonable ranges of suspension loading. Such results are important input for design choices when both battery architectures are viable options.

  4. Estimating the system price of redox flow batteries for grid storage

    NASA Astrophysics Data System (ADS)

    Ha, Seungbum; Gallagher, Kevin G.

    2015-11-01

    Low-cost energy storage systems are required to support extensive deployment of intermittent renewable energy on the electricity grid. Redox flow batteries have potential advantages to meet the stringent cost target for grid applications as compared to more traditional batteries based on an enclosed architecture. However, the manufacturing process and therefore potential high-volume production price of redox flow batteries is largely unquantified. We present a comprehensive assessment of a prospective production process for an aqueous all-vanadium flow battery and a nonaqueous lithium polysulfide flow battery. The estimated investment and variable costs are translated to fixed expenses, profit, and warranty as a function of production volume. When compared to lithium-ion batteries, redox flow batteries are estimated to exhibit lower costs of manufacture, here calculated as the unit price less materials costs, owing to their simpler reactor (cell) design, lower required area, and thus simpler manufacturing process. Redox flow batteries are also projected to achieve the majority of manufacturing scale benefits at lower production volumes as compared to lithium-ion. However, this advantage is offset due to the dramatically lower present production volume of flow batteries compared to competitive technologies such as lithium-ion.

  5. Corrective Control to Handle Forecast Uncertainty: A Chance Constrained Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roald, Line; Misra, Sidhant; Krause, Thilo

    Higher shares of electricity generation from renewable energy sources and market liberalization are increasing uncertainty in power systems operation. At the same time, operation is becoming more flexible with improved control systems and new technology such as phase shifting transformers (PSTs) and high voltage direct current connections (HVDC). Previous studies have shown that the use of corrective control in response to outages contributes to a reduction in operating cost, while maintaining N-1 security. In this work, we propose a method to extend the use of corrective control of PSTs and HVDCs to react to uncertainty. We characterize the uncertainty as continuous random variables, and define the corrective control actions through affine control policies. This allows us to efficiently model control reactions to a large number of uncertainty sources. The control policies are then included in a chance constrained optimal power flow formulation, which guarantees that the system constraints are enforced with a desired probability. Lastly, by applying an analytical reformulation of the chance constraints, we obtain a second-order cone problem for which we develop an efficient solution algorithm. In a case study for the IEEE 118 bus system, we show that corrective control for uncertainty leads to a decrease in operational cost, while maintaining system security. Further, we demonstrate the scalability of the method by solving the problem for the IEEE 300 bus and the Polish system test cases.
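    The analytical reformulation underlying such chance-constrained formulations can be checked on a small synthetic example (not the paper's OPF model): for Gaussian uncertainty, P(ξᵀx ≤ b) ≥ 1−ε is equivalent to the second-order cone constraint μᵀx + Φ⁻¹(1−ε)·‖Lᵀx‖ ≤ b, where Σ = LLᵀ:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])                    # mean of the uncertainty xi
L = np.array([[0.5, 0.0], [0.2, 0.3]])       # Sigma = L @ L.T
x = np.array([1.0, 1.0])                     # a fixed decision vector
eps = 0.05
z = NormalDist().inv_cdf(1 - eps)            # Phi^{-1}(0.95)

# Tightest feasible right-hand side according to the SOC reformulation:
b = mu @ x + z * np.linalg.norm(L.T @ x)

# Monte Carlo estimate of the satisfaction probability at that b:
xi = mu + rng.standard_normal((200_000, 2)) @ L.T
p_hat = np.mean(xi @ x <= b)
print(round(p_hat, 3))  # should be close to 0.95
```

    The deterministic cone constraint can then be handed to a standard conic solver, which is what makes the reformulated problem tractable at scale.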

  6. Optimizing model: insemination, replacement, seasonal production, and cash flow.

    PubMed

    DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A

    1992-03-01

    Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). Methodology optimized decisions based on net present value of an individual cow and all replacements over a 20-yr decision horizon. Length of decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine post-optimization steady state herd structure, milk production, replacement, feed inputs and costs, and resulting cash flow on a calendar month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of optimized solutions was by separate simulation model, which implemented policies on a simulated herd and also described herd dynamics during transition to optimized structure.
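    The keep-vs-replace structure of the Markov decision process can be sketched at toy scale (three production states with illustrative numbers, not the paper's 151,200-state model) using value iteration on the discounted Bellman equation:

```python
import numpy as np

# States: low/medium/high production. Each period: keep the cow
# (stochastic production transitions) or replace her at a cost, with the
# replacement entering at "medium" production. Numbers are illustrative.
P_keep = np.array([[0.7, 0.3, 0.0],      # low -> ...
                   [0.2, 0.6, 0.2],      # medium -> ...
                   [0.0, 0.3, 0.7]])     # high -> ...
revenue = np.array([1.0, 2.0, 3.0])      # net revenue per period by state
replace_cost = 2.5
beta = 0.9                               # per-period discount factor

V = np.zeros(3)
for _ in range(500):                     # value iteration to a fixed point
    keep = revenue + beta * P_keep @ V
    repl = -replace_cost + revenue[1] + beta * V[1]
    V = np.maximum(keep, repl)

policy = np.where(keep >= repl, "keep", "replace")
print(V.round(2), policy)  # replacement is optimal only in the low state
```

    With these numbers, the low-production state is worth replacing while medium and high producers are kept, which is the qualitative shape of the optimal policy the full model computes over parity, calving month, lactation stage, and days open.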

  7. Corrective Control to Handle Forecast Uncertainty: A Chance Constrained Optimal Power Flow

    DOE PAGES

    Roald, Line; Misra, Sidhant; Krause, Thilo; ...

    2016-08-25

    Higher shares of electricity generation from renewable energy sources and market liberalization are increasing uncertainty in power systems operation. At the same time, operation is becoming more flexible with improved control systems and new technology such as phase shifting transformers (PSTs) and high voltage direct current connections (HVDC). Previous studies have shown that the use of corrective control in response to outages contributes to a reduction in operating cost, while maintaining N-1 security. In this work, we propose a method to extend the use of corrective control of PSTs and HVDCs to react to uncertainty. We characterize the uncertainty as continuous random variables, and define the corrective control actions through affine control policies. This allows us to efficiently model control reactions to a large number of uncertainty sources. The control policies are then included in a chance constrained optimal power flow formulation, which guarantees that the system constraints are enforced with a desired probability. Lastly, by applying an analytical reformulation of the chance constraints, we obtain a second-order cone problem for which we develop an efficient solution algorithm. In a case study for the IEEE 118 bus system, we show that corrective control for uncertainty leads to a decrease in operational cost, while maintaining system security. Further, we demonstrate the scalability of the method by solving the problem for the IEEE 300 bus and the Polish system test cases.

  8. GPU accelerated study of heat transfer and fluid flow by lattice Boltzmann method on CUDA

    NASA Astrophysics Data System (ADS)

    Ren, Qinlong

    Lattice Boltzmann method (LBM) has been developed as a powerful numerical approach for simulating complex fluid flow and heat transfer phenomena over the past two decades. As a mesoscale method based on kinetic theory, LBM has several advantages over traditional numerical methods, such as the physical representation of microscopic interactions, the ability to handle complex geometries, and a highly parallel nature. The lattice Boltzmann method has been applied to various fluid behaviors and heat transfer processes, including conjugate heat transfer, magnetic and electric fields, diffusion and mixing, chemical reactions, multiphase flow, phase change, non-isothermal flow in porous media, microfluidics, and fluid-structure interactions in biological systems. In addition, as a non-body-conformal grid method, the immersed boundary method (IBM) can handle complex or moving geometries in the domain. The immersed boundary method can be coupled with the lattice Boltzmann method to study heat transfer and fluid flow problems: heat transfer and fluid flow are solved on Eulerian nodes by LBM, while complex solid geometries are captured by Lagrangian nodes using the immersed boundary method. Parallel computing has long been used to accelerate computation in engineering and scientific fields. Today, almost all laptops and desktops have central processing units (CPUs) with multiple cores that can be used for parallel computing. However, the cost of CPUs with hundreds of cores is still high, which limits high-performance computing on personal computers. Graphics processing units (GPUs), originally used in computer video cards, have emerged as powerful high-performance computing hardware in recent years. Unlike CPUs, GPUs with thousands of cores are inexpensive.
    For example, the GPU (GeForce GTX TITAN) used in the current work has 2,688 cores and costs only 1,000 US dollars. The release of NVIDIA's CUDA architecture, comprising both hardware and a programming environment, in 2007 made GPU computing attractive. Due to its highly parallel nature, the lattice Boltzmann method has been successfully ported to GPUs with significant performance benefits in recent years. In the current work, an LBM CUDA code is developed for different fluid flow and heat transfer problems. In this dissertation, the lattice Boltzmann method and immersed boundary method are used to study natural convection in an enclosure with an array of conducting obstacles, double-diffusive convection in a vertical cavity with Soret and Dufour effects, PCM melting in a latent heat thermal energy storage system with internal fins, mixed convection in a lid-driven cavity with a sinusoidal cylinder, and AC electrothermal pumping in microfluidic systems, all on a CUDA computational platform. It is demonstrated that LBM is an efficient method for simulating complex heat transfer problems using GPUs on CUDA.
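    The per-node operations that make LBM so amenable to GPU parallelization can be sketched in NumPy (a CPU version of the D2Q9 BGK collide-and-stream step; the grid, relaxation time, and initial condition are illustrative):

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
tau = 0.8                                          # BGK relaxation time
nx = ny = 32

def equilibrium(rho, u):
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
u[..., 0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)  # shear-wave IC
f = equilibrium(rho, u)

for _ in range(50):
    rho = f.sum(axis=-1)                           # macroscopic moments
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f += (equilibrium(rho, u) - f) / tau           # BGK collision
    for q in range(9):                             # periodic streaming
        f[..., q] = np.roll(f[..., q], shift=tuple(c[q]), axis=(0, 1))

print(round(f.sum(), 6))  # total mass is conserved (= nx*ny)
```

    Every node updates independently in the collision step and streaming is a fixed-stencil shift, which is exactly the structure a CUDA kernel maps onto one thread per lattice node.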

  9. Performance of a low cost interdigitated flow design on a 1 kW class all vanadium mixed acid redox flow battery

    NASA Astrophysics Data System (ADS)

    Reed, David; Thomsen, Edwin; Li, Bin; Wang, Wei; Nie, Zimin; Koeppel, Brian; Sprenkle, Vincent

    2016-02-01

    Three flow designs were operated in a 3-cell 1 kW class all vanadium mixed acid redox flow battery. The influence of electrode surface area and flow rate on the coulombic, voltage, and energy efficiency and the pressure drop in the flow circuit will be discussed and correlated to the flow design. Material cost associated with each flow design will also be discussed.

  10. Water Conservation Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Metzger, Jesse Dean

    2010-12-31

    This software requires inputs of simple water fixture inventory information and calculates the water/energy and cost benefits of various retrofit opportunities. This tool includes water conservation measures for: Low-flow Toilets, Low-flow Urinals, Low-flow Faucets, and Low-flow Showerheads. This tool calculates water savings, energy savings, demand reduction, cost savings, and building life cycle costs, including: simple payback, discounted payback, net present value, and savings-to-investment ratio. In addition, this tool displays the environmental benefits of a project.
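    The economics such a tool computes for one measure can be sketched as follows; all flow rates, usage profiles, and prices below are illustrative assumptions, not the tool's actual data:

```python
# Simple payback and NPV for a single low-flow faucet retrofit (assumed data).
gpm_old, gpm_new = 2.2, 1.5          # faucet flow rates (gallons per minute)
minutes_per_day, days = 60, 260      # assumed usage profile
water_cost = 0.005                   # $ per gallon (water + sewer, assumed)
install_cost = 45.0                  # $ per retrofit (assumed)
discount_rate, life = 0.03, 10       # discount rate and measure life (years)

annual_gallons = (gpm_old - gpm_new) * minutes_per_day * days
annual_savings = annual_gallons * water_cost
simple_payback = install_cost / annual_savings          # years
npv = -install_cost + sum(annual_savings / (1 + discount_rate)**t
                          for t in range(1, life + 1))
print(round(annual_savings, 2), round(simple_payback, 2), round(npv, 2))
```

    Discounted payback and the savings-to-investment ratio follow the same cash-flow series, discounting each year's savings before accumulating.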

  11. Pulsating electrolyte flow in a full vanadium redox battery

    NASA Astrophysics Data System (ADS)

    Ling, C. Y.; Cao, H.; Chng, M. L.; Han, M.; Birgersson, E.

    2015-10-01

    Proper management of electrolyte flow in a vanadium redox battery (VRB) is crucial to achieve high overall system efficiency. On one hand, constant flow reduces concentration polarization and thereby improves energy efficiency; on the other hand, it results in higher auxiliary pumping costs, which can consume around 10% of the discharge power. This work seeks to reduce the pumping cost by adopting a novel pulsing electrolyte flow strategy while retaining high energy efficiency. The results indicate that adopting a short flow period, followed by a long flow termination period, results in high energy efficiencies of 80.5% with a pumping cost reduction of over 50%.
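    The leverage of pulsing can be seen with a back-of-the-envelope duty-cycle calculation; the powers and pulse schedule below are assumptions for illustration, not the paper's measured values:

```python
# Pumping energy scales with the fraction of time the pump runs.
discharge_power = 100.0          # W, stack discharge power (assumed)
pump_power = 10.0                # W while running (~10% parasitic, per text)
flow_s, rest_s = 30.0, 90.0      # pulse schedule: 30 s on, 90 s off (assumed)

duty = flow_s / (flow_s + rest_s)
constant_pump_frac = pump_power / discharge_power
pulsed_pump_frac = duty * pump_power / discharge_power
reduction = 1 - pulsed_pump_frac / constant_pump_frac
print(f"pumping cost reduction: {reduction:.0%}")  # 75% with these numbers
```

    The engineering question the paper answers is how long the rest period can be made before concentration polarization erodes the energy efficiency gained.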

  12. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
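    The set-oriented machinery can be sketched in miniature: on a box partition of phase space, the transfer operator is a stochastic matrix, and a measure is propagated between perturbations by matrix-vector products. The 3-box chain below is illustrative, not the paper's fluid system:

```python
import numpy as np

# Column-stochastic transfer operator on a 3-box partition:
# column j records where the mass in box j goes in one step.
P = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
mu0 = np.array([1.0, 0.0, 0.0])          # initial measure on the partition

mu = mu0.copy()
for _ in range(50):                      # propagate under the dynamics
    mu = P @ mu

print(mu.round(3))  # approaches the invariant measure of P
```

    In the paper's setting, an optimal-transport perturbation is applied between such propagation steps, redistributing `mu` toward the target measure at minimal quadratic cost.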

  13. Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness

    NASA Astrophysics Data System (ADS)

    Julich, R. J.

    2004-05-01

    The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells to maximize important data collection and to minimize the cost of managing the network. We have employed a genetic algorithm (GA) toward this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic head observations whose net uncertainty is closest to the value representing all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model of the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: (1) what is the optimal design strategy for a genetic algorithm in this problem domain; (2) how consistent are the solutions over several optimization runs; and (3) how do these results compare with what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches.
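    The two-part fitness can be sketched with a toy GA over binary well-selection masks; the per-well "information" values, penalty weight, and GA settings are synthetic stand-ins, not the Death Valley model:

```python
import random

# Toy GA: reward subsets whose summed information stays close to the full
# network's, and penalize subset size (a proxy for monitoring cost).
random.seed(0)
N_WELLS, POP, GENS = 30, 40, 120
info = [random.random() for _ in range(N_WELLS)]   # per-well information proxy
total = sum(info)

def fitness(mask):
    kept = sum(v for v, m in zip(info, mask) if m)
    accuracy = -abs(total - kept) / total          # part 1: information retained
    cost = -0.01 * sum(mask)                       # part 2: penalty per well
    return accuracy + cost

pop = [[random.randint(0, 1) for _ in range(N_WELLS)] for _ in range(POP)]
pop[0] = [1] * N_WELLS                             # seed with the full network
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                     # elitist selection
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N_WELLS)
        child = a[:cut] + b[cut:]                  # one-point crossover
        i = random.randrange(N_WELLS)
        child[i] ^= random.random() < 0.1          # bit-flip mutation
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print(sum(best), round(fitness(best), 3))  # wells kept, and their fitness
```

    With the penalty weight above, the GA learns to drop only the wells whose information content is worth less than their cost, which is the trade-off the real network design makes.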

  14. Cost-Effectiveness Analysis of Nasal Continuous Positive Airway Pressure Versus Nasal High Flow Therapy as Primary Support for Infants Born Preterm.

    PubMed

    Huang, Li; Roberts, Calum T; Manley, Brett J; Owen, Louise S; Davis, Peter G; Dalziel, Kim M

    2018-05-01

    To compare the cost-effectiveness of 2 common "noninvasive" modes of respiratory support for infants born preterm. An economic evaluation was conducted as a component of a multicenter, randomized controlled trial from 2013 to 2015 enrolling infants born preterm at ≥28 weeks of gestation with respiratory distress, <24 hours old, who had not previously received endotracheal intubation and mechanical ventilation or surfactant. The economic evaluation was conducted from a healthcare sector perspective and the time horizon was from birth until death or first discharge. The cost-effectiveness of continuous positive airway pressure (CPAP) vs high-flow with "rescue" CPAP backup and high-flow without rescue CPAP backup (as sole primary support) were analyzed by using the hospital cost of inpatient stay in a tertiary center and the rates of endotracheal intubation and mechanical ventilation during admission. Hospital inpatient cost records for 435 infants enrolled in all Australian centers were obtained. With "rescue" CPAP backup, an incremental cost-effectiveness ratio was estimated of A$179 000 (US$123 000) per ventilation avoided if CPAP was used compared with high flow. Without rescue CPAP backup, cost per ventilation avoided was A$7000 (US$4800) if CPAP was used compared with high flow. As sole primary support, CPAP is highly likely to be cost-effective compared with high flow. Neonatal units choosing to use only one device should apply CPAP as primary respiratory support. Compared with high-flow with rescue CPAP backup, CPAP is unlikely to be cost-effective if willingness to pay per ventilation avoided is less than A$179 000 (US$123 000). Copyright © 2018 Elsevier Inc. All rights reserved.
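    The incremental cost-effectiveness ratio (ICER) behind these figures is a simple quotient; the input costs and intubation rates below are illustrative assumptions chosen only so the ratio reproduces the reported A$179,000, not the study's actual arm-level data:

```python
# ICER = incremental cost / incremental effect (ventilations avoided).
cost_cpap, cost_highflow = 60_000.0, 58_210.0      # mean cost per infant (A$, assumed)
vent_rate_cpap, vent_rate_highflow = 0.10, 0.11    # intubation rates (assumed)

icer = (cost_cpap - cost_highflow) / (vent_rate_highflow - vent_rate_cpap)
print(round(icer))  # A$ per ventilation avoided
```

    A therapy is then judged cost-effective when the decision maker's willingness to pay per ventilation avoided exceeds this ratio.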

  15. Application of Multi-Objective Human Learning Optimization Method to Solve AC/DC Multi-Objective Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Cao, Jia; Yan, Zheng; He, Guangyu

    2016-06-01

    This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). Firstly, the model of AC/DC MOPF including wind farms is constructed, which includes three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived, which involves an individual learning operator, a social learning operator, a random exploration learning operator, and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGAII) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to locate optimal solutions. Above all, the MOHLO method obtains a more complete Pareto front than the NSGAII method. However, the choice of a solution from the Pareto front depends mainly on whether the decision makers take the economic point of view or the energy saving and emission reduction point of view.
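    The non-dominated sorting idea shared by MOHLO and NSGAII can be sketched compactly: keep the solutions that no other solution dominates in all three objectives (all minimized; the numbers below are illustrative, not OPF results):

```python
# Pareto-front extraction over (cost, loss, emission), all to be minimized.
solutions = [
    {"cost": 100, "loss": 8, "emission": 30},
    {"cost": 120, "loss": 6, "emission": 28},
    {"cost": 110, "loss": 9, "emission": 35},   # dominated by the first
    {"cost": 140, "loss": 5, "emission": 40},
]

def dominates(a, b):
    keys = a.keys()
    return (all(a[k] <= b[k] for k in keys)
            and any(a[k] < b[k] for k in keys))

front = [s for s in solutions
         if not any(dominates(t, s) for t in solutions)]
print(len(front))  # size of the non-dominated (Pareto) front
```

    Crowding distance is then used within the front to prefer well-spread solutions, which is what makes the recovered front "complete" rather than clustered.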

  16. Multi-objective optimization of solid waste flows: environmentally sustainable strategies for municipalities.

    PubMed

    Minciardi, Riccardo; Paolucci, Massimo; Robba, Michela; Sacile, Roberto

    2008-11-01

    An approach to sustainable municipal solid waste (MSW) management is presented, with the aim of supporting the decision on the optimal flows of solid waste sent to landfill, recycling and different types of treatment plants, whose sizes are also decision variables. This problem is modeled with a non-linear, multi-objective formulation. Specifically, four objectives to be minimized have been taken into account, which are related to economic costs, unrecycled waste, sanitary landfill disposal and environmental impact (incinerator emissions). An interactive reference point procedure has been developed to support decision making; these methods are considered appropriate for multi-objective decision problems in environmental applications. In addition, interactive methods are generally preferred by decision makers as they can be directly involved in the various steps of the decision process. Some results deriving from the application of the proposed procedure are presented. The application of the procedure is exemplified by considering the interaction with two different decision makers who are assumed to be in charge of planning the MSW system in the municipality of Genova (Italy).

  17. Model of skin friction enhancement in undulatory swimming

    NASA Astrophysics Data System (ADS)

    Ehrenstein, Uwe; Eloy, Christophe

    2012-11-01

    To estimate the energetic cost of undulatory swimming, it is crucial to evaluate the drag forces originating from skin friction. This topic has been controversial for decades, with some claiming that animals use ingenious mechanisms to reduce the drag and others hypothesizing that the undulatory motion induces a drag increase because of the compression of the boundary layers. In this paper, we examine this latter hypothesis, known as the "Bone-Lighthill boundary-layer thinning hypothesis". Considering a plate of section s moving perpendicular to itself at velocity U⊥ and applying the boundary-layer approximation for the incoming flow, the drag force per unit surface is shown to scale as √(U⊥/s). An analogous two-dimensional Navier-Stokes problem, in which the flow is artificially accelerated in a channel of finite height, is solved numerically, showing the robustness of the analytical results. Solving the problem for an undulatory plate motion similar to fish swimming, we find a drag enhancement which can be estimated to be of the order of 20 to 100%, depending on the geometry and the motion. M.J. Lighthill, Proc. R. Soc. Lond. B 179, 125 (1971).
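    The scaling can be sketched with standard boundary-layer estimates (a sketch of the argument, not the paper's full derivation): the transverse motion sets a boundary-layer thickness δ from the section s and normal velocity U⊥, and the skin friction per unit area follows from the velocity gradient across it, with U the slip velocity along the plate and ν = μ/ρ:

```latex
\delta \sim \sqrt{\frac{\nu s}{U_\perp}}, \qquad
\tau \sim \mu\,\frac{U}{\delta}
     \sim \mu U \sqrt{\frac{U_\perp}{\nu s}}
     \;\propto\; \sqrt{\frac{U_\perp}{s}} .
```

    Thinning the boundary layer (larger U⊥, smaller s) thus directly increases the friction, which is the mechanism behind the 20 to 100% drag enhancement quoted above.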

  18. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve an optimization goal such as the shortest duration, the smallest cost, or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing and resource constraints. The problem is NP-hard, and its models are rich: many combinatorial optimization problems, such as job shop scheduling and flow shop scheduling, are special cases of RCPSP. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm; the parameters are generally chosen empirically, which cannot guarantee that they are optimal. In this paper, we address this blind selection of parameters in solving the RCPSP: we perform a sampling analysis, establish a proxy (surrogate) model, and ultimately solve for the optimal parameters.

  19. Impact of traffic composition on accessibility as indicator of transport sustainability

    NASA Astrophysics Data System (ADS)

    Nahdalina; Hadiwardoyo, S. P.; Nahry

    2017-05-01

    Sustainable transport is closely related to the community's quality of life, both now and in the future. Indicators of transport sustainability include accessibility between origins and destinations, the operating costs of transport (vehicle operating cost, or VOC), and external transportation costs (emission cost); these indicators can be combined into an accessibility measurement model. Moreover, most traffic congestion occurs under mixed-traffic conditions. This paper analyses indicators of transport sustainability through simulation under various traffic compositions. Truck shares of 0%, 10%, and 20% of total traffic flow are considered. Speed and the volume-to-capacity (V/C) ratio are calculated from the traffic flow to estimate the VOC and emission cost. Five VOC components and three types of emission cost (CO2, CH4, and N2O) are counted toward the travel cost. Accessibility is calculated using both travel-cost and gravity-model approaches. The results show that total traffic flow has only an indirect impact on accessibility when the travel-cost approach is used, whereas the composition of traffic flow affects accessibility when the gravity-model approach is used.
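    The gravity-model accessibility measure can be sketched on a synthetic three-zone example (zones, opportunity counts, generalized costs, and the decay parameter are all assumptions, not the paper's network): zone i's accessibility sums each destination's opportunities, discounted by the generalized travel cost c_ij:

```python
import math

opportunities = [500, 1200, 300]            # jobs etc. in zones A, B, C (assumed)
travel_cost = [                             # generalized cost c_ij
    [0.0, 2.0, 5.0],                        # (VOC + emission cost, assumed)
    [2.0, 0.0, 3.0],
    [5.0, 3.0, 0.0],
]
beta = 0.5                                  # cost-decay parameter (assumed)

# A_i = sum_j O_j * exp(-beta * c_ij)
access = [sum(o * math.exp(-beta * c)
              for o, c in zip(opportunities, row))
          for row in travel_cost]
print([round(a) for a in access])
```

    Because the generalized cost folds in VOC and emission costs, a higher truck share raises c_ij and lowers every zone's accessibility, which is the sensitivity the gravity-model approach captures.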

  20. Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis, and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing alone (single-discipline analysis), the method, as implemented here, may not show a significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.

  1. Information flows in hierarchical networks and the capability of organizations to successfully respond to failures, crises, and disasters

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk; Ammoser, Hendrik; Kühnert, Christian

    2006-04-01

    In this paper we discuss the problem of information losses in organizations and how they depend on the organization network structure. Hierarchical networks are an optimal organization structure only when the failure rate of nodes or links is negligible. Otherwise, redundant information links are important to reduce the risk of information losses and the related costs. However, as redundant information links are expensive, the optimal organization structure is not a fully connected one. It rather depends on the failure rate. We suggest that sidelinks and temporary, adaptive shortcuts can improve the information flows considerably by generating small-world effects. This calls for modified organization structures to cope with today's challenges of businesses and administrations, in particular, to successfully respond to crises or disasters.
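    The value of redundant links can be illustrated with a simple reliability calculation (an assumed independent-failure model for illustration, not the paper's analysis): a report must survive every hop from a leaf to the top of the hierarchy:

```python
# Probability a report survives d hops, with and without redundancy.
p = 0.1          # per-link failure probability (assumed)
depth = 4        # hops from leaf to the top of the hierarchy (assumed)

# Pure tree: every one of the d links must work.
tree_only = (1 - p) ** depth

# One redundant sidelink per hop: a hop fails only if BOTH parallel
# links fail (probability p**2).
with_redundancy = (1 - p**2) ** depth

print(round(tree_only, 3), round(with_redundancy, 3))
```

    Even modest redundancy sharply improves end-to-end delivery, while the cost of extra links grows linearly, which is why the optimal structure sits between a pure tree and a fully connected network.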

  2. Texture mapping via optimal mass transport.

    PubMed

    Dominitz, Ayelet; Tannenbaum, Allen

    2010-01-01

    In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the "earth-mover's metric"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.
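In one dimension, discrete optimal mass transport with an |x - y| cost reduces to matching sorted samples, which gives a tiny illustration of the "earth-mover" idea; the general surface-mapping problem treated in the paper is far richer, and the pile/hole positions below are invented.

```python
def transport_cost_1d(xs, ys):
    """Minimal total cost of moving unit masses at xs onto sites ys
    (1-D optimal transport with |x - y| cost: match sorted samples)."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys)))

piles = [0.0, 1.0, 2.0]   # soil locations
holes = [1.0, 2.0, 4.0]   # destination sites
cost = transport_cost_1d(piles, holes)  # 1 + 1 + 2 = 4
```

Monotone (sorted) matching is optimal here because the cost is convex in the displacement; crossing two transport routes can never reduce the total.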

  3. Toward a reality-based understanding of Hadza men's work: a response to Hawkes et al. (2014).

    PubMed

    Wood, Brian M; Marlowe, Frank W

    2014-12-01

    Observations of Hadza men foraging out of camp and sharing food in camp show that men seeking to maximize the flow of calories to their families should pursue large game, and that hunting large game does not pose a collective action problem. These data also show that Hadza men frequently pursued honey, small game, and fruit, and that by doing so, provided a more regular flow of food to their households than would a putative big game specialist. These data support our earlier studies demonstrating that the goal of family provisioning is a robust predictor of Hadza men's behavior. As before, the show-off and costly signaling hypotheses advanced by Hawkes and colleagues fail as both descriptions of and explanations for Hadza men's work.

  4. AWAS: A dynamic work scheduling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y.; Hao, J.; Kocur, G.

    1994-12-31

    The Automated Work Administration System (AWAS) is an automated scheduling system developed at GTE. A typical work center has 1000 employees and processes 4000 jobs each day. Jobs are geographically distributed within the service area of the work center, require different skills, and have to be done within specified time windows. Each job can take anywhere from 12 minutes to several hours to complete. Each employee can have his/her individual schedule, skill, or working area. The jobs can enter and leave the system at any time. The employees dial up to the system to request their next job at the beginning of a day or after a job is done. The system is able to respond to the changes dynamically and produce close-to-optimum solutions in real time. We formulate the real-world problem as a minimum cost network flow problem. Both employees and jobs are formulated as nodes. Relationships between jobs and employees are formulated as arcs, and working hours contributed by employees and consumed by jobs are formulated as flow. The goal is to minimize missed commitments. We solve the problem with the successive shortest path algorithm. Combined with pre-processing and post-processing, the system produces reasonable outputs and the response time is very good.
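The minimum cost network flow formulation described above (employees and jobs as nodes, assignments as arcs) can be sketched with a basic successive-shortest-path solver; the tiny 2-employee, 2-job network, capacities, and arc costs below are invented for illustration and are not AWAS data.

```python
def min_cost_flow(n, edges, s, t, maxflow):
    """Successive shortest paths with Bellman-Ford (handles negative
    residual costs). edges: (u, v, capacity, cost). Returns (flow, cost)."""
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward arc
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual arc
    flow = total = 0
    while flow < maxflow:
        dist = [float('inf')] * n
        dist[s], prev = 0, [None] * n
        for _ in range(n - 1):
            updated = False
            for u in range(n):
                if dist[u] == float('inf'):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], prev[v], updated = dist[u] + cost, (u, i), True
            if not updated:
                break
        if dist[t] == float('inf'):
            break                                           # no augmenting path
        push, v = maxflow - flow, t                         # bottleneck capacity
        while v != s:
            u, i = prev[v]
            push, v = min(push, graph[u][i][1]), u
        v = t                                               # apply the push
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total += push * dist[t]
    return flow, total

# Toy instance: 0 = source, 1-2 = employees, 3-4 = jobs, 5 = sink.
# Employee-to-job arc costs stand in for travel time / mismatch penalties.
edges = [(0, 1, 1, 0), (0, 2, 1, 0),
         (1, 3, 1, 2), (1, 4, 1, 5),
         (2, 3, 1, 4), (2, 4, 1, 3),
         (3, 5, 1, 0), (4, 5, 1, 0)]
flow, cost = min_cost_flow(6, edges, 0, 5, 2)   # assigns e1->j1, e2->j2
```

Each augmentation dispatches one employee along the currently cheapest residual path, which is exactly the dial-in, one-job-at-a-time flavor of the deployed system.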

  5. Vortex generator design for aircraft inlet distortion as a numerical optimization problem

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Levy, Ralph

    1991-01-01

    Aerodynamic compatibility of aircraft/inlet/engine systems is a difficult design problem for aircraft that must operate in many different flight regimes. Takeoff, subsonic cruise, supersonic cruise, transonic maneuvering, and high altitude loiter each place different constraints on inlet design. Vortex generators, small wing-like sections mounted on the inside surfaces of the inlet duct, are used to control flow separation and engine face distortion. The design of vortex generator installations in an inlet is defined as a problem addressable by numerical optimization techniques. A performance parameter is suggested to account for both inlet distortion and total pressure loss at a series of design flight conditions. The resulting optimization problem is difficult since some of the design parameters take on integer values. If numerical procedures could be used to reduce multimillion dollar development test programs to a small set of verification tests, numerical optimization could have a significant impact on both cost and elapsed time to design new aircraft.

  6. Numerical formulation for the prediction of solid/liquid change of a binary alloy

    NASA Technical Reports Server (NTRS)

    Schneider, G. E.; Tiwari, S. N.

    1990-01-01

    A computational model is presented for the prediction of solid/liquid phase change energy transport including the influence of free convection fluid flow in the liquid phase region. The computational model considers the velocity components of all non-liquid phase change material control volumes to be zero but fully solves the coupled mass-momentum problem within the liquid region. The thermal energy model includes the entire domain and uses an enthalpy-like model and a recently developed method for handling the phase change interface nonlinearity. Convergence studies are performed and comparisons made with experimental data for two different problem specifications. The convergence studies indicate that grid independence was achieved and the comparison with experimental data indicates excellent quantitative prediction of the melt fraction evolution. Qualitative data is also provided in the form of velocity vector diagrams and isotherm plots for selected times in the evolution of both problems. The computational costs incurred are quite low by comparison with previous efforts on solving these problems.

  7. Large-eddy simulation, fuel rod vibration and grid-to-rod fretting in pressurized water reactors

    DOE PAGES

    Christon, Mark A.; Lu, Roger; Bakosi, Jozsef; ...

    2016-10-01

    Grid-to-rod fretting (GTRF) in pressurized water reactors is a flow-induced vibration phenomenon that results in wear and fretting of the cladding material on fuel rods. GTRF is responsible for over 70% of the fuel failures in pressurized water reactors in the United States. Predicting the GTRF wear and concomitant interval between failures is important because of the large costs associated with reactor shutdown and replacement of fuel rod assemblies. The GTRF-induced wear process involves turbulent flow, mechanical vibration, tribology, and time-varying irradiated material properties in complex fuel assembly geometries. This paper presents a new approach for predicting GTRF-induced fuel rod wear that uses high-resolution implicit large-eddy simulation to drive nonlinear transient dynamics computations. The GTRF fluid-structure problem is separated into the simulation of the turbulent flow field in the complex-geometry fuel-rod bundles using implicit large-eddy simulation, the calculation of statistics of the resulting fluctuating structural forces, and the nonlinear transient dynamics analysis of the fuel rod. Ultimately, the methods developed here can be used, in conjunction with operational management, to improve reactor core designs in which fuel rod failures are minimized or potentially eliminated. Furthermore, the robustness of both the structural forces computed from the turbulent flow simulations and the results from the transient dynamics analyses highlights the progress made towards achieving a predictive simulation capability for the GTRF problem.

  9. Development of a High-Order Navier-Stokes Solver Using Flux Reconstruction to Simulate Three-Dimensional Vortex Structures in a Curved Artery Model

    NASA Astrophysics Data System (ADS)

    Cox, Christopher

    Low-order numerical methods are widespread in academic solvers and ubiquitous in industrial solvers due to their robustness and usability. High-order methods are less robust and more complicated to implement; however, they exhibit low numerical dissipation and have the potential to improve the accuracy of flow simulations at a lower computational cost when compared to low-order methods. This motivates our development of a high-order compact method using Huynh's flux reconstruction scheme for solving unsteady incompressible flow on unstructured grids. We use Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. In 2D, an implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation. The high-order solver is extended to 3D and parallelized using MPI. Due to its simplicity, time marching for 3D problems is done explicitly. The feasibility of using the current implicit time stepping scheme for large scale three-dimensional problems with high-order polynomial basis still remains to be seen. We directly use the aforementioned numerical solver to simulate pulsatile flow of a Newtonian blood-analog fluid through a rigid 180-degree curved artery model. One of the most physiologically relevant forces within the cardiovascular system is the wall shear stress. 
This force is important because atherosclerotic regions are strongly correlated with curvature and branching in the human vasculature, where the shear stress is both oscillatory and multidirectional. Also, the combined effect of curvature and pulsatility in cardiovascular flows produces unsteady vortices. The aim of this research as it relates to cardiovascular fluid dynamics is to predict the spatial and temporal evolution of vortical structures generated by secondary flows, as well as to assess the correlation between multiple vortex pairs and wall shear stress. We use a physiologically relevant (pulsatile) flow rate and generate results using both fully developed and uniform entrance conditions, the latter being motivated by the fact that flow upstream of a curved artery may not have sufficient straight entrance length to become fully developed. Under the two pulsatile inflow conditions, we characterize the morphology and evolution of various vortex pairs and their subsequent effect on relevant haemodynamic wall shear stress metrics.

  10. Application of Harmony Search algorithm to the solution of groundwater management models

    NASA Astrophysics Data System (ADS)

    Tamer Ayvaz, M.

    2009-06-01

    This study proposes a groundwater resources management model in which the solution is performed through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW, is used as the simulation model. This model is then combined with a Harmony Search (HS) optimization algorithm, which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS-based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy the given demand (steady-state); and (iii) minimization of the pumping cost to satisfy the given demand for multiple management periods (transient). The sensitivity of the HS algorithm is evaluated by performing a sensitivity analysis which aims to determine the impact of the related solution parameters on convergence behavior. The results show that HS yields nearly the same or better solutions than the previous solution methods and may be used to solve management problems in groundwater modeling.
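A minimal sketch of the Harmony Search loop, assuming a generic continuous objective; the parameter values and the quadratic "pumping cost" surrogate below are illustrative stand-ins, not taken from the study.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.02,
                   iters=2000, seed=1):
    """Minimize f over box `bounds` with basic Harmony Search.
    hms: harmony memory size, hmcr: memory-considering rate,
    par: pitch-adjusting rate, bw: pitch-adjustment bandwidth fraction."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # draw pitch from memory
                val = rng.choice(memory)[d]
                if rng.random() < par:               # small pitch adjustment
                    val += rng.uniform(-bw, bw) * (hi - lo)
            else:                                    # random improvisation
                val = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, val)))
        score = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if score < scores[worst]:                    # replace worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy "pumping cost" surrogate: quadratic bowl with minimum at (1, -2).
obj = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
x, fx = harmony_search(obj, [(-5, 5), (-5, 5)])
```

In the paper's setting, `f` would wrap a MODFLOW simulation and each decision variable a pumping rate; here the loop structure is the point, not the objective.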

  11. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

    Effective design strategies for a class of systems which may be termed Experimental Space Systems (ESS) are needed. These systems, which include large space antennas and observatories, space platforms, earth satellites and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoffs, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  12. IUS/TUG orbital operations and mission support study. Volume 3: Space tug operations

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A study was conducted to develop space tug operational concepts and baseline operations plan, and to provide cost estimates for space tug operations. Background data and study results are presented along with a transition phase analysis (the transition from interim upper stage to tug operations). A summary is given of the tug operational and interface requirements with emphasis on the on-orbit checkout requirements, external interface operational requirements, safety requirements, and system operational interface requirements. Other topics discussed include reference missions baselined for the tug and details for the mission functional flows and timelines derived for the tug mission, tug subsystems, tug on-orbit operations prior to the tug first burn, spacecraft deployment and retrieval by the tug, operations centers, mission planning, potential problem areas, and cost data.

  13. Time-independent lattice Boltzmann method calculation of hydrodynamic interactions between two particles

    NASA Astrophysics Data System (ADS)

    Ding, E. J.

    2015-06-01

    The time-independent lattice Boltzmann algorithm (TILBA) is developed to calculate the hydrodynamic interactions between two particles in a Stokes flow. The TILBA is distinguished from the traditional lattice Boltzmann method in that a background matrix (BGM) is generated prior to the calculation. The BGM, once prepared, can be reused for calculations for different scenarios, and the computational cost for each such calculation will be significantly reduced. The advantage of the TILBA is that it is easy to code and can be applied to any particle shape without complicated implementation, and the computational cost is independent of the shape of the particle. The TILBA is validated and shown to be accurate by comparing calculation results obtained from the TILBA to analytical or numerical solutions for certain problems.
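The precompute-once, reuse-many-times pattern behind the background matrix can be illustrated, very loosely, by inverting a scenario-independent system matrix once and reusing it for many right-hand sides; the matrix, sizes, and cost figures below are arbitrary and stand in for the actual TILBA construction.

```python
import numpy as np

# Precompute-and-reuse sketch, loosely analogous to the BGM idea: the
# expensive factorization/inversion is done once, then each new scenario
# (right-hand side) costs only a matrix-vector product.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned system
inv_A = np.linalg.inv(A)      # expensive, done once ("background matrix")

for _ in range(5):            # many cheap per-scenario solves
    b = rng.standard_normal(n)
    x = inv_A @ b             # O(n^2) reuse instead of an O(n^3) solve
    assert np.allclose(A @ x, b)
```

In practice one would store an LU factorization rather than an explicit inverse, but the amortization argument is the same: the per-scenario cost becomes independent of how expensive the one-time preparation was.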

  14. Market fallacies in health economics.

    PubMed

    Grant, R J

    1991-12-07

    Serious methodological errors that plague studies in health economics are examined with the focus on misconceptions about the nature and functions of markets. The belief that market economics do not apply to the medical marketplace involves circular reasoning that treats man-made laws and regulations as though they were unchangeable laws of nature. Arguments against the market provision of health care are questioned and the 'information gap' problem is shown to be aggravated, if not caused, by regulations that prevent normal information flows in the market. Similarly, the contention that health care insurance pushes up costs is criticised on the basis of both theory and empirical evidence. The apparent failure to contain costs may be blamed on legal restrictions, government spending and pressure from medical associations. Confusion between normative theory and positive theory is also examined.

  15. Selecting quantitative water management measures at the river basin scale in a global change context

    NASA Astrophysics Data System (ADS)

    Girard, Corentin; Rinaudo, Jean-Daniel; Caballero, Yvan; Pulido-Velazquez, Manuel

    2013-04-01

    One of the main challenges in the implementation of the Water Framework Directive (WFD) in the European Union is the definition of programmes of measures to reach the good status of the European water bodies. In areas where water scarcity is an issue, one of these challenges is the selection of water conservation and capacity expansion measures to ensure minimum environmental in-stream flow requirements. At the same time, the WFD calls for the use of economic analysis to identify the most cost-effective combination of measures at the river basin scale to achieve its objective. In this respect, hydro-economic river basin models, by integrating economic, environmental and hydrological aspects at the river basin scale in a consistent framework, represent a promising approach. This article presents a least-cost river basin optimization model (LCRBOM) that selects the combination of quantitative water management measures to meet environmental flows for future scenarios of agricultural and urban demand, taking into account the impact of climate change. The model has been implemented in a case study on a Mediterranean basin in the south of France, the Orb River basin. The basin has been identified as in need of quantitative water management measures in order to reach the good status of its water bodies. The LCRBOM has been developed using GAMS, applying Mixed Integer Linear Programming. It is run to select the set of measures that minimizes the total annualized cost of the applied measures, while meeting the demands and minimum in-stream flow constraints. For the economic analysis, the programme of measures is composed of water conservation measures on agricultural and urban water demands. It compares them with measures mobilizing new water resources coming from groundwater, inter-basin transfers and improvements in reservoir operating rules.
The total annual cost of each measure is calculated for each demand unit considering operation, maintenance and investment costs. The results show that, by combining quantitative water management measures, the flow regime can be improved to better mimic the natural flow regime. However, the acceptability of the higher cost of the programme of measures has not yet been assessed. Other stages, such as stakeholder participation and negotiation processes, are also required to design an acceptable programme of measures. For this purpose, this type of model opens the path to investigating equity issues and the allocation of measures and costs among the stakeholders of the basin. Acknowledgments: The study has been partially supported by the Hérault General Council, the Languedoc-Roussillon Regional Council, the Rhône Mediterranean Corsica Water Agency and the BRGM, as well as the European Community 7th Framework Project GENESIS (n. 226536) on groundwater systems, and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (subprojects CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
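A much-simplified sketch of the least-cost selection idea: choose the cheapest subset of measures whose combined water savings cover an environmental-flow deficit. The measure names, costs, and savings below are invented, and brute-force enumeration stands in for the GAMS MILP with full hydrological constraints.

```python
from itertools import combinations

# Hypothetical measures: (annualized cost in EUR/yr, water saved in hm3/yr).
measures = {
    "drip_irrigation":   (120_000, 4.0),
    "urban_leak_repair": ( 80_000, 2.5),
    "new_well_field":    (200_000, 6.0),
    "reservoir_rules":   ( 30_000, 1.0),
}
deficit = 6.5        # hm3/yr needed to restore the minimum in-stream flow

best = None
names = list(measures)
for r in range(1, len(names) + 1):
    for subset in combinations(names, r):
        cost = sum(measures[m][0] for m in subset)
        saved = sum(measures[m][1] for m in subset)
        if saved >= deficit and (best is None or cost < best[0]):
            best = (cost, subset)

cost, chosen = best  # cheapest feasible programme of measures
```

The real model replaces the feasibility test with water-balance and demand constraints over multiple periods, which is what makes the MILP formulation necessary.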

  16. Allocation of Transaction Cost to Market Participants Using an Analytical Method in Deregulated Market

    NASA Astrophysics Data System (ADS)

    Jeyasankari, S.; Jeslin Drusila Nesamalar, J.; Charles Raja, S.; Venkatesh, P.

    2014-04-01

    Transmission cost allocation is one of the major challenges in transmission open access faced by the electric power sector. The purpose of this work is to provide an analytical method for allocating transmission transaction cost in a deregulated market. This research work provides a usage-based transaction cost allocation method based on the line-flow impact factor (LIF), which relates the power flow in each line with respect to the transacted power for a given transaction. This method provides the impact of line flows without running an iterative power flow solution and is well suited for real-time applications. The proposed method is compared with the Newton-Raphson (NR) method of cost allocation on a sample six-bus system and a practical Indian utility 69-bus system by considering multilateral transactions.
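The usage-based allocation principle can be sketched as a proportional split of each line's cost by the flow magnitude each transaction induces on it; the impact factors here are assumed given rather than computed from a power flow as in the paper, and all line names and numbers are illustrative.

```python
# Hypothetical line costs ($/h) and per-transaction flow impacts (MW).
line_cost = {"L1": 900.0, "L2": 600.0}
impact = {
    "T1": {"L1": 30.0, "L2": 10.0},
    "T2": {"L1": 60.0, "L2": 40.0},
}

# Split each line's cost among transactions in proportion to |impact|.
charges = {t: 0.0 for t in impact}
for line, cost in line_cost.items():
    total = sum(abs(impact[t][line]) for t in impact)
    for t in impact:
        charges[t] += cost * abs(impact[t][line]) / total
```

By construction the charges sum to the total transmission cost, so the scheme is revenue-neutral; the modeling work lies in computing defensible impact factors.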

  17. Optimized Lateral Flow Immunoassay Reader for the Detection of Infectious Diseases in Developing Countries.

    PubMed

    Pilavaki, Evdokia; Demosthenous, Andreas

    2017-11-20

    Detection and control of infectious diseases is a major problem, especially in developing countries. Lateral flow immunoassays can be used with great success for the detection of infectious diseases. However, for the quantification of their results an electronic reader is required. This paper presents an optimized handheld electronic reader for developing countries. It features a potentially low-cost, low-power, battery-operated device with no added optical accessories. The operation of this proof of concept device is based on measuring the reflected light from the lateral flow immunoassay and translating it into the concentration of the specific analyte of interest. Characterization of the surface of the lateral flow immunoassay has been performed in order to accurately model its response to the incident light. Ray trace simulations have been performed to optimize the system and achieve maximum sensitivity by placing all the components in optimum positions. A microcontroller enables all the signal processing to be performed on the device and a Bluetooth module allows transmission of the results wirelessly to a mobile phone app. Its performance has been validated using lateral flow immunoassays with influenza A nucleoprotein in the concentration range of 0.5 ng/mL to 200 ng/mL.

  18. Ultrasonic Mastering of Filter Flow and Antifouling of Renewable Resources.

    PubMed

    Radziuk, Darya; Möhwald, Helmuth

    2016-04-04

    Inadequate access to pure water and sanitation requires new cost-effective, ergonomic methods with less consumption of energy and chemicals, leaving the environment cleaner and sustainable. Among such methods, ultrasound is a unique means to control the physics and chemistry of complex fluids (wastewater) with excellent performance regarding mass transfer, cleaning, and disinfection. In membrane filtration processes, it overcomes diffusion limits and can accelerate the fluid flow towards the filter, helping to prevent fouling. Here, we outline the current state of knowledge and technological design, with a focus on physicochemical strategies of ultrasound for water cleaning. We highlight important parameters of ultrasound for the delivery of a fluid flow from a technical perspective employing principles of physics and chemistry. By introducing various ultrasonic methods, involving bubbles or cavitation in combination with external fields, we show advancements in flow acceleration and mass transportation to the filter. In most cases we emphasize the main role of streaming and the impact of cavitation with a perspective to prevent and remove fouling deposits during the flow. We also elaborate on the deficiencies of present technologies and on problems to be solved to achieve a widespread application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Aeroacoustic directivity via wave-packet analysis of mean or base flows

    NASA Astrophysics Data System (ADS)

    Edstrand, Adam; Schmid, Peter; Cattafesta, Louis

    2017-11-01

    Noise pollution is an ever-increasing problem in society, and knowledge of the directivity patterns of the sound radiation is required for prediction and control. Directivity is frequently determined through costly numerical simulations of the flow field combined with an acoustic analogy. We introduce a new computationally efficient method of finding directivity for a given mean or base flow field using wave-packet analysis (Trefethen, PRSA 2005). Wave-packet analysis approximates the eigenvalue spectrum with spectral accuracy by modeling the eigenfunctions as wave packets. With the wave packets determined, we then follow the method of Obrist (JFM, 2009), which uses Lighthill's acoustic analogy to determine the far-field sound radiation and directivity of wave-packet modes. We apply this method to a canonical jet flow (Gudmundsson and Colonius, JFM 2011) and determine the directivity of potentially unstable wave packets. Furthermore, we generalize the method to consider a three-dimensional flow field of a trailing vortex wake. In summary, we approximate the disturbances as wave packets and extract the directivity from the wave-packet approximation in a fraction of the time of standard aeroacoustic solvers. ONR Grant N00014-15-1-2403.

  20. High resolution flow field prediction for tail rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.

    1989-01-01

    The prediction of tail rotor noise due to the impingement of the main rotor wake poses a significant challenge to current analysis methods in rotorcraft aeroacoustics. This paper describes the development of a new treatment of the tail rotor aerodynamic environment that permits highly accurate resolution of the incident flow field with modest computational effort relative to alternative models. The new approach incorporates an advanced full-span free wake model of the main rotor in a scheme which reconstructs high-resolution flow solutions from preliminary, computationally inexpensive simulations with coarse resolution. The heart of the approach is a novel method for using local velocity correction terms to capture the steep velocity gradients characteristic of the vortex-dominated incident flow. Sample calculations have been undertaken to examine the principal types of interactions between the tail rotor and the main rotor wake and to examine the performance of the new method. The results of these sample problems confirm the success of this approach in capturing the high-resolution flows necessary for analysis of rotor-wake/rotor interactions with dramatically reduced computational cost. Computations of radiated sound are also carried out that explore the role of various portions of the main rotor wake in generating tail rotor noise.

  1. Visualizing Oil Process Dynamics in Porous Media with Micromodels

    NASA Astrophysics Data System (ADS)

    Biswal, S. L.

    2016-12-01

    The use of foam in enhanced oil recovery (EOR) applications is being considered for gas mobility control to ensure pore-trapped oil can be effectively displaced. In fractured reservoirs, gas tends to channel only through the highly permeable regions, bypassing the less permeable porous matrix, where most of the residual oil remains. Because of the unique transport problems presented by the large permeability contrast between fractures and adjacent porous media, we aim to understand the mechanism by which foam transitions from the fracture to the matrix and how initially trapped oil can be displaced and ultimately recovered. My lab has generated micromodels, which are combined with high-speed imaging to visualize foam transport in models with permeability contrasts, fractures, and multiple phases. The wettability of these surfaces can be altered to mimic the heterogeneous wettability found in reservoir systems. We have shown how foam quality can be modulated by adjusting the ratio of gas flow rate to aqueous flow rate in a flow-focusing system, and that this foam quality influences sweep efficiency in heterogeneous porous media systems. I will discuss how this understanding has allowed us to design better foam EOR processes. I will also highlight our recent efforts in asphaltene deposition. Asphaltene deposition is a common cause of significant flow assurance problems in wellbores and production equipment as well as near-wellbore regions in oil reservoirs. I will present our results for visualizing real-time asphaltene deposition from model and crude oils using microfluidic devices. In particular, we consider porous-media micromodel designs to represent various flow conditions typical of those found in oil flow processes. Also, four stages of deposition have been found and investigated at the pore scale, and compared qualitatively with macroscopic total collector efficiency as well as Hamaker expressions for the interaction of asphaltenes with surfaces.
By understanding the nature and the mechanisms of asphaltene deposits, we increase our ability to design cost-effective mitigation strategies, including the development of a new generation of asphaltene deposition inhibitors and improved methods for prevention and treatment of this problem.

  2. The production route selection algorithm in virtual manufacturing networks

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2017-08-01

    The increasing requirements and competition in the global market are challenges for companies' profitability in production and supply chain management. This situation became the basis for the construction of virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. In the paper, an algorithm is proposed that selects a production route from the set of admissible routes, meeting the technology and resource requirements under a minimum-cost criterion.

  3. Conference Proceedings of Machine Intelligence for Aerospace Electronic Systems Held in Lisbon, Portugal on 13-16 May 1991 (L’Intelligence Artificielle dans les Systemes Electroniques Aerospatiaux)

    DTIC Science & Technology

    1991-09-01

…a more closely matched description is that of finding the distribution of current flow through a nonuniform … finding the propagation of wavefronts through a medium with a nonuniform … the variable cost function is similar to the nonuniform resistivity of the plate … with the nature of the application problem … multiprocessors may not be effective in solving traditional algorithms for these … properties of the paths are noteworthy. First, every heading from the start …

  4. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1991-01-01

    The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
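The detect-then-repair idea behind such filtering can be illustrated with a toy one-dimensional pass: flag interior points that form local extrema relative to their neighbours and fix only those, so smooth regions pay nothing. The neighbour-averaging repair below is a cheap stand-in for the full ENO reconstruction, and a real detector would also distinguish genuine smooth extrema from spurious oscillations; this sketch shows only the selective-application structure.

```python
def filter_oscillations(u):
    """Flag interior local extrema and replace them by neighbour averages.

    Smooth monotone data passes through untouched; only flagged points
    invoke the (expensive, here faked) repair step.
    """
    v = list(u)
    for i in range(1, len(u) - 1):
        left = u[i] - u[i - 1]
        right = u[i + 1] - u[i]
        if left * right < 0:          # slope sign change: local extremum
            v[i] = 0.5 * (u[i - 1] + u[i + 1])   # stand-in for ENO repair
    return v
```

Applied to monotone data the filter is the identity; applied to a zigzag it reduces the total variation, which is the property the local test is meant to enforce.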

  5. High-order ENO schemes applied to two- and three-dimensional compressible flow

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley

    1991-01-01

    High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.

  6. The Researches on Cycle-Changeable Generation Settlement Method

    NASA Astrophysics Data System (ADS)

    XU, Jun; LONG, Suyan; LV, Jianhu

    2018-03-01

Through an analysis of the business characteristics and problems of price adjustment, a cycle-changeable generation settlement method is proposed that supports settlement over any time cycle. A complete set of solutions is put forward, including the creation of settlement tasks, the splitting of time-based energy, the generation of fixed-cycle electricity quantities, and net energy splitting. The overall design flow of cycle-changeable settlement is also given. This method supports multiple price adjustments during the month and is an effective means of reducing the cost of month-after price adjustment.
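The core of settlement under multiple in-month price adjustments is splitting metered energy across price periods so each sub-period is billed at its own price. A minimal sketch, with illustrative data shapes rather than the paper's actual task model:

```python
def settle(energy_by_day, price_periods):
    """Settle one month of generation under multiple price adjustments.

    energy_by_day:  list of daily energy values (index 0 = day 1)
    price_periods:  list of (first_day, last_day, price) tuples, 1-based
                    and inclusive, covering the month without overlap
    """
    total = 0.0
    for first, last, price in price_periods:
        # bill each sub-period's energy at that sub-period's price
        total += price * sum(energy_by_day[first - 1:last])
    return total
```

Each price adjustment simply adds a boundary between periods; the settlement itself is unchanged, which is what makes the cycle length arbitrary.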

  7. Internet traffic load balancing using dynamic hashing with flow volume

    NASA Astrophysics Data System (ADS)

    Jo, Ju-Yeon; Kim, Yoohwan; Chao, H. Jonathan; Merat, Francis L.

    2002-07-01

    Sending IP packets over multiple parallel links is in extensive use in today's Internet and its use is growing due to its scalability, reliability and cost-effectiveness. To maximize the efficiency of parallel links, load balancing is necessary among the links, but it may cause the problem of packet reordering. Since packet reordering impairs TCP performance, it is important to reduce the amount of reordering. Hashing offers a simple solution to keep the packet order by sending a flow over a unique link, but static hashing does not guarantee an even distribution of the traffic amount among the links, which could lead to packet loss under heavy load. Dynamic hashing offers some degree of load balancing but suffers from load fluctuations and excessive packet reordering. To overcome these shortcomings, we have enhanced the dynamic hashing algorithm to utilize the flow volume information in order to reassign only the appropriate flows. This new method, called dynamic hashing with flow volume (DHFV), eliminates unnecessary flow reassignments of small flows and achieves load balancing very quickly without load fluctuation by accurately predicting the amount of transferred load between the links. In this paper we provide the general framework of DHFV and address the challenges in implementing DHFV. We then introduce two algorithms of DHFV with different flow selection strategies and show their performances through simulation.
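The principle of moving only large flows can be sketched as follows: hash each flow to a link, measure per-link load from flow volumes, and when a link is overloaded migrate its largest flows to the least-loaded link. The threshold and data structures here are illustrative, not the paper's exact DHFV algorithm.

```python
def assign_links(flow_volumes, n_links, imbalance=1.5):
    """Hash flows to links, then move the largest flows off overloaded links.

    flow_volumes: dict flow_id -> volume.  Returns dict flow_id -> link.
    Moving only large flows keeps reassignments (and hence packet
    reordering) rare while still shifting the most load per move.
    """
    # static hashing baseline: flow order is preserved within a link
    # (hash() is deterministic for integer ids within a run)
    assignment = {f: hash(f) % n_links for f in flow_volumes}
    load = [0.0] * n_links
    for f, vol in flow_volumes.items():
        load[assignment[f]] += vol
    target = sum(load) / n_links
    # visit flows in decreasing volume: big flows shift the most load per move
    for f in sorted(flow_volumes, key=flow_volumes.get, reverse=True):
        src = assignment[f]
        if load[src] > imbalance * target:
            dst = load.index(min(load))
            assignment[f] = dst
            load[src] -= flow_volumes[f]
            load[dst] += flow_volumes[f]
    return assignment
```

Small flows never move, so they never reorder; only the few flows whose migration meaningfully rebalances the links are touched.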

  8. Wing kinematics and flexibility for optimal manoeuvring and escape

    NASA Astrophysics Data System (ADS)

    Wong, Jaime Gustav

    Understanding how animals control the dynamic stall vortices in their wake is critical to developing micro-aerial vehicles and autonomous underwater vehicles, not to mention wind turbines, delta wings, and rotor craft that undergo similar dynamic stall processes. Applying this knowledge to biomimetic engineering problems requires progress in three areas: (i) understanding the flow physics of natural swimmers and flyers; (ii) developing flow measurement techniques to resolve this physics; and (iii) deriving low-cost models suitable for studying the vast parameter space observed in nature. This body of work, which consists of five research chapters, focuses on the leading-edge vortex (LEV) that forms on profiles undergoing rapid manoeuvres, delta wings, and similar devices. Lagrangian particle tracking is used throughout this thesis to track the mass and circulation transport in the LEV on manoeuvring profiles. The growth and development of the LEV is studied in relation to: flapping and plunging profile kinematics; spanwise flow from profile sweep and spanwise profile bending; and varying the angle-of-attack gradient along the profile span. Finally, scaling relationships derived from the observations above are used to develop a low-cost model for LEV growth, that is validated on a flat-plate delta wing. Together these results contribute to each of the three topics identified above, as a step towards developing robust, agile biomimetic swimmers and flyers.

  9. Dynamic remedial action scheme using online transient stability analysis

    NASA Astrophysics Data System (ADS)

    Shrestha, Arun

    Economic pressure and environmental factors have forced the modern power systems to operate closer to their stability limits. However, maintaining transient stability is a fundamental requirement for the operation of interconnected power systems. In North America, power systems are planned and operated to withstand the loss of any single or multiple elements without violating North American Electric Reliability Corporation (NERC) system performance criteria. For a contingency resulting in the loss of multiple elements (Category C), emergency transient stability controls may be necessary to stabilize the power system. Emergency control is designed to sense abnormal conditions and subsequently take pre-determined remedial actions to prevent instability. Commonly known as either Remedial Action Schemes (RAS) or as Special/System Protection Schemes (SPS), these emergency control approaches have been extensively adopted by utilities. RAS are designed to address specific problems, e.g. to increase power transfer, to provide reactive support, to address generator instability, to limit thermal overloads, etc. Possible remedial actions include generator tripping, load shedding, capacitor and reactor switching, static VAR control, etc. Among various RAS types, generation shedding is the most effective and widely used emergency control means for maintaining system stability. In this dissertation, an optimal power flow (OPF)-based generation-shedding RAS is proposed. This scheme uses online transient stability calculation and generator cost function to determine appropriate remedial actions. For transient stability calculation, SIngle Machine Equivalent (SIME) technique is used, which reduces the multimachine power system model to a One-Machine Infinite Bus (OMIB) equivalent and identifies critical machines. 
Unlike conventional RAS, which are designed using offline simulations, online stability calculations make the proposed RAS dynamic and adaptive to any power system configuration and operating state. The generation-shedding cost is calculated using pre-RAS and post-RAS OPF costs. The criterion for selecting generators to trip is minimum cost rather than the minimum amount of generation to shed. For an unstable Category C contingency, the RAS control action that results in a stable system with minimum generation-shedding cost is selected among possible candidate solutions. The RAS control actions update whenever there is a change in operating condition, system configuration, or cost functions. The effectiveness of the proposed technique is demonstrated by simulations on the IEEE 9-bus, IEEE 39-bus, and IEEE 145-bus systems. This dissertation also proposes an improved, yet relatively simple, technique for solving the Transient Stability-Constrained Optimal Power Flow (TSC-OPF) problem. Using the SIME method, the sets of dynamic and transient stability constraints are reduced to a single stability constraint, decreasing the overall size of the optimization problem. The transient stability constraint is formulated using the critical machines' power at the initial time step, rather than the machine rotor angles. This avoids adding the machines' steady-state stator algebraic equations to the conventional OPF algorithm. A systematic approach to reaching an optimal solution is developed by exploring the quasi-linear behavior of critical machine power and stability margin. The proposed method shifts critical machines' active power based on generator costs using an OPF algorithm. Moreover, the transient stability limit is based on the stability margin, not on a heuristically set limit on the OMIB rotor angle. As a result, the proposed TSC-OPF solution is more economical and transparent.
The proposed technique enables the use of fast and robust commercial OPF tools and time-domain simulation software for solving large-scale TSC-OPF problems, which makes the proposed method also suitable for real-time application.
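The RAS selection logic described above reduces to a filter-and-minimize step: keep only candidate trip sets that stabilize the system, then take the cheapest. In this sketch the stability check and cost function are placeholders for the SIME transient stability calculation and the pre-/post-RAS OPF cost difference, which are well beyond a few lines.

```python
def choose_ras_action(candidates, is_stable, shed_cost):
    """Return the cheapest stabilising generation-shedding action.

    candidates: iterable of generator-trip sets
    is_stable:  callable standing in for the SIME stability check
    shed_cost:  callable standing in for the OPF-based shedding cost
    """
    stable = [c for c in candidates if is_stable(c)]
    return min(stable, key=shed_cost) if stable else None
```

The point of the cost-based criterion is visible here: a larger but cheaper trip set can beat a smaller, more expensive one.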

  10. Cooperative Solutions in Multi-Person Quadratic Decision Problems: Finite-Horizon and State-Feedback Cost-Cumulant Control Paradigm

    DTIC Science & Technology

    2007-01-01

    CONTRACT NUMBER Problems: Finite -Horizon and State-Feedback Cost-Cumulant Control Paradigm (PREPRINT) 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER...cooperative cost-cumulant control regime for the class of multi-person single-objective decision problems characterized by quadratic random costs and... finite -horizon integral quadratic cost associated with a linear stochastic system . Since this problem formation is parameterized by the number of cost

  11. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1991-01-01

Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description is given of the enrichment and coarsening procedures, and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.

  12. Vanadium Electrolyte Studies for the Vanadium Redox Battery-A Review.

    PubMed

    Skyllas-Kazacos, Maria; Cao, Liuyue; Kazacos, Michael; Kausar, Nadeem; Mousa, Asem

    2016-07-07

    The electrolyte is one of the most important components of the vanadium redox flow battery and its properties will affect cell performance and behavior in addition to the overall battery cost. Vanadium exists in several oxidation states with significantly different half-cell potentials that can produce practical cell voltages. It is thus possible to use the same element in both half-cells and thereby eliminate problems of cross-contamination inherent in all other flow battery chemistries. Electrolyte properties vary with supporting electrolyte composition, state-of-charge, and temperature and this will impact on the characteristics, behavior, and performance of the vanadium battery in practical applications. This Review provides a broad overview of the physical properties and characteristics of the vanadium battery electrolyte under different conditions, together with a description of some of the processing methods that have been developed to produce vanadium electrolytes for vanadium redox flow battery applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.

    1991-01-01

    Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.

  14. What does free cash flow tell us about hospital efficiency? A stochastic frontier analysis of cost inefficiency in California hospitals.

    PubMed

    Pratt, William R

    2010-01-01

    Hospitals are facing substantial financial and economic pressure as a result of health plan payment restructuring, unfunded mandates, and other factors. This article analyzes the relationship between free cash flow (FCF) and hospital efficiency given these financial challenges. Data from 270 California hospitals were used to estimate a stochastic frontier model of hospital cost efficiency that explicitly takes into account outpatient heterogeneity. The findings indicate that hospital FCF is significantly linked to firm efficiency/inefficiency. The results indicate that higher positive cash flows are related to lower cost inefficiency, but higher negative cash flows are related to higher cost inefficiency. Thus, cash flows not only impact the ability of hospitals to meet current liabilities, they are also related to the ability of the hospitals to use resources effectively.

  15. Betweenness centrality and its applications from modeling traffic flows to network community detection

    NASA Astrophysics Data System (ADS)

    Ren, Yihui

As real-world complex networks are heterogeneous structures, not all their components such as nodes, edges and subgraphs carry the same role or importance in the functions performed by the networks: some elements are more critical than others. Understanding the roles of the components of a network is crucial for understanding the behavior of the network as a whole. One of the most basic functions of networks is transport: transport of vehicles/people, information, materials, forces, etc., and these quantities are transported along edges between source and destination nodes. For this reason, network path-based importance measures, also called centralities, play a crucial role in the understanding of the transport functions of the network and the network's structural and dynamical behavior in general. In this thesis we study the notion of betweenness centrality, which measures the fraction of lowest-cost (or shortest) paths running through a network component, in particular through a node or an edge. High betweenness centrality nodes/edges are those that will be frequently used by the entities transported through the network and thus they play a key role in the overall transport properties of the network. In the first part of the thesis we present a first-principles based method for traffic prediction using a cost-based generalization of the radiation model (emission/absorption model) for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. 
We then focus on studying the extent of changes in traffic flows in the wake of localized damage or alteration to the network, and we demonstrate that the changes can propagate globally, affecting traffic several hundreds of miles away. Because of its principled nature, this method can inform many applications related to human-mobility-driven flows in spatial networks, ranging from transportation, through urban planning, to mitigation of the effects of catastrophic events. In the second part of the thesis we focus on network deconstruction and community detection problems, both intensely studied topics in network science, using a weighted betweenness centrality approach. We present an algorithm that solves both problems efficiently and accurately and demonstrate this on both benchmark networks and real-world data networks.
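Betweenness centrality as defined above — the number of shortest paths passing through a node — can be computed exactly with Brandes' algorithm. A minimal unweighted, undirected version (the thesis uses a weighted variant, which would replace the BFS with a Dijkstra sweep):

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an undirected, unweighted graph.

    adj: dict node -> list of neighbours.  Returns dict node -> centrality.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1     # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                      # BFS from source s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:            # w is one step further
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                  # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each unordered pair is counted twice in an undirected graph
    return {v: b / 2 for v, b in bc.items()}
```

On a star graph every leaf-to-leaf shortest path runs through the hub, so the hub's centrality equals the number of leaf pairs while every leaf scores zero.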

  16. Smart Screening System (S3) In Taconite Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daryoush Allaei; Ryan Wartman; David Tarnowski

    2006-03-01

The conventional screening machines used in processing plants have had undesirably high noise and vibration levels. They have also had unsatisfactorily low screening efficiency, high energy consumption, high maintenance cost, low productivity, and poor worker safety. These conventional vibrating machines have been used in almost every processing plant. Most current material separation technology uses heavy and inefficient electric motors with an unbalanced rotating mass to generate the shaking. In addition to being excessively noisy, inefficient, and high-maintenance, these vibrating machines are often the bottleneck in the entire process. Furthermore, these motors, along with the vibrating machines and supporting structure, shake other machines and structures in the vicinity. The latter increases maintenance costs while reducing worker health and safety. The conventional vibrating fine screens at taconite processing plants have had the same problems as those listed above. This has resulted in lower screening efficiency, higher energy and maintenance costs, lower productivity, and worker safety concerns. The focus of this work is on the design of a high performance screening machine suitable for taconite processing plants. SmartScreens™ technology uses miniaturized motors, based on smart materials, to generate the shaking. The underlying technologies are Energy Flow Control™ and Vibration Control by Confinement™. These concepts are used to direct energy flow and confine energy efficiently and effectively to the screen function. The SmartScreens™ technology addresses problems related to noise and vibration, screening efficiency, productivity, maintenance cost, and worker safety. Successful development of SmartScreens™ technology will bring drastic changes to the screening and physical separation industry. The final designs for key components of the SmartScreens™ have been developed. 
The key components include the smart motor and associated electronics, resonators, and supporting structural elements. It is shown that the smart motors have acceptable life and performance. Resonator (or motion amplifier) designs are selected based on the final system requirements and vibration characteristics. All the components for a fully functional prototype have been fabricated. The development program is on schedule. The last semi-annual report described the completion of the design refinement phase. This phase resulted in a Smart Screen design that meets performance targets both in the dry condition and with taconite slurry flow using PZT motors. This system was successfully demonstrated for the DOE and partner companies at the Coleraine Mineral Research Laboratory in Coleraine, Minnesota. Since then, the fabrication of the dry-application prototype (incorporating an electromagnetic drive mechanism and a new deblinding concept) has been completed and successfully tested at QRDC's lab.

  17. A Radiation Transfer Solver for Athena Using Short Characteristics

    NASA Astrophysics Data System (ADS)

    Davis, Shane W.; Stone, James M.; Jiang, Yan-Fei

    2012-03-01

We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well-known and well-tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.
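The heart of a short-characteristics sweep is the formal solution of the transfer equation over one grid segment, I_out = I_in·e^(−Δτ) + S·(1 − e^(−Δτ)) for a source function S taken constant on the segment. Marching this along a ray can be sketched as below; this is a deliberately simplified scalar, gray version without the upwind-intensity interpolation or accelerated Lambda iteration that the module itself uses.

```python
import math

def march_ray(intensity_in, source, dtau):
    """March the formal solution of the RT equation along one ray.

    intensity_in: intensity entering the ray
    source, dtau: per-segment source function values and optical depths
    """
    intensity = intensity_in
    for s, dt in zip(source, dtau):
        att = math.exp(-dt)
        # attenuate the incoming intensity and add the local emission
        intensity = intensity * att + s * (1.0 - att)
    return intensity
```

Two limits make the formula easy to check: at zero optical depth the intensity is unchanged, and after many optically thick segments it relaxes to the local source function.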

  18. Minimum specific cost control of technological processes realized in a living objects-containing microenvironment.

    PubMed

    Amelkin, Alexander A; Blagoveschenskaya, Margarita M; Lobanov, Yury V; Amelkin, Anatoly K

    2003-01-01

The purpose of the present work is to develop an approach to software design and the choice of hardware structures for subsystems that automatically control technological processes realized in a limited space containing living objects (a microenvironment). The subsystems for automatic control of the microenvironment (SACME) under development use the Devices for Air Prophylactic Treatment, Aeroionization, and Purification (DAPTAP) as execution units for increasing the safety and quality of agricultural raw material and foodstuffs, for reducing the losses of agricultural produce during storage and cultivation, and for intensifying the processes of activation of agricultural produce and industrial microorganisms. A set of interconnected SACMEs works within the framework of a general microenvironmental system (MES). In this research, a population of baker's yeast under industrial fed-batch cultivation in a bubbling bioreactor is chosen as the basic object of control. This project is an example of a minimum-cost automation approach. The microenvironment optimal control problem for baker's yeast cultivation is reduced from profit maximization to maximization of overall yield, because the material flow-oriented specific cost correlates closely with the reciprocal of the overall yield. Implementation of the project partially solves a local sustainability problem and supports a balance of microeconomical, microecological, and microsocial systems within a technological subsystem realized in a microenvironment, maintaining an optimal value of an economical criterion (e.g. 
minimum material flow-oriented specific cost) and ensuring: (a) economic growth (profit increase, raw material saving); (b) high security, safety, and quality of agricultural raw material during storage and of food produce during a technological process, and elimination of the contact of gaseous harmful substances with a subproduct during various technological stages; (c) improvement of labour conditions for industrial personnel from an ecological point of view (the positive effect of air aeroionization and purification on the human organism, promoting strengthened health and increased life duration, and the removal of pulverulent and gaseous chemical and biological impurities). An alternative aspect of forming a controlled living microenvironment is also considered.

  19. Best Practices for Reduction of Uncertainty in CFD Results

    NASA Technical Reports Server (NTRS)

    Mendenhall, Michael R.; Childs, Robert E.; Morrison, Joseph H.

    2003-01-01

    This paper describes a proposed best-practices system that will present expert knowledge in the use of CFD. The best-practices system will include specific guidelines to assist the user in problem definition, input preparation, grid generation, code selection, parameter specification, and results interpretation. The goal of the system is to assist all CFD users in obtaining high quality CFD solutions with reduced uncertainty and at lower cost for a wide range of flow problems. The best-practices system will be implemented as a software product which includes an expert system made up of knowledge databases of expert information with specific guidelines for individual codes and algorithms. The process of acquiring expert knowledge is discussed, and help from the CFD community is solicited. Benefits and challenges associated with this project are examined.

  20. Design studies of Laminar Flow Control (LFC) wing concepts using superplastic forming and diffusion bonding (SPF/DB)

    NASA Technical Reports Server (NTRS)

    Wilson, V. E.

    1980-01-01

Alternate concepts and design approaches were developed for suction panels, and techniques were defined for integrating these panel designs into a complete LFC 200R wing. The design concepts and approaches were analyzed to assure that they would meet the strength, stability, and internal volume requirements. Cost and weight comparisons of the concepts were also made. Problems of integrating the concepts into a complete aircraft system were addressed. Methods for making splices both chordwise and spanwise, fuel-tight joints, and internal duct installations were developed. Manufacturing problems such as slot alignment, tapered slot spacing, production methods, and repair techniques were addressed. An assessment of the program was used to develop recommendations for additional research in the development of SPF/DB for LFC structure.

  1. Software life cycle methodologies and environments

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest

    1991-01-01

Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology for environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, the Framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and for methodologies, such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common-sense reasoning and for solving expert-system problems when only approximate truths are known.

  2. Cost-driven materials selection criteria for redox flow battery electrolytes

    NASA Astrophysics Data System (ADS)

    Dmello, Rylan; Milshtein, Jarrod D.; Brushett, Fikile R.; Smith, Kyle C.

    2016-10-01

Redox flow batteries show promise for grid-scale energy storage applications but are presently too expensive for widespread adoption. Electrolyte material costs constitute a sizeable fraction of the redox flow battery price. As such, this work develops a techno-economic model for redox flow batteries that accounts for redox-active material, salt, and solvent contributions to the electrolyte cost. Benchmark values for electrolyte constituent costs guide identification of design constraints. Nonaqueous battery design is sensitive to all electrolyte component costs, cell voltage, and area-specific resistance. Design challenges for nonaqueous batteries include minimizing salt content and dropping redox-active species concentration requirements. Aqueous battery design is sensitive to only redox-active material cost and cell voltage, due to low area-specific resistance and supporting electrolyte costs. Increasing cell voltage and decreasing redox-active material cost present major materials selection challenges for aqueous batteries. This work minimizes cost-constraining variables by mapping the battery design space with the techno-economic model, through which we highlight pathways towards low price and moderate concentration. Furthermore, the techno-economic model calculates quantitative iterations of battery designs to achieve the Department of Energy battery price target of $100 per kWh and highlights cost cutting strategies to drive battery prices down further.
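The electrolyte contribution to battery price has a simple closed form: the energy stored per litre of electrolyte scales as nFcV, so the electrolyte's dollars per kWh is its dollars per litre divided by that energy density. The sketch below illustrates this scaling only; the factor of 2 for separate positive and negative tanks and the cost inputs are illustrative assumptions, and the paper's full model also accounts for area-specific resistance and system efficiency.

```python
F = 96485.0  # Faraday constant, C/mol

def electrolyte_cost_per_kwh(conc_mol_per_L, n_electrons, v_cell,
                             cost_active_per_mol, cost_support_per_L):
    """Illustrative electrolyte cost in $/kWh.

    conc_mol_per_L:      redox-active species concentration (mol/L)
    n_electrons:         electrons transferred per molecule
    v_cell:              cell voltage (V)
    cost_active_per_mol: active-material cost ($/mol)
    cost_support_per_L:  salt + solvent cost ($/L)
    """
    # energy per litre of total electrolyte, kWh/L; the factor 2 accounts
    # for the separate positive and negative tanks (assumption)
    kwh_per_L = n_electrons * F * conc_mol_per_L * v_cell / (2 * 3.6e6)
    cost_per_L = conc_mol_per_L * cost_active_per_mol + cost_support_per_L
    return cost_per_L / kwh_per_L
```

The formula makes the design levers in the abstract explicit: raising cell voltage or concentration grows the denominator, while cheaper active material or supporting electrolyte shrinks the numerator.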

  3. High performance, high density hydrocarbon fuels

    NASA Technical Reports Server (NTRS)

    Frankenfeld, J. W.; Hastings, T. W.; Lieberman, M.; Taylor, W. F.

    1978-01-01

    The fuels were selected from 77 original candidates on the basis of estimated merit index and cost effectiveness. The ten candidates consisted of 3 pure compounds, 4 chemical plant streams, and 3 refinery streams. Critical physical and chemical properties of the candidate fuels were measured, including heat of combustion, density and viscosity as a function of temperature, freezing point, vapor pressure, boiling point, and thermal stability. The best all-around candidate was found to be a chemical plant olefin stream rich in dicyclopentadiene. This material has a high merit index and is available at low cost. Possible problem areas were identified as low temperature flow properties and thermal stability. An economic analysis was carried out to determine the production costs of the top candidates. The chemical plant and refinery streams were all less than 44 cents/kg while the pure compounds were greater than 44 cents/kg. A literature survey was conducted on the state of the art of advanced hydrocarbon fuel technology as applied to high energy propellants. Several areas for additional research were identified.

  4. A global range military transport: The ostrich

    NASA Technical Reports Server (NTRS)

    Aguiar, John; Booker, Cecilia; Hoffman, Eric; Kramar, James; Manahan, Orlando; Serranzana, Ray; Taylor, Mike

    1993-01-01

    Studies have shown that there is an increasing need for a global range transport capable of carrying large numbers of troops and equipment to potential trouble spots throughout the world. The Ostrich is a solution to this problem. The Ostrich is capable of carrying 800,000 pounds over 6,500 n.m. and returning with 15 percent payload, without refueling. With a technology availability date in 2010 and an initial operating capability of 2015, the aircraft incorporates many advanced technologies including laminar flow control, composite primary structures, and a unique multibody design. By utilizing current technology, such as using the McDonnell Douglas C-17 fuselage for the outer fuselages on the Ostrich, the cost of the aircraft was reduced. The cost of the Ostrich per aircraft is $1.2 billion with a direct operating cost of $56,000 per flight hour. The Ostrich will provide a valuable service as a logistical transport capable of rapidly projecting a significant military force or humanitarian aid anywhere in the world.

  5. Shifting orders among suppliers considering risk, price and transportation cost

    NASA Astrophysics Data System (ADS)

    Revitasari, C.; Pujawan, I. N.

    2018-04-01

    Supplier order allocation is an important supply chain decision for an enterprise. It is related to the supplier's function as a provider of raw materials and other supporting materials used in the production process. Most work on order allocation has been based on costs and other supply chain performance measures, but very little of it takes risks into consideration. In this paper we address the problem of order allocation of a single commodity sourced from multiple suppliers, considering supply risks in addition to the attempt to minimize transportation costs. The supply chain risk was investigated and a procedure was proposed in the risk mitigation phase as a form of risk profile. The objective of including the risk profile in order allocation is to shift product flow from risky suppliers toward relatively less risky suppliers. The proposed procedure is applied to a sugar company. The result suggests that order allocations should be maximized to suppliers with relatively low risk and minimized to suppliers with relatively higher risk.
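    A greedy, risk-first allocation conveys the gist of routing more volume toward less risky suppliers. This is an illustrative sketch under assumed supplier data (the `name`, `risk`, and `capacity` fields are invented for the example), not the paper's actual procedure:

```python
def allocate_orders(demand, suppliers):
    """Fill demand from the lowest-risk supplier first, up to each
    supplier's capacity. Returns (allocation dict, unmet demand)."""
    allocation = {}
    remaining = demand
    for s in sorted(suppliers, key=lambda s: s['risk']):  # low risk first
        qty = min(remaining, s['capacity'])
        if qty > 0:
            allocation[s['name']] = qty
            remaining -= qty
    return allocation, remaining
```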

  6. Asymmetrical reverse vortex flow due to induced-charge electro-osmosis around carbon stacking structures.

    PubMed

    Sugioka, Hideyuki

    2011-05-01

    Broken symmetry of vortices due to induced-charge electro-osmosis (ICEO) around stacking structures is important for the generation of a large net flow in a microchannel. Following theoretical predictions in our previous study, we herein report experimental observations of asymmetrical reverse vortex flows around stacking structures of carbon posts with a large height (~110 μm) in water, prepared by the pyrolysis of a photoresist film in a reducing gas. Further, by the use of a coupled calculation method that considers boundary effects precisely, the experimental results, except for the problem of anomalous flow reversal, are successfully explained. That is, unlike previous predictions, the precise calculations here show that stacking structures accelerate a reverse flow rather than suppressing it for a microfluidic channel because of the deformation of electric fields near the stacking portions; these structures can also generate a large net flow theoretically in the direction opposite that of a previous prediction for a standard vortex flow. Furthermore, by solving the one-dimensional Poisson-Nernst-Planck (PNP) equations in the presence of ac electric fields, we find that the anomalous flow reversal occurs by the phase retardation between the induced diffuse charge and the tangential electric field. In addition, we successfully explain the nonlinear dependence of the flow velocity on the applied voltage by the PNP analysis. In the future, we expect to improve the pumping performance significantly by using stacking structures of conductive posts along with a low-cost process. © 2011 American Physical Society

  7. A Sarsa(λ)-Based Control Model for Real-Time Traffic Light Coordination

    PubMed Central

    Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei

    2014-01-01

    Traffic problems often occur because traffic demand from a growing number of vehicles exceeds road capacity. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction seeks larger traffic flow. In the process, junctions form a policy of coordination as well as constraints for adjacent junctions to maximize their own interests. A good traffic signal timing policy helps to solve the problem. However, as there are so many factors that can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience leaves them unable to adapt to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a reinforcement-learning-based traffic control approach can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model obtains the traffic cost of the vehicles, which considers delay time, the number of waiting vehicles, and the integrated saturation, from its experiences to learn and determine the optimal actions. The experiment results show an inspiring improvement in traffic control, indicating the proposed model is capable of facilitating real-time dynamic traffic control. PMID:24592183
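    The learning rule behind such a model is the standard tabular Sarsa(λ) update with eligibility traces. The following is a generic sketch of that algorithm; the environment interface (`env_step`), action set, and all hyperparameters are placeholders, not the authors' traffic simulation:

```python
import random
from collections import defaultdict

def sarsa_lambda(env_step, actions, episodes, alpha=0.1, gamma=0.95,
                 lam=0.9, eps=0.1, start_state=0):
    """Tabular Sarsa(lambda) with replacing eligibility traces.
    env_step(state, action) -> (next_state, reward, done)."""
    Q = defaultdict(float)

    def policy(s):
        # epsilon-greedy action selection
        if random.random() < eps:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        E = defaultdict(float)              # eligibility traces
        s, a = start_state, policy(start_state)
        done = False
        while not done:
            s2, r, done = env_step(s, a)
            a2 = policy(s2)
            delta = r + (0.0 if done else gamma * Q[(s2, a2)]) - Q[(s, a)]
            E[(s, a)] = 1.0                 # replacing trace
            for key in list(E):             # propagate TD error along traces
                Q[key] += alpha * delta * E[key]
                E[key] *= gamma * lam
            s, a = s2, a2
    return Q
```

    In the paper's setting the state would encode junction traffic conditions and the reward would be the negative traffic cost; here any episodic environment with the `env_step` signature can be plugged in.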

  8. System Guidelines for EMC Safety-Critical Circuits: Design, Selection, and Margin Demonstration

    NASA Technical Reports Server (NTRS)

    Lawton, R. M.

    1996-01-01

    Demonstration of required safety margins on critical electrical/electronic circuits in large complex systems has become an implementation and cost problem. These margins are the difference between the activation level of the circuit and the electrical noise on the circuit in the actual operating environment. This document discusses the origin of the requirement and gives a detailed process flow for the identification of the system electromagnetic compatibility (EMC) critical circuit list. The process flow discusses the roles of engineering disciplines such as systems engineering, safety, and EMC. Design and analysis guidelines are provided to assist the designer in assuring the system design has a high probability of meeting the margin requirements. Examples of approaches used on actual programs (Skylab and Space Shuttle Solid Rocket Booster) are provided to show how variations of the approach can be used successfully.

  9. Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids

    NASA Astrophysics Data System (ADS)

    Ma, Xinrong; Duan, Zhijian

    2018-04-01

    High-order discontinuous Galerkin finite element methods (DGFEM) are known to be effective for solving the Euler and Navier-Stokes equations on unstructured grids, but they demand substantial computational resources. An efficient parallel algorithm is presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme was used to improve the computational efficiency of DGFEM and accelerate the convergence of the solution of the unsteady compressible Euler equations. To keep each processor load-balanced, a domain decomposition method was employed. Numerical experiments were performed for inviscid transonic flow problems around the NACA0012 airfoil and the M6 wing. The results indicate that the parallel algorithm achieves significant speedup and efficiency, making it suitable for computing complex flows.
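    The three-stage, third-order TVD Runge-Kutta scheme mentioned above has a well-known Shu-Osher form. A minimal sketch of one time step, with `L(u)` standing in for whatever spatial DG operator is assumed:

```python
def ssp_rk3_step(u, dt, L):
    """One step of the three-stage, third-order TVD (SSP) Runge-Kutta
    scheme in Shu-Osher form; L(u) is the spatial discretization operator,
    i.e. du/dt = L(u)."""
    u1 = u + dt * L(u)                          # stage 1: forward Euler
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))    # stage 2
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))  # stage 3
```

    Because each stage is a convex combination of forward-Euler steps, the scheme preserves the TVD property of the underlying spatial discretization.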

  10. Numerical studies of the polymer melt flow in the extruder screw channel and the forming tool

    NASA Astrophysics Data System (ADS)

    Ershov, S. V.; Trufanova, N. M.

    2017-06-01

    To date, polymer compositions based on polyethylene or PVC are widely used as insulating materials. Processing these materials involves a number of problems in selecting rational extrusion regimes. To minimize time and cost when determining the technological regime, mathematical modeling techniques are used. The paper discusses heat and mass transfer processes in the extruder screw channel, output adapter, and cable head. In the course of the study, coefficients were determined for three rheological models based on measured viscosity vs. shear rate data. A comparative analysis was also carried out of the applicability of these viscometric laws for studying polymer melt flow during processing on extrusion equipment. As a result of the numerical study, the temperature, viscosity, and shear rate fields in the extruder screw channel and forming tool were obtained.

  11. Comparison of Series of Vugs and Non-vuggy Synthetic Porous Media on Formation Damage

    NASA Astrophysics Data System (ADS)

    Khan, H.; DiCarlo, D. A.; Prodanovic, M.

    2017-12-01

    Produced water reinjection (PWRI) is an established cost-effective oil field practice where produced water is injected without any cleanup, for water flooding or disposal. As a result, the cost of fresh injection fluid and/or of processing produced water is saved. A common problem with injection of unprocessed water is formation damage in the near-injection zone due to solids (fines) entrapment, causing a reduction in permeability and porosity of the reservoir. Most studies have used homogeneous porous media with unimodal grain sizes, while real-world porous media often have a wide range of pores, up to and including vugs in carbonaceous rocks. Here we fabricate a series of vugs in synthetic porous media by sintering glass beads with large dissolvable inclusions. The process is found to be repeatable, allowing a similar vug configuration to be tested for different flow conditions. Bi-modal glass bead particles (25 & 100 micron) are injected at two different flow rates and three different injection concentrations. Porosity, permeability and effluent concentration are determined using CT scanning, pressure measurements and particle counting (Coulter counter), respectively. Image analysis is performed on the CT images to determine the change in vug size for each flow condition. We find that for the same flow conditions, heterogeneous media with series of vugs have an equal or greater permeability loss compared to homogeneous porous media. A significant change in permeability is observed at the highest concentration and flow rate as more particles approach the filter quickly, resulting in a greater loss in permeability in the lower end of the core. Image analysis shows the highest loss in vug size occurs at the low flow rate and highest concentration. The lower vug is completely blocked for this flow case. For all flow cases lower values of porosity are observed after the core floods. At low flow rate and medium concentration, a drastic loss in porosity is observed in the lower part of the core, after the vuggy zone. This trough is also distinctly clear in the homogeneous core for the same flow conditions. This study focuses on understanding the effect of pore heterogeneity on formation damage. We conclude that more damage is done deeper in vuggy formations at high flow rates, resulting in a shorter injection cycle prior to cleanup.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Aladsair J.; Viswanathan, Vilayanur V.; Stephenson, David E.

    A robust performance-based cost model is developed for all-vanadium, iron-vanadium and iron-chromium redox flow batteries. System aspects such as shunt current losses, pumping losses and thermal management are accounted for. The objective function, set to minimize system cost, allows determination of stack design and operating parameters such as current density, flow rate and depth of discharge (DOD). Component costs obtained from vendors are used to calculate system costs for various time frames. Data from a 2 kW stack were used to estimate unit energy costs and compared with model estimates for the same size electrodes. The tool has been shared with the redox flow battery community to both validate their stack data and guide future direction.

  13. A quiet flow Ludwieg tube for study of transition in compressible boundary layers: Design and feasibility

    NASA Technical Reports Server (NTRS)

    Schneider, Steven P.

    1990-01-01

    Since Ludwieg tubes have been around for many years, and NASA has already established the feasibility of creating quiet-flow wind tunnels, the major question addressed was the cost of the proposed facility. Cost estimates were obtained for major system components, and new designs which allowed fabrication at lower cost were developed. A large fraction of the facility cost comes from the fabrication of the highly polished quiet-flow supersonic nozzle. Methods for the design of this nozzle were studied at length in an attempt to find an effective but less expensive design. Progress was sufficient to show that a quality facility can be fabricated at a reasonable cost.

  14. Theoretical and Numerical Studies of a Vortex-Interaction Problem

    NASA Astrophysics Data System (ADS)

    Hsu, To-Ming

    The problem of vortex-airfoil interaction has received considerable interest in the helicopter industry. This phenomenon has been shown to be a major source of noise, vibration, and structural fatigue in helicopter flight. Since unsteady flow is always associated with vortex shedding and movement of free vortices, the problem of vortex-airfoil interaction also serves as a basic building block in unsteady aerodynamics. A careful study of the vortex-airfoil interaction reveals the major effects of the vortices on the generation of unsteady aerodynamic forces, especially the lift. The present work establishes three different flow models to study the vortex-airfoil interaction problem: a theoretical model, an inviscid flow model, and a viscous flow model. In the first two models, a newly developed aerodynamic force theorem has been successfully applied to identify the contributions to unsteady forces from various vortical systems in the flow field. Through viscous flow analysis, different features of laminar interaction, turbulent attached interaction, and turbulent separated interaction are examined. Along with the study of the vortex-airfoil interaction problem, several new schemes are developed for inviscid and viscous flow solutions. New formulas are derived to determine the trailing edge flow conditions, such as flow velocity and direction, in unsteady inviscid flow. A new iteration scheme that is faster for higher Reynolds number is developed for solving the viscous flow problem.

  15. Model Order Reduction for the fast solution of 3D Stokes problems and its application in geophysical inversion

    NASA Astrophysics Data System (ADS)

    Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro

    2017-04-01

    The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals in modern lithospheric research. These data are essential to build Earth evolution models and to reproduce many geophysical observables (e.g. elevation, gravity anomalies, travel time data, heat flow, etc.) together with understanding the relationship between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, the solution of many direct models is required. The main objective of this work is to contribute to the existing inversion techniques in terms of improving the estimation of the elevation (topography) by including a dynamic component arising from sub-lithospheric mantle flow. In order to do so, we implement an efficient Reduced Order Method (ROM) built upon classic Finite Elements. ROM significantly reduces the computational cost of solving a family of problems, for example all the direct models that are required in the solution of the inverse problem. The strategy of the method consists in creating a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. In order to check the Reduced Basis approach, we implemented the method in a 3D domain reproducing a portion of the Earth down to 400 km depth. Within the domain the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) are created by varying viscosities and densities in a similar way as would happen in an inversion problem. The Reduced Basis method is shown to be an extremely efficient solver for the Stokes equation in this context.
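    The core of the reduced-basis idea, projecting a large linear system onto a small basis of precomputed solutions, can be sketched as follows. This is a generic Galerkin projection, not the authors' Stokes implementation:

```python
import numpy as np

def reduced_solve(A, b, V):
    """Galerkin reduced-order solve of A x = b: project onto the column
    space of the basis V (N x n, with n << N), solve the small n x n
    system, and lift the solution back to full size."""
    Ar = V.T @ A @ V            # n x n reduced operator
    br = V.T @ b                # n-vector reduced right-hand side
    return V @ np.linalg.solve(Ar, br)
```

    The payoff is that once the basis `V` is built from a handful of full solves, each new parameter realization costs only a small dense solve.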

  16. Bubble-free on-chip continuous-flow polymerase chain reaction: concept and application.

    PubMed

    Wu, Wenming; Kang, Kyung-Tae; Lee, Nae Yoon

    2011-06-07

    Bubble formation inside a microscale channel is a significant problem in general microfluidic experiments. The problem becomes especially crucial when performing a polymerase chain reaction (PCR) on a chip which is subject to repetitive temperature changes. In this paper, we propose a bubble-free sample injection scheme applicable for continuous-flow PCR inside a glass/PDMS hybrid microfluidic chip, and attempt to provide a theoretical basis concerning bubble formation and elimination. Highly viscous paraffin oil plugs are employed in both the anterior and posterior ends of a sample plug, completely encapsulating the sample and eliminating possible nucleation sites for bubbles. In this way, internal channel pressure is increased, and vaporization of the sample is prevented, suppressing bubble formation. Use of an oil plug in the posterior end of the sample plug aids in maintaining a stable flow of a sample at a constant rate inside a heated microchannel throughout the entire reaction, as compared to using an air plug. By adopting the proposed sample injection scheme, we demonstrate various practical applications. On-chip continuous-flow PCR is performed employing genomic DNA extracted from a clinical single hair root sample, and its D1S80 locus is successfully amplified. Also, chip reusability is assessed using a plasmid vector. A single chip is used up to 10 times repeatedly without being destroyed, maintaining almost equal intensities of the resulting amplicons after each run, ensuring the reliability and reproducibility of the proposed sample injection scheme. In addition, the use of a commercially-available and highly cost-effective hot plate as a potential candidate for the heating source is investigated.

  17. In situ determination of heat flow in unconsolidated sediments

    USGS Publications Warehouse

    Sass, J.H.; Kennelly, J.P.; Wendt, W.E.; Moses, T.H.; Ziagos, J.P.

    1979-01-01

    Subsurface thermal measurements are the most effective, least ambiguous tools for identifying and delineating possible geothermal resources. Measurements of thermal gradient in the upper few tens of meters generally are sufficient to outline the major anomalies, but it is always desirable to combine these gradients with reliable estimates of thermal conductivity to provide data on the energy flux and to constrain models for the heat sources responsible for the observed, near-surface thermal anomalies. The major problems associated with heat-flow measurements in the geothermal exploration mode are concerned with the economics of casing and/or grouting holes, the repeated site visits necessary to obtain equilibrium temperature values, the possible legal liability associated with the disturbance of underground aquifers, the surface hazards presented by pipes protruding from the ground, and the security problems associated with leaving cased holes open for periods of weeks to months. We have developed a technique which provides reliable 'real-time' determinations of temperature, thermal conductivity, and hence, of heat flow during the drilling operation in unconsolidated sediments. A combined temperature-gradient and thermal-conductivity experiment can be carried out, by driving a thin probe through the bit about 1.5 meters into the formation in the time that would otherwise be required for a coring trip. Two or three such experiments over the depth range of, say, 50 to 150 meters provide a high-quality heat-flow determination at costs comparable to those associated with a standard cased 'gradient hole' to comparable depths. The hole can be backfilled and abandoned upon cessation of drilling, thereby eliminating the need for casing, grouting, or repeated site visits.
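    The heat-flow determination itself reduces to Fourier's law: a thermal gradient fitted to temperature-depth pairs, multiplied by thermal conductivity. A minimal sketch with illustrative units (degrees C, meters, W/(m K)):

```python
def heat_flow(temperatures, depths, conductivity):
    """Conductive heat flux (W/m^2) from a least-squares thermal gradient
    over temperature-depth pairs, times thermal conductivity (Fourier's law)."""
    n = len(depths)
    mean_z = sum(depths) / n
    mean_t = sum(temperatures) / n
    # slope of the least-squares line T(z)
    grad = (sum((z - mean_z) * (t - mean_t)
                for z, t in zip(depths, temperatures))
            / sum((z - mean_z) ** 2 for z in depths))
    return conductivity * grad
```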

  18. Phase Field Model of Hydraulic Fracturing in Poroelastic Media: Fracture Propagation, Arrest, and Branching Under Fluid Injection and Extraction

    NASA Astrophysics Data System (ADS)

    Santillán, David; Juanes, Ruben; Cueto-Felgueroso, Luis

    2018-03-01

    The simulation of fluid-driven fracture propagation in a porous medium is a major computational challenge, with applications in geosciences and engineering. The two main families of modeling approaches are those models that represent fractures as explicit discontinuities and solve the moving boundary problem and those that represent fractures as thin damaged zones, solving a continuum problem throughout. The latter family includes the so-called phase field models. Continuum approaches to fracture face validation and verification challenges, in particular grid convergence, well posedness, and physical relevance in practical scenarios. Here we propose a new quasi-static phase field formulation. The approach fully couples fluid flow in the fracture with deformation and flow in the porous medium, discretizes flow in the fracture on a lower-dimension manifold, and employs the fluid flux between the fracture and the porous solid as coupling variable. We present a numerical assessment of the model by studying the propagation of a fracture in the quarter five-spot configuration. We study the interplay between injection flow rate and rock properties and elucidate fracture propagation patterns under the leak-off toughness dominated regime as a function of injection rate, initial fracture length, and poromechanical properties. For the considered injection scenario, we show that the final fracture length depends on the injection rate, and three distinct patterns are observed. We also rationalize the system response using dimensional analysis to collapse the model results. Finally, we propose some simplifications that alleviate the computational cost of the simulations without significant loss of accuracy.

  19. An Aeroelastic Analysis of a Thin Flexible Membrane

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Bartels, Robert E.; Kandil, Osama A.

    2007-01-01

    Studies have shown that significant vehicle mass and cost savings are possible with the use of ballutes for aero-capture. Through NASA's In-Space Propulsion program, a preliminary examination of ballute sensitivity to geometry and Reynolds number was conducted, and a single-pass coupling between an aero code and a finite element solver was used to assess the static aeroelastic effects. There remain, however, a variety of open questions regarding the dynamic aeroelastic stability of membrane structures for aero-capture, with the primary challenge being the prediction of the membrane flutter onset. The purpose of this paper is to describe and begin addressing these issues. The paper includes a review of the literature associated with the structural analysis of membranes and membrane flutter. Flow/structure analysis coupling and hypersonic flow solver options are also discussed. An approach is proposed for tackling this problem that starts with a relatively simple geometry and develops and evaluates analysis methods and procedures. This preliminary study considers a computationally manageable 2-dimensional problem. The membrane structural models used in the paper include a nonlinear finite-difference model for static and dynamic analysis and a NASTRAN finite element membrane model for nonlinear static and linear normal modes analysis. Both structural models are coupled with a structured compressible flow solver for static aeroelastic analysis. For dynamic aeroelastic analyses, the NASTRAN normal modes are used in the structured compressible flow solver and 3rd order piston theories were used with the finite difference membrane model to simulate flutter onset. Results from the various static and dynamic aeroelastic analyses are compared.

  20. Coal flow aids reduce coke plant operating costs and improve production rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bedard, R.A.; Bradacs, D.J.; Kluck, R.W.

    2005-06-01

    Chemical coal flow aids can provide many benefits to coke plants, including improved production rates, reduced maintenance and lower cleaning costs. This article discusses the mechanisms by which coal flow aids function and analyzes several successful case histories. 2 refs., 10 figs., 1 tab.

  1. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    NASA Astrophysics Data System (ADS)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in the case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative permeability) should account for the fact that they depend not only on the saturation but also on the actual characteristics of the fluid distribution.

  2. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems produce results for voltage and power which are then passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine whether any correction is needed on the local problems. The coordinator problem is also solved by an iterative method, much like the local problems. The iterative method for the coordination problem is also the Newton-Raphson method. Therefore, each iteration at the coordination level results in new values for the local problems. The local problems then have to be solved again, along with the coordinator problem, until convergence conditions are met.
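    The Newton-Raphson kernel used at both the local and coordinator levels can be sketched generically; the actual power-flow residual equations and Jacobian are omitted here and would be supplied as the `f` and `jac` callables:

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-8, max_iter=50):
    """Generic Newton-Raphson iteration for f(x) = 0.
    f(x) returns the residual vector, jac(x) the Jacobian matrix."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:        # converged
            return x
        x = x - np.linalg.solve(jac(x), fx)  # Newton update
    return x
```

    In a load-flow setting, `x` would hold bus voltage magnitudes and angles and `f` the active/reactive power mismatch equations for one partitioned subsystem.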

  3. A multiobjective optimization framework for multicontaminant industrial water network design.

    PubMed

    Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge

    2011-07-01

    The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonist objectives are considered: F(1), the freshwater flow-rate at the network entrance, F(2), the water flow-rate at inlet of regeneration units, and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented, followed by an innovative strategy based on the global equivalent cost (GEC) in freshwater, which turns out to be more efficient for choosing a good network from a practical point of view. Copyright © 2011 Elsevier Ltd. All rights reserved.
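    The classical TOPSIS step used for the MCDM stage can be sketched as follows; the decision matrix and weights below are hypothetical, and this is the textbook method rather than the paper's exact setup:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Classical TOPSIS ranking.
    scores  -- m alternatives x n criteria matrix
    weights -- n criterion weights
    benefit -- boolean array, True where higher is better
    Returns closeness to the ideal solution (higher = better)."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector-normalize columns
    w = norm * weights                               # weighted normalized matrix
    ideal = np.where(benefit, w.max(axis=0), w.min(axis=0))
    anti = np.where(benefit, w.min(axis=0), w.max(axis=0))
    d_pos = np.linalg.norm(w - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(w - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```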

  4. Design of Solar Heat Sheet for Air Heaters

    NASA Astrophysics Data System (ADS)

    Priya, S. Shanmuga; Premalatha, M.; Thirunavukkarasu, I.

    2011-12-01

    The technique of harnessing solar energy for drying offers significant potential to dry agricultural products such as food grains, fruits, vegetables and medicinal plants, thereby eliminating many of the problems experienced with open-sun drying and industrial drying, besides saving huge quantities of fossil fuels. A great deal of experimental work over the last few decades has already demonstrated that agricultural products can be satisfactorily dehydrated using solar energy. Various designs of small-scale solar dryers have been developed in the recent past, mainly for drying agricultural products. A major problem with solar dryers is their unreliability, as their operation largely depends on local weather conditions. While back-up heaters and hybrid dryers partly solve this issue, difficulty in controlling the drying air temperature and flow rate remains a problem and affects the quality of the dried product. This study aims at eliminating the fluctuations in the quality of hot air supplied by simple solar air heaters used for drying fruits, vegetables and other applications. It analyses the applicability of combining a glazed transpired solar collector (tank), thermal storage and an intake (suction) fan to achieve a steady supply of air at a controlled temperature and flow rate for drying fruits and vegetables. An efficient, low-cost and reliable air heating system for drying applications is developed.

  5. An integrated Navier-Stokes - full potential - free wake method for rotor flows

    NASA Astrophysics Data System (ADS)

    Berkman, Mert Enis

    1998-12-01

    The strong wake shed from rotary wings interacts with almost all components of the aircraft and alters the flow field, causing performance and noise problems. Understanding and modeling the behavior of this wake, and its effect on the aerodynamics and acoustics of helicopters, have remained challenges. This vortex wake and its effects should be accurately accounted for in any technique that aims to predict the rotor flow field and performance. In this study, an advanced and efficient computational technique for predicting three-dimensional unsteady viscous flows over isolated helicopter rotors in hover and in forward flight is developed. In this hybrid technique, the advantages of various existing methods have been combined to accurately and efficiently study rotor flows with a single numerical method. The flow field is viewed in three parts: (i) an inner zone surrounding each blade where the wake and viscous effects are numerically captured, (ii) an outer zone away from the blades where the wake is modeled, and (iii) a Lagrangian wake which induces wake effects in the outer zone. This technique was coded in a flow solver and compared with experimental data for hovering and advancing rotors, including a two-bladed rotor, the UH-60A rotor and a tapered-tip rotor. Detailed surface pressure, integrated thrust and torque, sectional thrust, and tip vortex position predictions compared favorably against experimental data. Results indicated that the hybrid solver provided accurate flow details and performance information at typically one-half to one-eighth the cost of complete Navier-Stokes methods.

  6. Precipitation patterns during channel flow

    NASA Astrophysics Data System (ADS)

    Jamtveit, B.; Hawkins, C.; Benning, L. G.; Meier, D.; Hammer, O.; Angheluta, L.

    2013-12-01

    Mineral precipitation during channelized fluid flow occurs in a wide variety of geological systems. It is also a common and costly phenomenon in many industrial processes that involve fluid flow in pipelines, where it is often referred to as scale formation; it is encountered in a large number of industries, including paper production, chemical manufacturing, cement operations, food processing, as well as non-renewable (i.e. oil and gas) and renewable (i.e. geothermal) energy production. We have studied the incipient stages of growth of amorphous silica on steel plates emplaced into the central areas of the ca. 1 m diameter pipelines used at the hydrothermal power plant at Hellisheidi, Iceland (with a capacity of ca. 300 MW electricity and 100 MW hot water). Silica precipitation takes place over a period of ca. 2 months at approximately 120°C and a flow rate around 1 m/s. The growth produces asymmetric, ca. 1 mm high dendritic structures 'leaning' towards the incoming fluid flow. A novel phase-field model combined with the lattice Boltzmann method is introduced to study how the growth morphologies vary under different hydrodynamic conditions, including non-laminar systems with turbulent mixing. The model accurately predicts the observed morphologies and is directly relevant for understanding the more general problem of precipitation influenced by turbulent mixing during flow in channels with rough walls, and even for porous flow. Reference: Hawkins, C., Angheluta, L., Hammer, Ø., and Jamtveit, B., Precipitation dendrites in channel flow. Europhysics Letters, 102, 54001.

  7. [Cost-benefit analysis of the implementation of automated drug-dispensing systems in Critical Care and Emergency Units].

    PubMed

    Poveda Andrés, J L; García Gómez, C; Hernández Sansalvador, M; Valladolid Walsh, A

    2003-01-01

    To determine the monetary impact when traditional drug floor stocks are replaced by Automated Drug Dispensing Systems (ADDS) in the Medical Intensive Care Unit, Surgical Intensive Care Unit and the Emergency Room. We analysed four different flows considered to be determinant when implementing ADDS in a hospital environment: capital investment, staff costs, inventory costs and costs related to drug use policies. Costs were estimated by calculation of the net present value. The analysis shows that the expenses derived from the initial investment are compensated by the three remaining flows, with costs related to drug use policies showing the most substantial savings. Five years after the initial investment, global cash flows were estimated at 300,525 euros. Replacement of traditional floor stocks by ADDS in the Medical Intensive Care Unit, Surgical Intensive Care Unit and the Emergency Room produces a positive benefit/cost ratio (1.95).
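
    The discounting behind a comparison like this can be illustrated with a minimal net-present-value sketch; the investment, yearly savings, and discount rate below are assumed placeholders, not the study's figures.

```python
# Minimal net-present-value sketch for the kind of cash-flow comparison the
# study describes: a year-0 ADDS investment against yearly savings in staff,
# inventory and drug-use-policy costs. All numbers are illustrative.
def npv(rate, cash_flows):
    """Discount a list of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

investment = -250_000          # year-0 capital outlay in euros (assumed)
yearly_savings = 110_000       # combined yearly savings in euros (assumed)
flows = [investment] + [yearly_savings] * 5   # 5-year horizon, as in the study

value = npv(0.05, flows)       # net present value at a 5% discount rate
ratio = npv(0.05, [0] + [yearly_savings] * 5) / -investment   # benefit/cost
print(round(value), round(ratio, 2))
```
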

  8. Study of the Cost and Flows of Capital in the Guaranteed Student Loan Program. Final Report.

    ERIC Educational Resources Information Center

    Touche Ross and Co., Washington, DC.

    The flow of capital to and through the Guaranteed Student Loan (GSL) Program and the cost of that capital to the federal government and the individual borrower were studied. A review of the research on student loan capital was conducted, and automated cost models were developed to test assumptions and project future costs. Attention was directed…

  9. A zonal method for modeling powered-lift aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Roberts, D. W.

    1989-01-01

    A zonal method for modeling powered-lift aircraft flow fields is based on the coupling of a three-dimensional Navier-Stokes code to a potential flow code. By minimizing the extent of the viscous Navier-Stokes zones, the zonal method can be a cost-effective flow analysis tool. The successful coupling of the zonal solutions provides the viscous/inviscid interactions that are necessary to achieve convergent and unique overall solutions. The feasibility of coupling the two vastly different codes is demonstrated. The interzone boundaries were overlapped to facilitate the passing of boundary condition information between the codes. Routines were developed to extract the normal velocity boundary conditions for the potential flow zone from the viscous zone solution. Similarly, the velocity vector direction, along with the total conditions, were obtained from the potential flow solution to provide boundary conditions for the Navier-Stokes solution. Studies were conducted to determine the influence of the overlap of the interzone boundaries and of the convergence of the zonal solutions on the convergence of the overall solution. The zonal method was applied to a jet impingement problem to model the suckdown effect that results from the entrainment of the inviscid zone flow by the viscous zone jet. The resultant potential flow solution created a lower pressure on the base of the vehicle, which produces the suckdown load. The feasibility of the zonal method was demonstrated. By enhancing the Navier-Stokes code for powered-lift flow fields and optimizing the convergence of the coupled analysis, a practical flow analysis tool will result.

  10. Costs of Producing Biomass from Riparian Buffer Strips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turhollow, A.

    2000-09-01

    Nutrient runoff from poultry litter applied to agricultural fields in the Delmarva Peninsula contributes to high nutrient loadings in Chesapeake Bay. One potential means of ameliorating this problem is the use of riparian buffer strips, which intercept overland flows of water, sediments, nutrients, and pollutants, as well as groundwater flows of nutrients and pollutants. Costs are estimated for three biomass systems grown on buffer strips: willow planted at a density of 15,300 trees/ha (6,200 trees/acre); poplar planted at a density of 1,345 trees/ha (545 trees/acre); and switchgrass. These costs are estimated for five different scenarios: (1) total economic costs, where everything is costed [cash costs, noncash costs (e.g., depreciation), land rent, labor]; (2) costs with Conservation Reserve Program (CRP) payments (which pay 50% of establishment costs and an annual land rent); (3) costs with enhanced CRP payments (which pay 95% of establishment costs and an annual payment of approximately 170% of land rent for trees and 150% of land rent for grasses); (4) costs when buffer strips are required but harvest of biomass is not required [costs borne by biomass are for yield-enhancing activities (e.g., fertilization), harvest, and transport]; and (5) costs when buffer strips are required and harvest of biomass is required to remove nutrients (costs borne by biomass are for yield-enhancing activities and transport). CRP regulations would have to change to allow harvest. Delivered costs of willow, poplar, and switchgrass [including transportation costs of $0.38/GJ ($0.40/million Btu) for switchgrass and $0.57/GJ ($0.60/million Btu) for willow and poplar] at 11.2 dry Mg/ha-year (5 dry tons/acre-year) for the five cost scenarios listed above are [$/GJ ($/million Btu)]: (1) 3.30-5.45 (3.45-5.75); (2) 2.30-3.80 (2.45-4.00); (3) 1.70-2.45 (1.80-2.60); (4) 1.85-3.80 (1.95-4.05); and (5) 0.80-1.50 (0.85-1.60).
At yields of 15.7 to 17.9 dry Mg/ha-year (7 to 8 dry tons/acre-year), lower willow and poplar establishment costs, transportation costs of $0.30 to $0.45/GJ ($0.30-$0.50/million Btu), and lower willow and poplar harvest costs, total economic costs for willow (19-year stand life), poplar, and switchgrass are $2.35 to $2.60/GJ ($2.50 to $2.75/million Btu). The potential production of biomass from riparian buffer strips in the Delmarva Peninsula ranges from 190,000 to 380,000 dry Mg (210,000 to 420,000 dry tons) per year.

  11. Cost/Benefit considerations for recent saltcedar control, Middle Pecos River, New Mexico.

    PubMed

    Barz, Dave; Watson, Richard P; Kanney, Joseph F; Roberts, Jesse D; Groeneveld, David P

    2009-02-01

    Major benefits were weighed against major costs associated with recent saltcedar control efforts along the Middle Pecos River, New Mexico. The area of study was restricted to both sides of the channel and excluded tributaries along the 370 km between Sumner and Brantley dams. Direct costs (helicopter spraying, dead tree removal, and revegetation) within the study area were estimated to be $2.2 million but possibly rising to $6.4 million with the adoption of an aggressive revegetation program. Indirect costs associated with increased potential for erosion and reservoir sedimentation would raise the costs due to increased evaporation from more extensive shallows in the Pecos River as it enters Brantley Reservoir. Actions such as dredging are unlikely given the conservative amount of sediment calculated (about 1% of the reservoir pool). The potential for water salvage was identified as the only tangible benefit likely to be realized under the current control strategy. Estimates of evapotranspiration (ET) using Landsat TM data allowed estimation of potential water salvage as the difference in ET before and after treatment, an amount totaling 7.41 million m(3) (6010 acre-ft) per year. Previous saltcedar control efforts of roughly the same magnitude found that salvaged ET recharged groundwater and no additional flows were realized within the river. Thus, the value of this recharge is probably less than the lowest value quoted for actual in-channel flow, and estimated to be <$63,000 per year. Though couched in terms of costs and benefits, this paper is focused on what can be considered the key trade-off under a complete eradication strategy: water salvage vs. erosion and sedimentation. It differs from previous efforts by focusing on evaluating the impacts of actual control efforts within a specific system. Total costs (direct plus potential indirect) far outweighed benefits in this simple comparison and are expected to be ongoing. 
Problems induced by saltcedar control may permanently reduce reservoir capacity and increase reservoir evaporation rates, which could further deplete supplies on this water-short system. These potential negative consequences highlight that such costs and benefits need to be considered before initiating extensive saltcedar control programs on river systems of the western United States.

  12. Fast Maximum Entropy Moment Closure Approach to Solving the Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2015-11-01

    We describe a method for a moment-based solution of the Boltzmann Equation (BE). This is applicable to an arbitrary set of velocity moments whose transport is governed by partial-differential equations (PDEs) derived from the BE. The equations are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy reconstruction of the velocity distribution function f(c, x, t), from the known moments, within a finite-box domain of single-particle velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature, while collision terms are calculated using any desired method. This allows integration of the moment PDEs in time. The high computational cost of the general method is greatly reduced by careful choice of the velocity moments, allowing the necessary integrals to be reduced from three- to one-dimensional in the case of strictly 1D flows. A method to extend this enhancement to fully 3D flows is discussed. Comparisons with relaxation and shock-wave problems using the DSMC method will be presented. Partially supported by NSF grant DMS-1418903.

  13. The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline

    NASA Astrophysics Data System (ADS)

    Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji

    2018-02-01

    The elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because it reflects practical industrial cases. The numerical model of an elastic pipeline introduces nonlinear complexity into the discretized equations, so the classical Newton-Raphson method cannot achieve fast convergence on this kind of problem. A new Newton-based method with the Powell-Wolfe condition is therefore presented to simulate isothermal elastic pipeline flow. Results obtained by the new method are given for the defined boundary conditions. The method is shown to converge in all cases and to reduce computational cost significantly.
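
    A globalized Newton iteration of the kind the abstract describes can be sketched as follows. This uses a simple backtracking line search with an Armijo-type sufficient-decrease test on the residual norm as a stand-in for the Powell-Wolfe conditions, applied to a toy nonlinear system rather than the discretized pipeline equations.

```python
import numpy as np

# Damped Newton method: the full Newton step on F(x) = 0 is shortened by a
# backtracking line search until the residual norm decreases sufficiently.
# This is a simplified sketch, not the paper's exact Powell-Wolfe scheme.
def newton_line_search(F, J, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(x), -f)      # full Newton direction
        t, phi0 = 1.0, np.linalg.norm(f) ** 2
        # backtrack until an Armijo-type sufficient-decrease test holds
        while np.linalg.norm(F(x + t * step)) ** 2 > (1 - 1e-4 * t) * phi0:
            t *= 0.5
            if t < 1e-8:
                break
        x = x + t * step
    return x

# Toy nonlinear system standing in for the discretized pipeline equations
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 3 - 2.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 3 * x[1] ** 2]])
sol = newton_line_search(F, J, np.array([2.0, 2.0]))
print(sol)   # converges to (1, 1)
```
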

  14. MOD-2 wind turbine development

    NASA Technical Reports Server (NTRS)

    Gordon, L. H.; Andrews, J. S.; Zimmerman, D. K.

    1983-01-01

    The development of the Mod-2 turbine, designed to achieve a cost of electricity for the 100th production unit that will be competitive with conventional electric power generation is discussed. The Mod-2 wind turbine system (WTS) background, project flow, and a chronology of events and problem areas leading to Mod-2 acceptance are addressed. The role of the participating utility during site preparation, turbine erection and testing, remote operation, and routine operation and maintenance activity is reviewed. The technical areas discussed pertain to system performance, loads, and controls. Research and technical development of multimegawatt turbines is summarized.

  15. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    NASA Astrophysics Data System (ADS)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors requiring larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand of well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact of transient conditions on WHPA delineation. For optimizing pumping schemes, we consider three objectives: (1) to minimize the risk of pumping water from outside a given WHPA, (2) to maximize the groundwater supply, and (3) to minimize the involved operating costs. We solve transient groundwater flow with an available transient groundwater model and Lagrangian particle tracking, and formulate the optimization as a dynamic programming problem. Two optimization approaches are explored: the first performs single-objective optimization under objective (1) only; the second performs multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.
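
    Selecting compromise rates from a Pareto front, as in the second approach, first requires the nondominated set. A minimal sketch, with illustrative candidate schemes rather than the study's data (objectives recast so all three are minimized: WHPA risk, negative supply, operating cost):

```python
import numpy as np

# Illustrative candidate pumping schemes; columns are (risk of pumping from
# outside the WHPA, negative groundwater supply, operating cost), all minimized.
candidates = np.array([
    [0.10, -95.0, 120.0],
    [0.05, -80.0, 100.0],
    [0.20, -99.0,  90.0],
    [0.15, -85.0, 130.0],   # dominated by the first row on all three objectives
])

def pareto_front(points):
    """Return indices of nondominated rows (all objectives minimized)."""
    idx = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

print(pareto_front(candidates))   # prints [0, 1, 2]
```
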

  16. Optimization design of submerged propeller in oxidation ditch by computational fluid dynamics and comparison with experiments.

    PubMed

    Zhang, Yuquan; Zheng, Yuan; Fernandez-Rodriguez, E; Yang, Chunxia; Zhu, Yantao; Liu, Huiwen; Jiang, Hao

    The operating condition of a submerged propeller has a significant impact on the flow field and energy consumption of the oxidation ditch. An experimentally validated numerical model, based on computational fluid dynamics (CFD), is presented to optimize the operating condition by considering two important factors: flow field and energy consumption. Performance demonstration and comparison of different operating conditions were carried out in a Carrousel oxidation ditch at the Yingtang wastewater treatment plant in Anhui Province, China. By adjusting the position, rotating speed, and number of submerged propellers, problems of sludge deposition and low velocity in the bend could be solved in the most cost-effective way. The simulated results agreed acceptably with the experimental data, and the following results were obtained. The CFD model characterized the flow pattern and energy consumption in the full-scale oxidation ditch, and the predicted flow field values were within -1.28 ± 7.14% of the measured values. Numerical simulation and field measurement showed that three propellers operating at 6.50 rad/s, with one located 5 m from the first curved wall, achieved both the lowest power density and the required flow pattern.

  17. Effect of Turbulence Modeling on an Excited Jet

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.; Hixon, Ray

    2010-01-01

    The flow dynamics in a high-speed jet are dominated by unsteady turbulent flow structures in the plume. Jet excitation seeks to control these flow structures through the natural instabilities present in the initial shear layer of the jet. Understanding and optimizing the excitation input, for jet noise reduction or plume mixing enhancement, requires many trials that may be done experimentally or, at a significant cost savings, computationally. Numerical simulations, which model various parts of the unsteady dynamics to reduce the computational expense of the simulation, must adequately capture the unsteady flow dynamics in the excited jet if the results are to be used. Four CFD methods are considered for use in an excited jet problem, including two turbulence models with an Unsteady Reynolds Averaged Navier-Stokes (URANS) solver, one Large Eddy Simulation (LES) solver, and one URANS/LES hybrid method. Each method is used to simulate a simplified excited jet, and the results are evaluated based on the flow data, computation time, and numerical stability. The knowledge gained about the effect of turbulence modeling and CFD methods from these basic simulations will guide and assist future three-dimensional (3-D) simulations that will be used to understand and optimize a realistic excited jet for a particular application.

  18. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  19. Advanced manufacturing rules check (MRC) for fully automated assessment of complex reticle designs

    NASA Astrophysics Data System (ADS)

    Gladhill, R.; Aguilar, D.; Buck, P. D.; Dawkins, D.; Nolke, S.; Riddick, J.; Straub, J. A.

    2005-11-01

    Advanced electronic design automation (EDA) tools, with their simulation, modeling, design rule checking, and optical proximity correction capabilities, have facilitated the improvement of first pass wafer yields. While the data produced by these tools may have been processed for optimal wafer manufacturing, it is possible for the same data to be far from ideal for photomask manufacturing, particularly at lithography and inspection stages, resulting in production delays and increased costs. The same EDA tools used to produce the data can be used to detect potential problems for photomask manufacturing in the data. A production implementation of automated photomask manufacturing rule checking (MRC) is presented and discussed for various photomask lithography and inspection lines. This paper will focus on identifying data which may cause production delays at the mask inspection stage. It will be shown how photomask MRC can be used to discover data related problems prior to inspection, separating jobs which are likely to have problems at inspection from those which are not. Photomask MRC can also be used to identify geometries requiring adjustment of inspection parameters for optimal inspection, and to assist with any special handling or change of routing requirements. With this foreknowledge, steps can be taken to avoid production delays that increase manufacturing costs. Finally, the data flow implemented for MRC can be used as a platform for other photomask data preparation tasks.

  20. Gel compression considerations for chromatography scale-up for protein C purification.

    PubMed

    He, W; Bruley, D F; Drohan, W N

    1998-01-01

    This work establishes theoretical and experimental relationships for the scale-up of Immobilized Metal Affinity Chromatography (IMAC) and Immuno Affinity Chromatography for the low-cost production of large quantities of Protein C. The external customer requirements for this project have been established for Protein C-deficient people, with the goal of providing prophylactic patient treatment. Deep vein thrombosis is the major symptom of protein C deficiency, creating the potential problem of embolism transport to vital organs such as the lungs and brain. Gel matrices for Protein C separation are being analyzed to determine the relationship between the material properties of the gel and the column collapse characteristics, and the fluid flow rate and pressure drop are being examined to see how they influence column stability. Gel packing analysis includes two considerations: bulk compression due to flow rate, and gel particle deformation due to fluid flow and pressure drop. Based on the assumption of creeping flow, Darcy's law is applied to characterize the flow through the gel particles. Biot's mathematical description of three-dimensional consolidation in porous media is used to develop a set of system equations, and finite difference methods are utilized to obtain their solutions. In addition, finite element packages such as ABAQUS will be studied to determine their applicability to this particular problem. Experimental studies are being performed to determine the flow rate and pressure drop correlation for the chromatographic columns with appropriate gels. Void fraction is being measured using pulse testing to allow Reynolds number calculations, and experimental yield stress is being measured for comparison with theoretical calculations. Total Quality Management (TQM) tools have been utilized to optimize this work.
For instance, the "Scatter Diagram" has been used to evaluate and select the appropriate gels and operating conditions via Taguchi techniques. Targeting customer requirements under the structure of TQM represents a novel approach to graduate student research in an academic institution, designed to simulate an industrial environment.
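
    Darcy's law, which the abstract invokes for creeping flow through the gel bed, can be illustrated with a short sketch: pressure drop scales linearly with superficial velocity. The permeability here comes from the Kozeny-Carman estimate, and all physical parameters (particle size, porosity, column height) are assumed for illustration, not taken from this work.

```python
# Darcy's-law sketch for creeping flow through a packed gel bed. All numbers
# are illustrative assumptions, not from this work.
MU = 1.0e-3        # water viscosity, Pa*s
D_P = 90e-6        # gel particle diameter, m (assumed)
EPS = 0.4          # bed void fraction (assumed)
L = 0.20           # packed bed height, m (assumed)

def permeability_kozeny_carman(d_p, eps):
    """Kozeny-Carman permeability of a packed bed of spheres, m^2."""
    return (d_p ** 2 * eps ** 3) / (180.0 * (1.0 - eps) ** 2)

def darcy_pressure_drop(u, k=permeability_kozeny_carman(D_P, EPS)):
    """Darcy's law: dP = mu * u * L / k for superficial velocity u (m/s)."""
    return MU * u * L / k

for u in (1e-5, 2e-5, 4e-5):   # typical chromatographic superficial velocities
    print(u, darcy_pressure_drop(u))   # pressure drop in Pa, linear in u
```

    Gel compression adds the nonlinearity the abstract is after: once the bed compacts, porosity drops and the Darcy relation above under-predicts the true pressure drop.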

  1. Development of bridge-scour instrumentation for inspection and maintenance personnel

    USGS Publications Warehouse

    Mueller, David S.; Landers, Mark N.; ,

    1993-01-01

    Inspecting bridges and monitoring scour during high flow can improve public transportation safety by providing early identification of scour and stream stability problems at bridges. Most bridge-inspection data are collected during low flow, when scour holes may have refilled. More than 25 percent of the States that responded to a questionnaire identified lack of adequate methodology and/or equipment as reasons for not collecting scour data during high-flow conditions. Therefore, the U.S. Geological Survey (USGS), in cooperation with the Federal Highway Administration, has begun to develop instrumentation for measuring scour that could be used by inspection and maintenance personnel during high-flow conditions. A variety of instruments and techniques for measuring scour were tested and evaluated in real-time bridge-scour data-collection studies by the USGS. In the National Scour study, fathometers were found to be superior to sounding weights and will be the primary bed-measuring instrument. The ability of low-cost fathometers and fish finders to locate the bed accurately is being evaluated. Simple and efficient methods for deploying the transducer during floods are also important for a successful measurement. The information and additional testing are being used to design new, portable scour-measuring systems.

  2. Proper Orthogonal Decomposition in Optimal Control of Fluids

    NASA Technical Reports Server (NTRS)

    Ravindran, S. S.

    1999-01-01

    In this article, we present a reduced order modeling approach suitable for active control of fluid dynamical systems, based on proper orthogonal decomposition (POD). The rationale behind reduced order modeling is that numerical simulation of the Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced order models that reduce the computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics by using the POD. The POD allows extraction of an optimal, often small, set of basis functions from a computational or experimental database through an eigenvalue analysis. The solution is then obtained as a linear combination of this optimal set of basis functions by means of Galerkin projection. This makes it attractive for optimal control and estimation of systems governed by partial differential equations. We use it here in active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced order model can be very efficient for the computation of optimization and control problems in unsteady flows. Finally, implementation issues and numerical experiments are presented for simulations and optimal control of fluid flow through channels.

  3. Optimal dynamic water allocation: Irrigation extractions and environmental tradeoffs in the Murray River, Australia

    NASA Astrophysics Data System (ADS)

    Grafton, R. Quentin; Chu, Hoang Long; Stewardson, Michael; Kompas, Tom

    2011-12-01

    A key challenge in managing semiarid basins, such as in the Murray-Darling in Australia, is to balance the trade-offs between the net benefits of allocating water for irrigated agriculture, and other uses, versus the costs of reduced surface flows for the environment. Typically, water planners do not have the tools to optimally and dynamically allocate water among competing uses. We address this problem by developing a general stochastic, dynamic programming model with four state variables (the drought status, the current weather, weather correlation, and current storage) and two controls (environmental release and irrigation allocation) to optimally allocate water between extractions and in situ uses. The model is calibrated to Australia's Murray River that generates: (1) a robust qualitative result that "pulse" or artificial flood events are an optimal way to deliver environmental flows over and above conveyance of base flows; (2) from 2001 to 2009 a water reallocation that would have given less to irrigated agriculture and more to environmental flows would have generated between half a billion and over 3 billion U.S. dollars in overall economic benefits; and (3) water markets increase optimal environmental releases by reducing the losses associated with reduced water diversions.
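
    A stochastic dynamic program of the general kind described above can be sketched with a single storage state and one control. The storage levels, inflow distribution, and payoff functions below are illustrative placeholders, not the calibrated Murray River model.

```python
import numpy as np

# Toy stochastic dynamic program: one storage state, one control (irrigation
# allocation), random inflows, and a reward trading off irrigation profit
# against environmental-flow benefit. All numbers are illustrative.
storages = np.arange(0, 11)               # discrete reservoir storage levels
actions = np.arange(0, 6)                 # water allocated to irrigation
inflows, probs = np.array([0, 2, 4]), np.array([0.3, 0.5, 0.2])
gamma = 0.95                              # discount factor

def reward(alloc, release):
    # diminishing returns to irrigation plus in-stream environmental benefit
    return 3.0 * np.sqrt(alloc) + 1.0 * np.sqrt(release)

V = np.zeros(len(storages))
for _ in range(600):                      # value iteration to a fixed point
    V_new = np.empty_like(V)
    for s in storages:
        best = -np.inf
        for a in actions[actions <= s]:
            e = min(s - a, 2)             # environmental release, capped (assumed)
            carry = s - a - e             # water carried over in storage
            ev = sum(p * V[min(carry + q, 10)] for q, p in zip(inflows, probs))
            best = max(best, reward(a, e) + gamma * ev)
        V_new[s] = best
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new
print(V.round(2))                         # value is nondecreasing in storage
```

    The paper's model adds three more state variables (drought status, weather, weather correlation) and a second control (environmental release), but the backward-induction structure is the same.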

  4. Optimization of an innovative hollow-fiber process to produce lactose-reduced skim milk.

    PubMed

    Neuhaus, Winfried; Novalin, Senad; Klimacek, Mario; Splechtna, Barbara; Petzelbauer, Inge; Szivak, Alexander; Kulbe, Klaus D

    2006-07-01

    Applications of lactose hydrolysis have been investigated for several decades. Lactose intolerance, improvement for technical processing of solutions containing lactose, and utilization of lactose in whey are the main topics for development of biotechnological processes. We report here the optimization of a hollow-fiber membrane reactor process for enzymatic lactose hydrolysis. Lactase was circulated abluminally during luminal flow of skim milk. The main problem, the growth of microorganisms in the enzyme solution, was minimized by sterile filtration, ultraviolet irradiation, and temperature adjustment. Based on previous experiments at 23 +/- 2 degrees C, further characterization was carried out at 8 +/- 2 degrees C, 15 +/- 2 degrees C (beta-galactosidase), and 58 +/- 2 degrees C (thermostable beta-glycosidase) varying enzyme activity and flow rates. For a cost-effective process, the parameters 15 +/- 2 degrees C, 240 U/mL of beta-galactosidase, an enzyme solution flow rate of 25 L/h, and a skim milk flow rate of about 9 L/h should be used in order to achieve a target productivity of 360 g/(L x h) and to run at the conditions giving the highest long-term process stability.

  5. Enhancing the Performance of Vanadium Redox Flow Batteries using Quinones

    NASA Astrophysics Data System (ADS)

    Mulcahy, James W., III

    The global dependence on fossil fuels continues to increase while the supply diminishes, causing the proliferation in demand for renewable energy sources. Intermittent renewable energy sources such as wind and solar, require electrochemical storage devices in order to transfer stored energy to the power grid at a constant output. Redox flow batteries (RFB) have been studied extensively due to improvements in scalability, cyclability and efficiency over conventional batteries. Vanadium redox flow batteries (VRFB) provide one of the most comprehensive solutions to energy storage in relation to other RFBs by alleviating the problem of cross-contamination. Quinones are a class of organic compounds that have been extensively used in chemistry, biochemistry and pharmacology due to their catalytic properties, fast proton-coupled electron transfer, good chemical stability and low cost. Anthraquinones are a subcategory of quinones and have been utilized in several battery systems. Anthraquinone-2, 6-disulfonic acid (AQDS) was added to a VRFB in order to study its effects on cyclical performance. This study utilized carbon paper electrodes and a Nafion 117 ion exchange membrane for the membrane-electrode assembly (MEA). The cycling performance was investigated over multiple charge and discharge cycles and the addition of AQDS was found to increase capacity efficiency by an average of 7.6% over the standard VRFB, while decreasing the overall cycle duration by approximately 18%. It is thus reported that the addition of AQDS to a VRFB electrolyte has the potential to increase the activity and capacity with minimal increases in costs.

  6. Exact and heuristic algorithms for Space Information Flow.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng

    2018-01-01

    Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example besides the Pentagram network where SIF is strictly better than Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve the sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
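A minimal sketch of the Delaunay-plus-linear-programming pipeline, assuming SciPy is available. Here candidate relays are taken as Delaunay triangle centroids (one simple choice; the paper's candidate-generation rule may differ), and, for brevity, only a single-commodity min-cost flow LP is solved rather than the full multicast network-coding LP:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.optimize import linprog

# Hypothetical terminal nodes in 2-D Euclidean space.
terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [2.0, 1.0]])

# Candidate relay nodes: centroids of the Delaunay triangles.
tri = Delaunay(terminals)
relays = terminals[tri.simplices].mean(axis=1)
nodes = np.vstack([terminals, relays])
n = len(nodes)

# Directed edges of the complete graph; cost = Euclidean length.
edges = [(i, j) for i in range(n) for j in range(n) if i != j]
cost = [float(np.linalg.norm(nodes[i] - nodes[j])) for i, j in edges]

# Simplified LP: route one unit of flow from node 0 to node 1 at minimum cost.
A_eq = np.zeros((n, len(edges)))
for k, (i, j) in enumerate(edges):
    A_eq[i, k] += 1.0     # flow leaving node i
    A_eq[j, k] -= 1.0     # flow entering node j
b_eq = np.zeros(n)
b_eq[0], b_eq[1] = 1.0, -1.0

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
```

By the triangle inequality the LP routes the unit directly along the edge between the two chosen terminals here; in the multicast setting, relays selected by the LP carry nonzero flow.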

  7. Solving a Production Scheduling Problem by Means of Two Biobjective Metaheuristic Procedures

    NASA Astrophysics Data System (ADS)

    Toncovich, Adrián; Oliveros Colay, María José; Moreno, José María; Corral, Jiménez; Corral, Rafael

    2009-11-01

    Production planning and scheduling problems emphasize the need for the availability of management tools that can help to assure proper service levels to customers, maintaining, at the same time, the production costs at acceptable levels and maximizing the utilization of the production facilities. In this case, a production scheduling problem that arises in the context of the activities of a company dedicated to the manufacturing of furniture for children and teenagers is addressed. Two bicriteria metaheuristic procedures are proposed to solve the sequencing problem on the piece of production equipment that constitutes the bottleneck of the production process of the company. The production scheduling problem can be characterized as a general flow shop with sequence-dependent setup times and additional inventory constraints. Two objectives are simultaneously taken into account when the quality of the candidate solutions is evaluated: the minimization of the completion time of all jobs, or makespan, and the minimization of the total flow time of all jobs. Both procedures are based on a local search strategy that follows the structure of the simulated annealing metaheuristic. Both metaheuristic approaches generate a set of solutions that provides an approximation to the optimal Pareto front. In order to evaluate the performance of the proposed techniques a series of experiments was conducted. After analyzing the results, it can be said that the solutions provided by both approaches are adequate both in terms of quality and of the computational effort involved in their generation. Nevertheless, a further refinement of the proposed procedures should be implemented with the aim of facilitating a quasi-automatic definition of the solution parameters.
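The archive-based simulated annealing idea can be sketched for a two-machine flow shop with makespan and total flow time as the two objectives (a minimal sketch: the toy processing times, the scalarized acceptance rule, and all parameters are assumptions, not the authors' procedure, and setup times and inventory constraints are omitted):

```python
import random

# Toy two-machine flow shop: proc[j][m] = time of job j on machine m (invented data).
proc = [[3, 2], [1, 4], [2, 2], [4, 1]]
M = len(proc[0])

def objectives(seq):
    """Standard completion-time recursion: returns (makespan, total flow time)."""
    comp = [0.0] * M
    total_flow = 0.0
    for j in seq:
        prev = 0.0
        for m in range(M):
            prev = max(prev, comp[m]) + proc[j][m]
            comp[m] = prev
        total_flow += comp[-1]
    return comp[-1], total_flow

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def sa_pareto(iters=2000, T0=5.0, alpha=0.999, seed=1):
    rng = random.Random(seed)
    seq = list(range(len(proc)))
    cur = objectives(seq)
    archive = {tuple(seq): cur}
    T = T0
    for _ in range(iters):
        cand = seq[:]
        i, j = rng.sample(range(len(seq)), 2)
        cand[i], cand[j] = cand[j], cand[i]            # swap-neighborhood move
        new = objectives(cand)
        delta = (new[0] + new[1]) - (cur[0] + cur[1])  # scalarized acceptance
        if delta <= 0 or rng.random() < 2.718281828 ** (-delta / T):
            seq, cur = cand, new
            archive[tuple(seq)] = cur
        T *= alpha
    # Non-dominated filtering of every visited solution: the approximate front.
    values = list(archive.values())
    front = {v for v in values if not any(dominates(w, v) for w in values)}
    return sorted(front)

pareto = sa_pareto()
```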

  8. Adaptive time stepping for fluid-structure interaction solvers

    DOE PAGES

    Mayr, M.; Wall, W. A.; Gee, M. W.

    2017-12-22

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.
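The general idea of an a posteriori error estimate driving the step size can be sketched on a scalar ODE, using the gap between a first- and a second-order step as the estimate (a generic step-size controller with assumed safety factor and clamps, not the FSI-specific estimator of the paper):

```python
import math

def f(t, y):
    return -2.0 * y                   # test ODE y' = -2y (illustrative choice)

def heun_step(t, y, dt):
    """One Heun (2nd-order) step plus its embedded Euler (1st-order) predictor."""
    k1 = f(t, y)
    euler = y + dt * k1
    heun = y + 0.5 * dt * (k1 + f(t + dt, euler))
    return heun, abs(heun - euler)    # solution and a posteriori error estimate

def integrate(y0, t_end, tol=1e-5, dt=0.1):
    t, y = 0.0, y0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_new, err = heun_step(t, y, dt)
        if err <= tol:
            t, y = t + dt, y_new      # accept the step; otherwise retry smaller
        # Standard controller: safety factor 0.9, clamped growth/shrink.
        dt *= max(0.2, min(5.0, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

y_final = integrate(1.0, 1.0)         # exact answer is exp(-2)
```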

  9. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.

  10. Adaptive time stepping for fluid-structure interaction solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayr, M.; Wall, W. A.; Gee, M. W.

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.

  11. Low-cost approaches to problem-driven hydrologic research: The case of Arkavathy watershed, India.

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Ballukraya, P. N.; Jeremiah, K.; R, A.

    2014-12-01

    Groundwater depletion is a major problem in the Arkavathy Basin and it is the probable cause of declining flows in the Arkavathy River. However, investigating groundwater trends and groundwater-surface water linkages is extremely challenging in a data-scarce environment where basins are largely ungauged, so there is very little historical data; often the data are missing, flawed, or biased. Moreover, hard-rock aquifer data are very difficult to interpret. In the absence of reliable data, establishing a trend, let alone the causal linkages, is a severe challenge. We used a combination of low-cost, participatory, satellite-based, and conventional data collection methods to maximize spatial and temporal coverage of data. For instance, long-term groundwater trends are biased because only a few dug wells with non-representative geological conditions still have water - the vast majority of the monitoring wells drilled in the 1970s and 1980s have dried up. Instead, we relied on "barefoot hydrology" techniques. By conducting a comprehensive well census, engaging farmers in participatory groundwater monitoring and using locally available commercial borewell scanning techniques we have been able to better establish groundwater trends and spatial patterns.

  12. A stabilized element-based finite volume method for poroelastic problems

    NASA Astrophysics Data System (ADS)

    Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo

    2018-07-01

    The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used in three dimensional unstructured grids composed of elements of different types. One of the difficulties for solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that considers also the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as three-dimensional realistic cases are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.

  13. An inventory-theory-based interval-parameter two-stage stochastic programming model for water resources management

    NASA Astrophysics Data System (ADS)

    Suo, M. Q.; Li, Y. P.; Huang, G. H.

    2011-09-01

    In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed by integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. The method can not only address system uncertainties with complex presentation but also capture the transfer batch (the quantity transferred at once) and the transfer period (the corresponding cycle time) in decision-making problems. A case of water allocation problems in water resources management planning is studied to demonstrate the applicability of this method. Under different flow levels, different transferring measures are generated by this method when the promised water cannot be met. Moreover, interval solutions associated with different transferring costs have also been provided. They can be used for generating decision alternatives and thus help water resources managers to identify desired policies. Compared with the ITSP method, the IB-ITSP model can provide a positive measure for solving water shortage problems and afford useful information for decision makers under uncertainty.
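The classic economic-order-quantity relation conveys how inventory theory links a transfer batch to its cycle time (a textbook formula used here purely for illustration, with invented numbers; the IB-ITSP model embeds such quantities in an interval-parameter two-stage stochastic program):

```python
import math

def eoq_batch(demand_rate, setup_cost, holding_cost):
    """Economic transfer batch Q* and the corresponding cycle time T* = Q*/demand."""
    q = math.sqrt(2.0 * demand_rate * setup_cost / holding_cost)
    return q, q / demand_rate

# Hypothetical numbers: demand 1000 units/period, 50 per transfer set-up, 0.4 holding.
q_star, t_star = eoq_batch(1000.0, 50.0, 0.4)   # Q* = 500 units, T* = 0.5 period
```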

  14. Master plan for REIS implementation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knobloch, P.C.

    1974-08-01

    Implementation requirements of the regional energy information system (REIS) and provision of a brief cost/benefit analysis of the proposed system are discussed. Divided into four sectors (problems, requirements, the present system, and the proposed implementation of REIS), the development of a demonstration data base, its implementation and that of the regional input-output model as a tool for decision makers are subjects of the report. The accounting subsystem and energy flow network model are two main components; the need to identify specific problems, to gather information on source, energy type, location, use, time with cross classification, the structure of REIS with parameter subsystem, and a description of the study area (N. E. Minnesota) are included. Five energy-producing and 76 energy-using sectors are specified, with energy classification and forms included. (GRA)

  15. The Future of Healthcare Informatics: It Is Not What You Think

    PubMed Central

    2012-01-01

    Electronic health records (EHRs) offer many valuable benefits for patient safety, but it becomes apparent that the effective application of healthcare informatics creates problems and unintended consequences. One problem that seems particularly challenging is integration. Painfully missing are low-cost, easy to implement, plug-and-play, nonintrusive integration solutions—healthcare's “killer app.” Why is this? We must stop confusing application integration with information integration. Our goal must be to communicate data (ie, integrate information), not to integrate application functionality via complex and expensive application program interfaces (APIs). Communicating data simply requires a loosely coupled flow of data, as occurs today via email. In contrast, integration is a chief information officer's nightmare. Integrating applications, when we just wanted a bit of information, is akin to killing a gnat with a brick. PMID:24278826

  16. Theoretical analysis of multiphase flow during oil-well drilling by a conservative model

    NASA Astrophysics Data System (ADS)

    Nicolas-Lopez, Ruben

    2005-11-01

    A better understanding of the flow mechanisms is necessary to decrease cost and improve drilling operations. Therefore, a multiphase conservative model was developed that includes three mass equations and a momentum equation. Also, the measured geothermal gradient is used in state equations to estimate the physical properties of the flowing phases. The mathematical model is solved by numerical conservative schemes. It is used to analyze the interaction among solid-liquid-gas phases. The circulating system operates as follows: the circulating fluid is pumped downward into the drilling pipe to the bottom of the open hole, then it flows through the drill bit, and at this point formation cuttings are incorporated into the circulating fluid and carried upward to the surface. The mixture returns to the surface through an annular flow area. The real operational conditions are fed to the conservative model and the results are matched to field measurements in several oil wells. Mainly, flow rates, drilling rate, and well and tool geometries are the data used to estimate the profiles of pressure, mixture density, equivalent circulating density, gas fraction, and solid carrying capacity. Even though the problem is very complex, the model properly describes the hydrodynamics of the drilling techniques applied in oil fields. *The authors thank Instituto Mexicano del Petroleo and Petroleos Mexicanos for supporting this research.

  17. Evaluation of Algorithms for a Miles-in-Trail Decision Support Tool

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Hattaway, David; Bambos, Nicholas

    2012-01-01

    Four machine learning algorithms were prototyped and evaluated for use in a proposed decision support tool that would assist air traffic managers as they set Miles-in-Trail restrictions. The tool would display probabilities that each possible Miles-in-Trail value should be used in a given situation. The algorithms were evaluated with an expected Miles-in-Trail cost that assumes traffic managers set restrictions based on the tool-suggested probabilities. Basic Support Vector Machine, random forest, and decision tree algorithms were evaluated, as was a softmax regression algorithm that was modified to explicitly reduce the expected Miles-in-Trail cost. The algorithms were evaluated with data from the summer of 2011 for air traffic flows bound to the Newark Liberty International Airport (EWR) over the ARD, PENNS, and SHAFF fixes. The algorithms were provided with 18 input features that describe the weather at EWR, the runway configuration at EWR, the scheduled traffic demand at EWR and the fixes, and other traffic management initiatives in place at EWR. Features describing other traffic management initiatives at EWR and the weather at EWR achieved relatively high information gain scores, indicating that they are the most useful for estimating Miles-in-Trail. In spite of a high variance or over-fitting problem, the decision tree algorithm achieved the lowest expected Miles-in-Trail costs when the algorithms were evaluated using 10-fold cross validation with the summer 2011 data for these air traffic flows.
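Information gain, used above to rank the input features, is simply the entropy reduction obtained by splitting on a feature; a minimal sketch on invented toy data (the feature and label values are illustrative, not from the summer 2011 dataset):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy of the labels minus the weighted entropy after splitting on feature."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Invented toy data: a weather feature versus a binary "restriction in place" label.
weather = ["storm", "storm", "clear", "clear"]
mit_set = [1, 1, 0, 0]
irrelevant = ["a", "b", "a", "b"]
# weather predicts the label perfectly (gain = 1 bit); irrelevant gives gain = 0.
```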

  18. A source-controlled data center network model.

    PubMed

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling, and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices restrict scalability and incur high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path and the data forwarding process can be finished solely relying on VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which overcomes the processing limits of a single controller and reduces the computational complexity. 2) Vector switches (VS) developed in the core network no longer apply TCAM for table storage and lookup, which significantly cuts down the cost and complexity of switches. Meanwhile, the problem of scalability can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows and there is no need to download flow tables to the VS. The amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.

  19. A source-controlled data center network model

    PubMed Central

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling, and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices restrict scalability and incur high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path and the data forwarding process can be finished solely relying on VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which overcomes the processing limits of a single controller and reduces the computational complexity. 2) Vector switches (VS) developed in the core network no longer apply TCAM for table storage and lookup, which significantly cuts down the cost and complexity of switches. Meanwhile, the problem of scalability can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows and there is no need to download flow tables to the VS. The amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925

  20. A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows

    DOE PAGES

    Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...

    2015-03-11

    High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall-time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.
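The hybrid switch between a high-order fit and a limited linear reconstruction can be illustrated in one dimension (a simplified sketch: the least-squares quadratic fit, the residual-based smoothness indicator, and the threshold `alpha` are assumptions, not the paper's multi-dimensional vertex-based CENO formulation):

```python
import numpy as np

def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def hybrid_slope(u, i, dx, alpha=0.5):
    """Quadratic least-squares slope on a 5-cell central stencil in smooth regions;
    minmod-limited linear slope when the stencil looks non-smooth."""
    x = dx * np.arange(-2, 3)
    stencil = u[i - 2:i + 3]
    coeffs = np.polyfit(x, stencil, 2)            # high-order (k-exact-style) fit
    fit = np.polyval(coeffs, x)
    rough = np.linalg.norm(stencil - fit) / (np.linalg.norm(stencil) + 1e-300)
    if rough < alpha * dx:                        # smoothness indicator passes
        return coeffs[1]                          # slope of the quadratic at x = 0
    return minmod((u[i + 1] - u[i]) / dx, (u[i] - u[i - 1]) / dx)

dx = 0.1
smooth = np.sin(dx * np.arange(10))               # smooth field
step = np.where(np.arange(10) < 5, 0.0, 1.0)      # discontinuous field
# hybrid_slope(smooth, 4, dx) is close to cos(0.4); hybrid_slope(step, 4, dx) is limited.
```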

  1. Second-order polynomial model to solve the least-cost lumber grade mix problem

    Treesearch

    Urs Buehlmann; Xiaoqiu Zuo; R. Edward Thomas

    2010-01-01

    Material costs when cutting solid wood parts from hardwood lumber for secondary wood products manufacturing account for 20 to 50 percent of final product cost. These costs can be minimized by proper selection of the lumber quality used. The lumber quality selection problem is referred to as the least-cost lumber grade mix problem in the industry. The objective of this...

  2. A model problem for estimation of moving-film time relaxation at sudden change of boundary conditions

    NASA Astrophysics Data System (ADS)

    Smirnovsky, Alexander A.; Eliseeva, Viktoria O.

    2018-05-01

    The study of film flow driven by a gas slug flow is of definite interest for heat and mass transfer during the motion of a coolant in the second circuit of a water-water nuclear reactor. Thermohydraulic codes are usually used for the analysis of such problems, in which the motion of the liquid film and the vapor is modeled on the basis of one-dimensional balance equations. Due to the greater inertia of the liquid film, the film flow parameters change with a relaxation lag relative to the gas flow. We consider a model problem of film flow under the influence of friction from a gas slug flow, neglecting effects such as wave formation, droplet breakup and deposition on the film surface, evaporation, and condensation. Such a problem is analogous to the well-known Couette and Stokes flow problems. An analytical solution has been obtained for laminar flow. Numerical RANS-based simulation of turbulent flow was performed using OpenFOAM. It is established that the relaxation process is almost self-similar. This fact opens the possibility of obtaining useful correlations for the relaxation time.
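The laminar analogue mentioned above, start-up of plane Couette flow, has a classical series solution whose slowest mode sets the relaxation time tau = h^2/(nu pi^2); a small sketch (geometry and parameters are illustrative, and this is the textbook Couette problem, not the paper's film model):

```python
import math

def couette_velocity(y, t, U=1.0, h=1.0, nu=1.0, terms=50):
    """Series solution for start-up of plane Couette flow: wall at y = h moves at
    speed U for t > 0, fluid initially at rest, no slip at y = 0."""
    u = U * y / h
    for n in range(1, terms + 1):
        u -= (2.0 * U * (-1.0) ** (n + 1) / (n * math.pi)) \
             * math.sin(n * math.pi * y / h) \
             * math.exp(-nu * (n * math.pi / h) ** 2 * t)
    return u

# The slowest-decaying mode (n = 1) sets the relaxation time scale h^2/(nu*pi^2).
tau = 1.0 ** 2 / (1.0 * math.pi ** 2)
```

At t = 0 the velocity is (nearly) zero everywhere, and after a few multiples of tau the profile has relaxed to the linear steady state U*y/h.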

  3. Modeling of information on the impact of mining exploitation on bridge objects in BIM

    NASA Astrophysics Data System (ADS)

    Bętkowski, Piotr

    2018-04-01

    The article discusses the advantages of BIM (Building Information Modeling) technology in the management of bridge infrastructure in mining areas. The article shows the problems with information flow for bridge objects located in mining areas and the advantages of proper information management, e.g. the possibility of automatic monitoring of structures, improvement of safety, optimization of maintenance activities, reduction of the cost of damage removal and preventive actions, improvement of the atmosphere for mining exploitation, and improvement of the relationship between the manager of the bridge and the mine. The traditional model of managing bridge objects in mining areas has many disadvantages, which are discussed in this article. These disadvantages include, among others: duplication of information about the object, lack of coordination of investments due to the lack of information flow between the bridge manager and the mine, and limited possibilities for assessing the effect of damage propagation on technical condition and structural resistance to mining influences.

  4. Videocystography with synchronous detrusor pressure and flow rate recordings.

    PubMed

    Arnold, E P; Brown, A D; Webster, J R

    1974-08-01

    The addition of pressure and flow rate recordings to conventional cystourethrography is relatively inexpensive in terms of cost and of radiologist's time, each investigation requiring approximately half an hour. The value of this investigation in males lies in assessing the severity and site of outlet obstruction, particularly when the prostate is not clinically enlarged. Its value in demonstrating detrusor instability in cases of obstruction and in patients with post-prostatectomy problems is discussed. It is essential to the adequate assessment of sphincter mechanisms in both males and females. The particular importance of this in the female lies in the poor results of routine surgery for incontinence where this is due to detrusor instability. Finally, the importance in neurological patients of a urodynamic evaluation of continence mechanisms and voiding dysfunction, both as a preliminary assessment and as a guide to the efficacy of treatment, is outlined. Various criticisms of the technique are reviewed and appropriate rebuttals provided.

  5. Charge auditing from a nursing perspective.

    PubMed

    Obert, S J

    1990-01-01

    Many third-party payors, which include commercial health and auto insurance companies and workmen's compensation carriers, are requesting access to their clients' itemized patient statements and medical records for verifying accuracy of charges and documentation of services rendered. If even a portion of the payment is withheld until the audit is completed, slowing of cash flow results. A slow cash flow may ultimately have profound effects on the quality, or even availability, of patient care. Hospitals are finding it cost effective to have someone within their institution audit patient accounts and medical records to identify problem areas that may result in denial of payment. Nurses are being recruited to perform these audits because of their knowledge of documentation standards and patient account charging procedures. With this background, the nurse auditor is also able to assess educational needs of the nursing staff and work collaboratively with other departments to correct deficiencies.

  6. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... cash flow data for 12 qualifying months, using the format specified in the “Guidelines for the...) The cash flow table you submit must include historical data for: (1) Lease production subject to...) Transportation and processing costs. (b) Do not include in your cash flow table the non-allowable costs listed at...

  7. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... cash flow data for 12 qualifying months, using the format specified in the “Guidelines for the...) The cash flow table you submit must include historical data for: (1) Lease production subject to...) Transportation and processing costs. (b) Do not include in your cash flow table the non-allowable costs listed at...

  8. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... justification report? This report presents cash flow data for 12 qualifying months, using the format specified... having some production. (a) The cash flow table you submit must include historical data for: (1) Lease... allowable costs; and (5) Transportation and processing costs. (b) Do not include in your cash flow table the...

  9. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... cash flow data for 12 qualifying months, using the format specified in the “Guidelines for the...) The cash flow table you submit must include historical data for: (1) Lease production subject to...) Transportation and processing costs. (b) Do not include in your cash flow table the non-allowable costs listed at...

  10. 30 CFR 203.84 - What is in a net revenue and relief justification report?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Transportation and processing costs. (b) Do not include in your cash flow table the non-allowable costs listed at... cash flow data for 12 qualifying months, using the format specified in the “Guidelines for the... cash flow table you submit must include historical data for: (1) Lease production subject to royalty...

  11. Numerical analysis of laminar and turbulent incompressible flows using the finite element Fluid Dynamics Analysis Package (FIDAP)

    NASA Technical Reports Server (NTRS)

    Sohn, Jeong L.

    1988-01-01

    The purpose of the study is the evaluation of the numerical accuracy of FIDAP (Fluid Dynamics Analysis Package). Accordingly, four test problems in laminar and turbulent incompressible flows are selected, and their computational results are compared with other numerical solutions and/or experimental data. These problems include: (1) 2-D laminar flow inside a wall-driven cavity; (2) 2-D laminar flow over a backward-facing step; (3) 2-D turbulent flow over a backward-facing step; and (4) 2-D turbulent flow through a turn-around duct.

  12. Investigating Cost Implications of Incorporating Level III At-Home Testing into a Polysomnography Based Sleep Medicine Program Using Administrative Data.

    PubMed

    Stewart, Samuel Alan; Penz, Erika; Fenton, Mark; Skomro, Robert

    2017-01-01

    Obstructive sleep apnea is a common problem, requiring expensive in-lab polysomnography for proper diagnosis. Home monitoring can provide an alternative to in-lab testing for a subset of OSA patients. The objective of this project was to investigate the effect of incorporating home testing into an OSA program at a large, tertiary sleep disorders centre. The Sleep Disorders Centre in Saskatoon, Canada, has been incorporating at-home testing into its diagnostic pathways since 2006. Administrative data from 2007 to 2013 were extracted (10,030 patients) and the flow of patients through the program was followed from diagnosis to treatment. Costs were estimated using 2014 pricing and were stratified by disease attributes, and sensitivity analysis was applied. The overall costs per patient were $627.40, with $419.20 for at-home testing and $746.20 for in-lab testing. The cost of home management would rise to $515 if all negative tests were required to be confirmed by an in-lab PSG. Our review suggests that at-home testing can be a cost-effective alternative to in-lab testing when applied to the correct population, specifically, those with a high pretest probability of obstructive sleep apnea and an absence of significant comorbidities.
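
    The reported per-patient figures can be cross-checked with simple arithmetic. A back-of-the-envelope sketch (our illustration, not the study's costing method), assuming the overall figure is a plain weighted average of the two pathway costs:

```python
# Back-of-the-envelope check of the reported per-patient costs
# (hypothetical simplification: the overall figure is assumed to be a
# plain weighted average of the two pathway costs).
cost_home = 419.20     # at-home testing, per patient
cost_lab = 746.20      # in-lab testing, per patient
cost_overall = 627.40  # blended per-patient cost

# Solve cost_overall = p * cost_home + (1 - p) * cost_lab for p,
# the implied share of patients on the at-home pathway.
p_home = (cost_lab - cost_overall) / (cost_lab - cost_home)
```

    The implied share of roughly 36% at-home patients is only indicative; the study stratified costs by disease attributes and applied sensitivity analysis.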

  13. An approach to quantify sources, seasonal change, and biogeochemical processes affecting metal loading in streams: Facilitating decisions for remediation of mine drainage

    USGS Publications Warehouse

    Kimball, B.A.; Runkel, R.L.; Walton-Day, K.

    2010-01-01

    Historical mining has left complex problems in catchments throughout the world. Land managers are faced with making cost-effective plans to remediate mine influences. Remediation plans are facilitated by spatial mass-loading profiles that indicate the locations of metal mass-loading, seasonal changes, and the extent of biogeochemical processes. Field-scale experiments during both low- and high-flow conditions and time-series data over diel cycles illustrate how this can be accomplished. A low-flow experiment provided spatially detailed loading profiles to indicate where loading occurred. For example, SO4(2-) was principally derived from sources upstream from the study reach, but three principal locations also were important for SO4(2-) loading within the reach. During high-flow conditions, Lagrangian sampling provided data to interpret seasonal changes and indicated locations where snowmelt runoff flushed metals to the stream. Comparison of metal concentrations between the low- and high-flow experiments indicated substantial increases in metal loading at high flow, but little change in metal concentrations, showing that toxicity at the most downstream sampling site was not substantially greater during snowmelt runoff. During high-flow conditions, a detailed temporal sampling at fixed sites indicated that Zn concentration more than doubled during the diel cycle. Monitoring programs must account for diel variation to provide meaningful results. Mass-loading studies during different flow conditions and detailed time-series over diel cycles provide useful scientific support for stream management decisions.

  14. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectations about the dynamics of the flow, as well as particular regions of interest such as harbors. Simulations of many different applications have only been made possible by AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.

  15. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
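
    To make the objective concrete, here is a minimal brute-force sketch of the QUBO problem itself (our illustration only; it does not implement the C_k lower bounds or the flow constructions from the paper, which are what make larger instances tractable):

```python
import itertools

# Brute-force minimization of a small QUBO instance: minimize
# sum_i c[i]*x[i] + sum_{i<j} Q[(i,j)]*x[i]*x[j] over x in {0,1}^n.
# Feasible only for small n (2^n assignments).
def qubo_min(Q, c, n):
    best_val, best_x = None, None
    for x in itertools.product((0, 1), repeat=n):
        val = sum(c[i] * x[i] for i in range(n))
        val += sum(Q.get((i, j), 0) * x[i] * x[j]
                   for i in range(n) for j in range(i + 1, n))
        if best_val is None or val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# f(x) = x0 + x1 + 2*x2 - 3*x0*x1 is minimized at x = (1, 1, 0) with value -1.
val, x = qubo_min({(0, 1): -3}, [1, 1, 2], 3)
```

    The lower bounds C2, C3, ... discussed in the paper certify values that such exhaustive search can only confirm for toy sizes.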

  16. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  17. Transport of phase space densities through tetrahedral meshes using discrete flow mapping

    NASA Astrophysics Data System (ADS)

    Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor

    2017-01-01

    Discrete flow mapping was recently introduced as an efficient ray-based method for determining wave energy distributions in complex built-up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite-dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low order local approximations on tetrahedral faces in the position coordinate and high order orthogonal polynomial expansions in momentum space.

  18. Mechanisms and methods for biofouling prevention via aeration

    NASA Astrophysics Data System (ADS)

    Dickenson, Natasha; Henoch, Charles; Belden, Jesse

    2013-11-01

    Biofouling is a major problem for the Navy and marine industries, with significant economic and ecological consequences. Specifically, biofouling on immersed hull surfaces generates increased drag and thus requires increased fuel consumption to maintain speed. Considerable effort has been spent developing techniques to prevent and control biofouling, but with limited success. Control methods that have proven to be effective are costly, time consuming, or negatively affect the environment. Recently, aeration via bubble injection along submerged surfaces has been shown to achieve long-lasting antifouling effects, and is the only effective non-toxic method available. An understanding of the basic mechanisms by which bubble-induced flow impedes biofouling is lacking, but is essential for the design of large-scale systems. We present results from an experimental investigation of several bubble-induced flow fields over an inclined plate with simultaneous measurements of the fluid velocity and bubble characteristics using Digital Particle Image Velocimetry and high-speed digital video. Trajectories of representative larval organisms are also resolved and linked with the flow field measurements to determine the mechanisms responsible for biofouling prevention.

  19. RETROFITTING CONTROL FACILITIES FOR WET-WEATHER FLOW CONTROL

    EPA Science Inventory

    Available technologies were evaluated to demonstrate the feasibility and cost effectiveness of retrofitting existing facilities to handle wet-weather flow (WWF). Cost/benefit relationships were compared to construction of new conventional control and treatment facilities. Desktop...

  20. RETROFITTING CONTROL FACILITIES FOR WET WEATHER FLOW TREATMENT

    EPA Science Inventory

    Available technologies were evaluated to demonstrate the technical feasibility and cost-effectiveness of retrofitting existing facilities to handle wet-weather flow. Cost/benefit relationships were also compared to construction of new conventional control and treatment facilitie...

  1. RETROFITTING CONTROL FACILITIES FOR WET-WEATHER FLOW TREATMENT

    EPA Science Inventory

    Available technologies were evaluated to demonstrate the technical feasibility and cost effectiveness of retrofitting existing facilities to handle wet-weather flow. Cost/benefit relationships were also compared to construction of new conventional control and treatment facilities...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Xiaoliang; Xia, Gordon; Kirby, Brent W.

    Aiming to explore low-cost redox flow battery systems, a novel iron-polysulfide (Fe/S) flow battery has been demonstrated in a laboratory cell. This system employs alkali metal ferri/ferrocyanide and alkali metal polysulfides as the redox electrolytes. When proper electrodes, such as pretreated graphite felts, are used, 78% energy efficiency and 99% coulombic efficiency are achieved. The remarkable advantages of this system over current state-of-the-art redox flow batteries include: 1) less corrosive and relatively environmentally benign redox solutions; 2) excellent energy and utilization efficiencies; 3) low cost for redox electrolytes and cell components. These attributes can lead to significantly reduced capital cost and make the Fe/S flow battery system a promising low-cost energy storage technology. The major drawbacks of the present cell design are relatively low power density and possible sulfur species crossover. Further work is underway to address these concerns.

  3. Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture

    PubMed Central

    Guzmán, Pablo; Díaz, Javier; Agís, Rodrigo; Ros, Eduardo

    2010-01-01

    The purpose of this study is to develop a motion sensor (delivering optical flow estimations) using a platform that includes the sensor itself, focal plane processing resources, and co-processing resources on a general purpose embedded processor. All this is implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection into the camera plane of the 3-D motion information presented at the world scenario. This motion representation is widespread, well known, and applied in the science community to solve a wide variety of problems. Most applications based on motion estimation require working in real time; hence, this restriction must be taken into account. In this paper, we show an efficient approach to estimate the motion velocity vectors with an architecture based on a focal plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on the simplification of the original optical flow model and its efficient implementation in a platform that combines an analog (focal-plane) and digital (NIOS II) processor. The system is fully functional and is organized in different stages, where the early processing (focal plane) stage mainly pre-processes the input image stream to reduce the computational cost in the post-processing (NIOS II) stage. We present the employed co-design techniques and analyze this novel architecture. We evaluate the system's performance and accuracy with respect to the different proposed approaches described in the literature. We also discuss the advantages of the proposed approach as well as the degree of efficiency which can be obtained from the focal plane processing capabilities of the system. The final outcome is a low cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used for very diverse application domains. PMID:22319283
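
    The brightness-constancy principle underlying gradient-based optical flow can be sketched in a few lines (an illustration on a synthetic 1-D signal; the paper's contribution is a hardware-simplified model, not this textbook estimator): for a signal translating at velocity v, I_t + v·I_x ≈ 0, so v is recoverable from the image derivatives.

```python
import numpy as np

# Brightness-constancy sketch of gradient-based optical flow on a
# synthetic 1-D signal (illustrative only). A periodic signal is
# translated by a known number of samples between two "frames", and
# the velocity is recovered from spatial and temporal derivatives.
x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
v_true = 3                        # shift, in samples per frame
frame0 = np.sin(x)
frame1 = np.roll(frame0, v_true)  # frame0 translated by v_true samples

Ix = np.gradient(frame0)          # spatial derivative (per sample)
It = frame1 - frame0              # temporal derivative (per frame)

# Least-squares velocity estimate over the whole signal:
v_est = -np.sum(It * Ix) / np.sum(Ix * Ix)
```

    In 2-D the same constraint is solved over local neighbourhoods (e.g. in Lucas-Kanade-style methods); the paper maps a simplified variant of this computation onto the focal-plane/NIOS II co-design.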

  4. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.
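
    The per-scenario feasibility test at the heart of this design problem reduces to a standard max-flow computation: add a super-source feeding every supply node and a super-sink fed by every demand node, then check whether the max flow saturates total supply. A self-contained sketch (our own illustration with a pure-Python Edmonds-Karp; the paper does not describe its implementation):

```python
from collections import deque

# Pure-Python Edmonds-Karp max-flow (BFS augmenting paths).
# cap: dict (u, v) -> arc capacity. Returns the max-flow value s -> t.
def max_flow(cap, s, t):
    res, adj = {}, {}
    for (u, v), c in cap.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)       # reverse residual arc
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    flow = 0
    while True:
        parent = {s: None}              # BFS for shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                 # recover the path and augment
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        flow += aug

# A capacity assignment is feasible for a scenario if the max flow from
# a super-source "S" to a super-sink "T" saturates the total supply.
# balance: node -> supply (>0) or demand (<0); nodes "S"/"T" are reserved.
def scenario_feasible(cap, balance):
    cap, total = dict(cap), 0
    for node, b in balance.items():
        if b > 0:
            cap[("S", node)] = b
            total += b
        elif b < 0:
            cap[(node, "T")] = -b
    return max_flow(cap, "S", "T") == total

caps = {("a", "b"): 2, ("a", "c"): 1, ("b", "c"): 1}
ok = scenario_feasible(caps, {"a": 3, "c": -3})   # only 2 of 3 units reach c
ok2 = scenario_feasible({("a", "b"): 2, ("a", "c"): 1, ("b", "c"): 2},
                        {"a": 3, "c": -3})        # all 3 units reach c
```

    The design problem then searches over capacity vectors so that this check passes for every scenario (or, in the chance-constrained variant, for at least α% of them).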

  5. Smart Screening System (S3) In Taconite Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daryoush Allaei; Angus Morison; David Tarnowski

    2005-09-01

    The conventional screening machines used in processing plants have had undesirably high noise and vibration levels. They have also had unsatisfactorily low screening efficiency, high energy consumption, high maintenance cost, low productivity, and poor worker safety. These conventional vibrating machines have been used in almost every processing plant. Most of the current material separation technology uses heavy and inefficient electric motors with an unbalanced rotating mass to generate the shaking. In addition to being excessively noisy, inefficient, and high-maintenance, these vibrating machines are often the bottleneck in the entire process. Furthermore, these motors, along with the vibrating machines and supporting structure, shake other machines and structures in the vicinity. The latter increases maintenance costs while reducing worker health and safety. The conventional vibrating fine screens at taconite processing plants have had the same problems as those listed above. This has resulted in lower screening efficiency, higher energy and maintenance cost, lower productivity, and worker safety concerns. The focus of this work is on the design of a high performance screening machine suitable for taconite processing plants. SmartScreens{trademark} technology uses miniaturized motors, based on smart materials, to generate the shaking. The underlying technologies are Energy Flow Control{trademark} and Vibration Control by Confinement{trademark}. These concepts are used to direct energy flow and confine energy efficiently and effectively to the screen function. The SmartScreens{trademark} technology addresses problems related to noise and vibration, screening efficiency, productivity, maintenance cost, and worker safety. Successful development of SmartScreens{trademark} technology will bring drastic changes to the screening and physical separation industry. The final designs for key components of the SmartScreens{trademark} have been developed. The key components include the smart motor and associated electronics, resonators, and supporting structural elements. It is shown that the smart motors have an acceptable life and performance. Resonator (or motion amplifier) designs are selected based on the final system requirement and vibration characteristics. All the components for a fully functional prototype are fabricated. The development program is on schedule. The last semi-annual report described the process of FE model validation and correlation with experimental data in terms of dynamic performance and predicted stresses. It also detailed efforts to make the supporting structure less important to system performance. Finally, an introduction to the dry application concept was presented. Since then, the design refinement phase was completed. This has resulted in a Smart Screen design that meets performance targets both in the dry condition and with taconite slurry flow using PZT motors. Furthermore, this system was successfully demonstrated for the DOE and partner companies at the Coleraine Mineral Research Laboratory in Coleraine, Minnesota.

  6. "Student Lab"-on-a-Chip: Integrating Low-Cost Microfluidics into Undergraduate Teaching Labs to Study Multiphase Flow Phenomena in Small Vessels

    ERIC Educational Resources Information Center

    Young, Edmond W. K.; Simmons, Craig A.

    2009-01-01

    We describe a simple, low-cost laboratory session to demonstrate the Fahraeus-Lindqvist effect, a microphase flow phenomenon that occurs in small blood vessels and alters the effective rheological properties of blood. The experiments are performed by flowing cells through microchannels fabricated by soft lithography and characterization of cell…

  7. Optic flow informs distance but not profitability for honeybees.

    PubMed

    Shafir, Sharoni; Barron, Andrew B

    2010-04-22

    How do flying insects monitor foraging efficiency? Honeybees (Apis mellifera) use optic flow information as an odometer to estimate distance travelled, but here we tested whether optic flow informs estimation of foraging costs also. Bees were trained to feeders in flight tunnels such that bees experienced the greatest optic flow en route to the feeder closest to the hive. Analyses of dance communication showed that, as expected, bees indicated the close feeder as being further, but they also indicated this feeder as the more profitable, and preferentially visited this feeder when given a choice. We show that honeybee estimates of foraging cost are not reliant on optic flow information. Rather, bees can assess distance and profitability independently and signal these aspects as separate elements of their dances. The optic flow signal is sensitive to the nature of the environment travelled by the bee, and is therefore not a good index of flight energetic costs, but it provides a good indication of distance travelled for purpose of navigation and communication, as long as the dancer and recruit travel similar routes. This study suggests an adaptive dual processing system in honeybees for communicating and navigating distance flown and for evaluating its energetic costs.

  8. Optic flow informs distance but not profitability for honeybees

    PubMed Central

    Shafir, Sharoni; Barron, Andrew B.

    2010-01-01

    How do flying insects monitor foraging efficiency? Honeybees (Apis mellifera) use optic flow information as an odometer to estimate distance travelled, but here we tested whether optic flow informs estimation of foraging costs also. Bees were trained to feeders in flight tunnels such that bees experienced the greatest optic flow en route to the feeder closest to the hive. Analyses of dance communication showed that, as expected, bees indicated the close feeder as being further, but they also indicated this feeder as the more profitable, and preferentially visited this feeder when given a choice. We show that honeybee estimates of foraging cost are not reliant on optic flow information. Rather, bees can assess distance and profitability independently and signal these aspects as separate elements of their dances. The optic flow signal is sensitive to the nature of the environment travelled by the bee, and is therefore not a good index of flight energetic costs, but it provides a good indication of distance travelled for purpose of navigation and communication, as long as the dancer and recruit travel similar routes. This study suggests an adaptive dual processing system in honeybees for communicating and navigating distance flown and for evaluating its energetic costs. PMID:20018787

  9. Applications of statistical physics to technology price evolution

    NASA Astrophysics Data System (ADS)

    McNerney, James

    Understanding how changing technology affects the prices of goods is a problem with both rich phenomenology and important policy consequences. Using methods from statistical physics, I model technology-driven price evolution. First, I examine a model for the price evolution of individual technologies. The price of a good often follows a power law equation when plotted against its cumulative production. This observation turns out to have significant consequences for technology policy aimed at mitigating climate change, where technologies are needed that achieve low carbon emissions at low cost. However, no theory adequately explains why technology prices follow power laws. To understand this behavior, I simplify an existing model that treats technologies as machines composed of interacting components. I find that the power law exponent of the price trajectory is inversely related to the number of interactions per component. I extend the model to allow for more realistic component interactions and make a testable prediction. Next, I conduct a case-study on the cost evolution of coal-fired electricity. I derive the cost in terms of various physical and economic components. The results suggest that commodities and technologies fall into distinct classes of price models, with commodities following martingales, and technologies following exponentials in time or power laws in cumulative production. I then examine the network of money flows between industries. This work is a precursor to studying the simultaneous evolution of multiple technologies. Economies resemble large machines, with different industries acting as interacting components with specialized functions. To begin studying the structure of these machines, I examine 20 economies with an emphasis on finding common features to serve as targets for statistical physics models. I find they share the same money flow and industry size distributions. 
I apply methods from statistical physics to show that industries cluster the same way according to industry type. Finally, I use these industry money flows to model the price evolution of many goods simultaneously, where network effects become important. I derive a prediction for which goods tend to improve most rapidly. The fastest-improving goods are those with the highest mean path lengths in the money flow network.
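
    The power-law price trajectory described above is straightforward to estimate from data: if price follows p = a·Q^(-b) against cumulative production Q, then log p is linear in log Q and the exponent is the negated slope. A sketch on synthetic data (our illustration; the paper's empirical fits are more involved):

```python
import math

# Fit the exponent of p = a * Q**(-b) by ordinary least squares in
# log-log coordinates, using exact synthetic data (b_true = 0.5).
Q = [1, 2, 4, 8, 16, 32]
b_true, a = 0.5, 100.0
prices = [a * q ** (-b_true) for q in Q]

xs = [math.log(q) for q in Q]
ys = [math.log(p) for p in prices]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
b_est = -slope  # recovers b_true on noise-free data
```

    On real price series the same regression gives the experience-curve exponent that the model relates to the number of interactions per component.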

  10. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    NASA Astrophysics Data System (ADS)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. 
We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of a similar magnitude to local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.

  11. Dual-scale Galerkin methods for Darcy flow

    NASA Astrophysics Data System (ADS)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is their overall computational cost, and many different strategies have been proposed to address it, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees of freedom of a less computationally expensive coarse-scale approximation are linked to the degrees of freedom of a base DG approximation. We show that the proposed approach always has similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of the stability and convergence of the proposed method, in addition to a study of its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  12. Operations management tools to be applied for textile

    NASA Astrophysics Data System (ADS)

    Maralcan, A.; Ilhan, I.

    2017-10-01

    In this paper, basic concepts of process analysis such as flow time, inventory, bottleneck, labour cost, and utilization are illustrated first. The effect of the bottleneck on business results is especially emphasized. In the next section, tools for productivity measurement are introduced and exemplified: the KPI (Key Performance Indicators) tree, OEE (Overall Equipment Effectiveness), and takt time. A KPI tree is a diagram on which we can visualize all the variables of an operation that drive financial results through cost and profit. OEE is a tool to measure the potential extra capacity of a piece of equipment or an employee. Takt time is a tool to set the process flow rate according to customer demand. The KPI tree is studied through the whole process, while OEE is exemplified for a stenter frame machine, which is the most important machine (and usually the bottleneck) and the most expensive investment in a finishing plant. Takt time is exemplified for the quality control department. Finally, the quality tools six sigma, control charts, and jidoka are introduced. Six sigma is a tool to measure process capability and thereby the probability of a defect. A control chart is a powerful tool to monitor the process. The idea of jidoka (detect, stop, and alert) is about alerting people that there is a problem in the process.
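
    The two productivity measures named above follow standard definitions: OEE is the product of availability, performance, and quality, and takt time is the available production time divided by customer demand. A small sketch with made-up numbers (not from the paper):

```python
# OEE and takt time, using the standard textbook definitions.
def oee(availability, performance, quality):
    return availability * performance * quality

def takt_time(available_time, customer_demand):
    # time available per unit demanded, e.g. seconds per piece
    return available_time / customer_demand

stenter_oee = oee(0.90, 0.80, 0.95)   # 0.684: the machine delivers 68.4% of its theoretical capacity
takt = takt_time(8 * 3600, 960)       # 8-hour shift for 960 pieces -> 30 seconds per piece
```

    In practice the process flow rate would then be paced so that one unit is completed roughly every takt interval.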

  13. Minimal Residual Disease Evaluation in Childhood Acute Lymphoblastic Leukemia: An Economic Analysis

    PubMed Central

    Gajic-Veljanoski, O.; Pham, B.; Pechlivanoglou, P.; Krahn, M.; Higgins, Caroline; Bielecki, Joanna

    2016-01-01

    Background: Minimal residual disease (MRD) testing by higher-performance techniques such as flow cytometry and polymerase chain reaction (PCR) can be used to detect the proportion of remaining leukemic cells in bone marrow or peripheral blood during and after the first phases of chemotherapy in children with acute lymphoblastic leukemia (ALL). The results of MRD testing are used to reclassify these patients and guide changes in treatment according to their future risk of relapse. We conducted a systematic review of the economic literature, a cost-effectiveness analysis, and a budget-impact analysis to ascertain the cost-effectiveness and economic impact of MRD testing by flow cytometry for management of childhood precursor B-cell ALL in Ontario. Methods: A systematic literature search (1998–2014) identified studies that examined the incremental cost-effectiveness of MRD testing by either flow cytometry or PCR. We developed a lifetime state-transition (Markov) microsimulation model to quantify the cost-effectiveness of MRD testing followed by risk-directed therapy relative to no MRD testing, and to estimate its marginal effect on health outcomes and on costs. Model input parameters were based on the literature, expert opinion, and data from the Pediatric Oncology Group of Ontario Networked Information System. Using predictions from our Markov model, we estimated the 1-year cost burden of MRD testing versus no testing and forecasted its economic impact over 3 and 5 years. Results: In a base-case cost-effectiveness analysis, compared with no testing, MRD testing by flow cytometry at the end of induction and consolidation was associated with an increased discounted survival of 0.0958 quality-adjusted life-years (QALYs) and increased discounted costs of $4,180, yielding an incremental cost-effectiveness ratio (ICER) of $43,613/QALY gained. After accounting for parameter uncertainty, MRD testing was associated with an ICER of $50,249/QALY gained.
In the budget-impact analysis, the 1-year expenditure for MRD testing by flow cytometry in newly diagnosed patients with precursor B-cell ALL was estimated at $340,760. We forecasted that the province would have to pay approximately $1.3 million over 3 years and $2.4 million over 5 years for MRD testing by flow cytometry in this population. Conclusions: Compared with no testing, MRD testing by flow cytometry in newly diagnosed patients with precursor B-cell ALL represents good value for money at commonly used willingness-to-pay thresholds of $50,000/QALY and $100,000/QALY. PMID:27099644
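
    The ICER above is simply the incremental cost divided by the incremental effectiveness. Recomputing from the rounded figures reported in the abstract gives roughly $43.6k/QALY; the reported $43,613 presumably reflects unrounded model outputs.

```python
# ICER = incremental cost / incremental effectiveness (QALYs gained)
delta_cost = 4180.0     # incremental discounted cost, $ (from the abstract)
delta_qaly = 0.0958     # incremental discounted QALYs (from the abstract)
icer = delta_cost / delta_qaly   # ~ $43,633/QALY from these rounded inputs
```

    Since this falls below the $50,000/QALY willingness-to-pay threshold, the base case is judged cost-effective.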

  14. Minimal Residual Disease Evaluation in Childhood Acute Lymphoblastic Leukemia: An Economic Analysis.

    PubMed

    2016-01-01

    Minimal residual disease (MRD) testing by higher-performance techniques such as flow cytometry and polymerase chain reaction (PCR) can be used to detect the proportion of remaining leukemic cells in bone marrow or peripheral blood during and after the first phases of chemotherapy in children with acute lymphoblastic leukemia (ALL). The results of MRD testing are used to reclassify these patients and guide changes in treatment according to their future risk of relapse. We conducted a systematic review of the economic literature, a cost-effectiveness analysis, and a budget-impact analysis to ascertain the cost-effectiveness and economic impact of MRD testing by flow cytometry for management of childhood precursor B-cell ALL in Ontario. A systematic literature search (1998-2014) identified studies that examined the incremental cost-effectiveness of MRD testing by either flow cytometry or PCR. We developed a lifetime state-transition (Markov) microsimulation model to quantify the cost-effectiveness of MRD testing followed by risk-directed therapy relative to no MRD testing, and to estimate its marginal effect on health outcomes and on costs. Model input parameters were based on the literature, expert opinion, and data from the Pediatric Oncology Group of Ontario Networked Information System. Using predictions from our Markov model, we estimated the 1-year cost burden of MRD testing versus no testing and forecasted its economic impact over 3 and 5 years. In a base-case cost-effectiveness analysis, compared with no testing, MRD testing by flow cytometry at the end of induction and consolidation was associated with an increased discounted survival of 0.0958 quality-adjusted life-years (QALYs) and increased discounted costs of $4,180, yielding an incremental cost-effectiveness ratio (ICER) of $43,613/QALY gained. After accounting for parameter uncertainty, MRD testing was associated with an ICER of $50,249/QALY gained.
In the budget-impact analysis, the 1-year expenditure for MRD testing by flow cytometry in newly diagnosed patients with precursor B-cell ALL was estimated at $340,760. We forecasted that the province would have to pay approximately $1.3 million over 3 years and $2.4 million over 5 years for MRD testing by flow cytometry in this population. Compared with no testing, MRD testing by flow cytometry in newly diagnosed patients with precursor B-cell ALL represents good value for money at commonly used willingness-to-pay thresholds of $50,000/QALY and $100,000/QALY.

  15. Lattice Boltzmann computation of creeping fluid flow in roll-coating applications

    NASA Astrophysics Data System (ADS)

    Rajan, Isac; Kesana, Balashanker; Perumal, D. Arumuga

    2018-04-01

    Lattice Boltzmann Method (LBM) has advanced as a class of Computational Fluid Dynamics (CFD) methods used to solve complex fluid systems and heat transfer problems, and has increasingly attracted the interest of researchers in computational physics for solving challenging problems of industrial and academic importance. In this study, LBM is applied to simulate the creeping fluid flow phenomena commonly encountered in manufacturing technologies, in particular the fluid flow associated with the "meniscus roll coating" application. This prevalent industrial problem, encountered in polymer processing and thin film coating applications, is modelled as a standard lid-driven cavity problem to which creeping flow analysis is applied. This incompressible viscous flow problem is studied for various speed ratios (the ratio of upper to lower lid speed) in two different configurations of lid movement: parallel and anti-parallel wall motion. The flow exhibits interesting patterns which will help in the design of roll coaters.
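
    A minimal sketch of the D2Q9 ingredients such an LBM cavity solver is built on: the nine lattice weights and velocities, and the BGK equilibrium distribution. A full solver would repeatedly collide populations toward this equilibrium and stream them between nodes; the numbers here are illustrative, not from the paper.

```python
# D2Q9 lattice: weights and discrete velocities
W = [4/9] + [1/9] * 4 + [1/36] * 4
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]

def f_eq(rho, ux, uy):
    """BGK equilibrium populations for density rho and velocity (ux, uy)."""
    usq = ux * ux + uy * uy
    out = []
    for w, (cx, cy) in zip(W, C):
        cu = cx * ux + cy * uy
        out.append(w * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return out

# e.g. fluid dragged along by a lid moving at lattice speed 0.05
feq = f_eq(1.0, 0.05, 0.0)
```

    By construction the equilibrium conserves mass (the populations sum to rho) and momentum, which is what makes the collide-and-stream update recover the Navier-Stokes equations in the low-Mach limit.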

  16. Development of a subsurface gas flow probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutler, R.P.; Ballard, S.; Barker, G.T.

    1997-04-01

    This report describes a project to develop a flow probe to monitor gas movement in the vadose zone due to passive venting or active remediation efforts such as soil vapor extraction. 3-D and 1-D probes were designed, fabricated, tested in known flow fields under laboratory conditions, and field tested. The 3-D probes were based on technology developed for ground water flow monitoring. The probes gave excellent agreement with measured air velocities in the laboratory tests. Data processing software developed for ground water flow probes was modified for use with air flow and to accommodate various probe designs. Modifications were made to decrease the cost of the probes, including developing a downhole multiplexer. Modeling indicated problems with flow channeling due to the mode of deployment, so additional testing was conducted and modifications were made to the probe and to the deployment methods. The probes were deployed at three test sites: a large outdoor test tank, a brief vapor extraction test at the Chemical Waste landfill, and an active remediation site at a local gas station. The data from the field tests varied markedly from the laboratory test data. All of the major events, such as vapor extraction system turn-on and turn-off as well as changes in the flow rate, could be seen in the data. However, there were long-term trends in the data which were much larger than the velocity signals, making it difficult to determine accurate air velocities. These long-term trends may be due to changes in soil moisture content and seasonal ground temperature variations.

  17. Beamed Energy and the Economics of Space Based Solar Power

    NASA Astrophysics Data System (ADS)

    Keith Henson, H.

    2010-05-01

    For space-based solar power to replace fossil fuel, it must sell for 1-2 cents per kWh. Reaching this sales price requires a launch cost to GEO of ~$100/kg. To reach this cost figure at a throughput of 100 tonnes/hour, a two-stage route to GEO is proposed in which a Skylon rocket-plane first stage provides 5 km/s and a laser stage provides 6.64 km/s. The combination appears to reduce the cost to GEO to under $100/kg at a materials flow rate of ~1 million tonnes per year, enough to initially construct 200 GW per year of power satellites. An extended pro forma business case indicates that the peak investment to profitability might be ~$65 billion. Over a 25-year period, production rises to two TW per year, undercutting and replacing most other sources of energy. Energy on this scale solves other supply problems such as water and liquid fuels. It could even allow removal of CO2 from the air and storage of carbon as synthetic oil in empty oil fields.
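
    A quick arithmetic check of the abstract's figures (illustrative only): the two stage delta-v contributions sum to 11.64 km/s, and continuous operation at 100 tonnes/hour gives 876,000 tonnes/year, consistent with the quoted "~1 million tonnes per year".

```python
# total delta-v of the proposed two-stage route to GEO
dv_total = 5.0 + 6.64              # km/s: Skylon first stage + laser stage

# annual mass throughput assuming continuous operation at 100 tonne/hour
tonnes_per_year = 100 * 24 * 365   # = 876,000 t/yr, i.e. ~1 million t/yr
```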

  18. Development of cost-effective surfactant flooding technology, Quarterly report, October 1995--December 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Sepehrnoori, K.

    1995-12-31

    The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we have continued working on Task 2 to optimize surfactant flooding design and have added economic analysis to the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision-making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.
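
    The DCF criteria named above can be sketched in a few lines: NPV discounts each cash flow to the present, and IRR is the discount rate at which NPV crosses zero (found here by bisection). The cash flows are hypothetical, not from the report.

```python
# Minimal DCF sketch: NPV at a given discount rate, and IRR by bisection.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    # assumes NPV is positive at `lo`, negative at `hi`, with one sign change
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# year-0 outlay followed by three equal annual inflows
flows = [-1000.0, 400.0, 400.0, 400.0]
project_irr = irr(flows)   # ~9.7% for these illustrative numbers
```

    The payback period, the third criterion mentioned, is simply the first year in which cumulative (undiscounted or discounted) cash flow turns positive.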

  19. Low cost hydrogen/novel membrane technology for hydrogen separation from synthesis gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-02-01

    To make the coal-to-hydrogen route economically attractive, improvements are being sought in each step of the process: coal gasification, the water-carbon monoxide shift reaction, and hydrogen separation. This report addresses the use of membranes in the hydrogen separation step. The separation of hydrogen from synthesis gas is a major cost element in the manufacture of hydrogen from coal. Separation by membranes is an attractive, new, and still largely unexplored approach to the problem. Membrane processes are inherently simple and efficient and often have lower capital and operating costs than conventional processes. In this report, current and future trends in hydrogen production and use are first summarized. Methods of producing hydrogen from coal are then discussed, with particular emphasis on the Texaco entrained flow gasifier and on current methods of separating hydrogen from this gas stream. The potential for membrane separations in the process is then examined. In particular, the use of membranes for H{sub 2}/CO{sub 2}, H{sub 2}/CO, and H{sub 2}/N{sub 2} separations is discussed. 43 refs., 14 figs., 6 tabs.

  20. NASA/MSFC's Calculation for Test Case 1a of ATAC-FSDC Workshop on After-body and Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.

    2006-01-01

    Mr. Ruf of NASA/MSFC executed the CHEM computational fluid dynamics (CFD) code to provide a prediction for test case 1a of the ATAC-FSDC Workshop on After-body and Nozzle Flows. CHEM is used extensively at MSFC for a wide variety of fluid dynamic problems, including injector element flows, nozzle flows, feed line flows, turbomachinery flows, solid rocket motor internal flows, and plume-vehicle flow interactions.

  1. Mathematical modeling of swirled flows in industrial applications

    NASA Astrophysics Data System (ADS)

    Dekterev, A. A.; Gavrilov, A. A.; Sentyabov, A. V.

    2018-03-01

    Swirling flows are widely used in technological devices and are characterized by a wide range of flow regimes. 3D mathematical modeling of such flows is widely employed in research and design. For correct mathematical modeling of such flows, it is necessary to use turbulence models that take into account the important features of the flow. Based on experience in the computational modeling of a wide class of problems with swirling flows, recommendations on the use of turbulence models for calculating applied problems are proposed.

  2. Analytical Study on Thermal and Mechanical Design of Printed Circuit Heat Exchanger

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Su-Jong; Sabharwall, Piyush; Kim, Eung-Soo

    2013-09-01

    The analytical methodologies for the thermal design, mechanical design, and cost estimation of printed circuit heat exchangers are presented. Three flow arrangements are taken into account: parallel flow, countercurrent flow, and crossflow. For each flow arrangement, the analytical solution for the temperature profile of the heat exchanger is introduced. The size and cost of printed circuit heat exchangers for advanced small modular reactors, which employ various coolants such as sodium, molten salts, helium, and water, are also presented.
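
    For the two simplest arrangements mentioned, the analytical temperature solutions reduce to the standard effectiveness-NTU relations. The sketch below uses the textbook formulas (not the report's code), where NTU is the number of transfer units and cr the heat-capacity-rate ratio:

```python
import math

def eff_counterflow(ntu, cr):
    """Effectiveness of a countercurrent heat exchanger, 0 <= cr <= 1."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)          # balanced-flow limiting case
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

def eff_parallel(ntu, cr):
    """Effectiveness of a parallel-flow heat exchanger."""
    return (1.0 - math.exp(-ntu * (1.0 + cr))) / (1.0 + cr)
```

    As expected, for the same NTU and capacity ratio the countercurrent arrangement always achieves the higher effectiveness, which is why it usually yields the smallest core for a given duty.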

  3. Contractors perspective for critical factors of cost overrun in highway projects of Sindh, Pakistan

    NASA Astrophysics Data System (ADS)

    Sohu, Samiullah; Abdullah, Abd Halid; Nagapan, Sasitharan; Fattah, Abdul; Ullah, Kaleem; Kumar, Kanesh

    2017-10-01

    The construction industry of Pakistan creates a number of employment opportunities and plays a key role in the economic development of the country. However, this industry has a serious issue of cost overrun in all construction projects, especially in the construction of highway projects. Cost overrun is a serious and critical issue which negatively impacts construction practitioners, because it exceeds not only the approved budget but also the approved schedule of the project. The main objective of this study is to find the critical factors causing cost overrun in highway projects of Sindh according to contractors' perspectives. A deep literature review was carried out and a total of 64 factors of cost overrun were identified. To achieve the objective, a questionnaire was designed and distributed among 16 selected respondents with more than 20 years of experience in the construction of highway projects. The analysis found that the most critical factors of cost overrun, in order of importance, include financial and cash flow difficulties faced by the contractor, frequent changes in design, changes in the price of materials, poor planning by the client, change in the scope of the project, change in the specification of materials, and delay in taking decisions. This study will assist contractors in narrowing down the critical factors that lead to cost overrun, and therefore in preparing ways to mitigate these problems in the construction of highway projects of Sindh province.
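
    One common way such questionnaire factors are ranked in construction-management studies is a relative importance index, RII = sum of scores / (max scale value x number of respondents). The paper does not state its exact index, so the function and numbers below are purely illustrative.

```python
# Hypothetical ranking of cost-overrun factors by relative importance index.
def rii(scores, max_scale=5):
    """RII in [0, 1]: sum of Likert scores over the maximum possible sum."""
    return sum(scores) / (max_scale * len(scores))

factors = {
    "cash flow difficulties":  [5, 5, 4, 5],
    "frequent design changes": [4, 5, 4, 4],
    "material price changes":  [4, 4, 4, 3],
}
ranked = sorted(factors, key=lambda f: rii(factors[f]), reverse=True)
```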

  4. Flow range enhancement by secondary flow effect in low solidity circular cascade diffusers

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Daisaku; Tun, Min Thaw; Mizokoshi, Kanata; Kishikawa, Daiki

    2014-08-01

    A high pressure ratio and wide operating range are highly required for compressors and blowers. The technical design issue is to suppress flow separation at small flow rates without deteriorating the efficiency at the design flow rate. Numerical simulation is very effective in the design procedure; however, its cost is generally high during the practical design process, and it is difficult to identify the optimal design, which combines many parameters. A multi-objective optimization technique has been proposed for solving this problem in the practical design process. In this study, a Low Solidity circular cascade Diffuser (LSD) in a centrifugal blower is successfully designed by means of a multi-objective optimization technique. An optimization code with a meta-model assisted evolutionary algorithm is used with the commercial CFD code ANSYS-CFX. The optimization aims at improving the static pressure coefficient at the design point and at the low flow rate condition while constraining the slope of the lift coefficient curve. Moreover, a small tip clearance of the LSD blade was applied in order to activate and stabilize the secondary flow effect at the small flow rate condition. The optimized LSD blade has an extended operating range of 114% towards smaller flow rates as compared to the baseline design, without deteriorating the diffuser pressure recovery at the design point. The diffuser pressure rise and operating flow range of the optimized LSD blade are experimentally verified by an overall performance test. The detailed flow in the diffuser is also confirmed by means of a Particle Image Velocimeter (PIV). Secondary flow is clearly captured by PIV, and it spreads to the whole area of the LSD blade pitch.
It is found that the optimized LSD blade shows good improvement of the blade loading over the whole operating range, while at small flow rates the flow separation on the LSD blade is successfully suppressed by the secondary flow effect.

  5. Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model

    NASA Astrophysics Data System (ADS)

    Yang, Yuefang; Gan, Chunhui; Shen, Tingting

    2017-05-01

    In the study of the configuration of tankers in a chemical logistics park, the minimum cost maximum flow model is adopted. First, the transport capacity of the park's loading and unloading areas and the transportation demand for dangerous goods are taken as the constraint conditions of the model; then the transport arc capacities, transport arc flows, and transport arc edge weights are determined in the transportation network diagram; finally, the model is solved by software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for the tanker management of railway transportation of dangerous goods in chemical logistics parks.
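
    To make the underlying model concrete, here is a minimal successive-shortest-path implementation of minimum cost maximum flow (Bellman-Ford on the residual graph). The toy tanker-assignment network at the bottom is an illustration only, not the paper's actual network or software.

```python
class MinCostMaxFlow:
    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]  # edge: [to, capacity, cost, index of reverse edge]

    def add_edge(self, u, v, cap, cost):
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])

    def solve(self, s, t):
        flow = cost = 0
        while True:
            # Bellman-Ford: cheapest augmenting path in the residual graph
            dist = [float("inf")] * self.n
            dist[s] = 0
            parent = [None] * self.n
            updated = True
            while updated:
                updated = False
                for u in range(self.n):
                    if dist[u] == float("inf"):
                        continue
                    for i, (v, cap, c, _) in enumerate(self.graph[u]):
                        if cap > 0 and dist[u] + c < dist[v]:
                            dist[v] = dist[u] + c
                            parent[v] = (u, i)
                            updated = True
            if dist[t] == float("inf"):
                return flow, cost
            # bottleneck capacity along the path, then push flow along it
            push, v = float("inf"), t
            while v != s:
                u, i = parent[v]
                push = min(push, self.graph[u][i][1])
                v = u
            v = t
            while v != s:
                u, i = parent[v]
                self.graph[u][i][1] -= push
                self.graph[v][self.graph[u][i][3]][1] += push
                v = u
            flow += push
            cost += push * dist[t]

# Toy network: source 0, two loading areas (1, 2), sink 3.
net = MinCostMaxFlow(4)
net.add_edge(0, 1, 2, 1)   # capacity 2 tankers, unit cost 1
net.add_edge(0, 2, 2, 2)
net.add_edge(1, 3, 2, 1)
net.add_edge(2, 3, 2, 1)
max_flow, min_cost = net.solve(0, 3)
```

    In the paper's setting, the arc capacities would encode loading-area throughput, the required flow the transport demand, and the arc costs the routing or handling costs being minimized.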

  6. Finite element flow analysis; Proceedings of the Fourth International Symposium on Finite Element Methods in Flow Problems, Chuo University, Tokyo, Japan, July 26-29, 1982

    NASA Astrophysics Data System (ADS)

    Kawai, T.

    Among the topics discussed are the application of FEM to nonlinear free surface flow, Navier-Stokes shallow water wave equations, incompressible viscous flows and weather prediction, the mathematical analysis and characteristics of FEM, penalty function FEM, convective, viscous, and high Reynolds number FEM analyses, the solution of time-dependent, three-dimensional and incompressible Navier-Stokes equations, turbulent boundary layer flow, FEM modeling of environmental problems over complex terrain, and FEM's application to thermal convection problems and to the flow of polymeric materials in injection molding processes. Also covered are FEMs for compressible flows, including boundary layer flows and transonic flows, hybrid element approaches for wave hydrodynamic loadings, FEM acoustic field analyses, and FEM treatment of free surface flow, shallow water flow, seepage flow, and sediment transport. Boundary element methods and FEM computational technique topics are also discussed. For individual items see A84-25834 to A84-25896

  7. Towards a generalized computational fluid dynamics technique for all Mach numbers

    NASA Technical Reports Server (NTRS)

    Walters, R. W.; Slack, D. C.; Godfrey, A. G.

    1993-01-01

    Currently there exists no single unified approach for efficiently and accurately solving computational fluid dynamics (CFD) problems across the Mach number regime, from truly low speed incompressible flows to hypersonic speeds. Several CFD codes have evolved into sophisticated prediction tools with a wide variety of features, including multiblock capabilities and generalized chemistry and thermodynamics models. However, as these codes evolve, the demand placed on the end user also increases, simply because of the myriad of features incorporated into them. In order to solve a wide range of problems, a user may need several codes and must be familiar with the intricacies of each code and its rather complicated input files. Moreover, the cost of training users and maintaining several codes becomes prohibitive. The objective of the current work is to extend the compressible, characteristic-based, thermochemical nonequilibrium Navier-Stokes code GASP to very low speed flows and simultaneously improve convergence at all speeds. Before this work began, the practical speed range of GASP was Mach numbers on the order of 0.1 and higher. In addition, a number of new techniques have been developed for more accurate physical and numerical modeling. The primary focus has been on the development of optimal preconditioning techniques for the Euler and Navier-Stokes equations with general finite-rate chemistry models and both equilibrium and nonequilibrium thermodynamics models. We began with the work of Van Leer, Lee, and Roe for inviscid, one-dimensional perfect gases and extended their approach to three-dimensional reacting flows.
The basic steps required to accomplish this task were a transformation to stream-aligned coordinates, the formulation of the preconditioning matrix, incorporation into both explicit and implicit temporal integration schemes, and modification of the numerical flux formulae. In addition, we improved the convergence rate of the implicit time integration schemes in GASP through the use of inner iteration strategies and of GMRES (Generalized Minimal Residual), which belongs to the class of algorithms referred to as Krylov subspace iteration. Finally, we significantly improved the practical utility of GASP through the addition of mesh sequencing, a technique in which computations begin on a coarse grid and the solution is interpolated onto successively finer grids. The fluid dynamic problems of interest to the propulsion community involve complex flow physics spanning different velocity regimes and possibly involving chemical reactions. This class of problems results in widely disparate time scales, causing numerical stiffness. Even in the absence of chemical reactions, eigenvalue stiffness manifests itself at transonic and very low speed flows, which can be quantified by the large condition number of the system and is evidenced by slow convergence rates. This results in the need for thorough numerical analysis and subsequent implementation of sophisticated numerical techniques for these difficult yet practical problems. As a result of this work, we have been able to extend the range of applicability of compressible codes to very low speed inviscid flows (M = 0.001) and reacting flows.
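
    The eigenvalue stiffness mentioned above has a simple one-dimensional illustration: the inviscid wave speeds are u, u+c, and u-c, so the ratio of fastest to slowest acoustic/convective speed scales like (|u|+c)/|u| = 1 + 1/M, which blows up as the Mach number M approaches zero. This sketch is a back-of-the-envelope estimate, not GASP's actual conditioning analysis.

```python
# Condition-number estimate for the 1-D inviscid system, speeds scaled by c.
def condition_estimate(mach):
    return (mach + 1.0) / mach   # ~ (|u| + c) / |u| = 1 + 1/M

low_speed = condition_estimate(0.001)   # ~1001: severely stiff, slow convergence
transonic = condition_estimate(1.0)     # 2: well conditioned
```

    Preconditioning rescales the eigenvalues so that this ratio stays O(1) at all Mach numbers, which is what restores convergence at M = 0.001.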

  8. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
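
    An h-refinement study like the one described demonstrates the scheme's order of accuracy by comparing errors on successively refined grids: with errors e1, e2 at grid spacings h1, h2, the observed order is p = log(e1/e2) / log(h1/h2). The numbers below are illustrative, not from the paper's study.

```python
import math

def observed_order(e1, e2, h1, h2):
    """Observed order of accuracy from two grid levels."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# halving the spacing reduces the error 16x -> the scheme behaves as 4th order
p = observed_order(1.6e-3, 1.0e-4, 1.0, 0.5)
```

    "Super-accuracy" in the FR context refers to observing p higher than the nominal order of the polynomial basis on such studies.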

  9. LSA Large Area Silicon Sheet Task Continuous Czochralski Process Development

    NASA Technical Reports Server (NTRS)

    Rea, S. N.

    1979-01-01

    A commercial Czochralski crystal growing furnace was converted to a continuous growth facility by installation of a small, in-situ premelter with attendant silicon storage and transport mechanisms. Using a vertical, cylindrical graphite heater containing a small fused quartz test tube liner from which the molten silicon flowed out the bottom, approximately 83 cm of nominal 5 cm diameter crystal was grown with continuous melt addition furnished by the test tube premelter. High-perfection crystal was not obtained, however, due primarily to particulate contamination of the melt. A major contributor to the particulate problem was severe silicon oxide buildup on the premelter, which would ultimately drop into the primary melt. Elimination of this oxide buildup will require extensive study and experimentation, and the ultimate success of continuous Czochralski growth depends on a successful solution to this problem. Economically, continuous Czochralski growth meets near-term cost goals for silicon sheet material.

  10. Development of an X-ray imaging system to prevent scintillator degradation for white synchrotron radiation.

    PubMed

    Zhou, Tunhe; Wang, Hongchang; Connolley, Thomas; Scott, Steward; Baker, Nick; Sawhney, Kawal

    2018-05-01

    The high flux of the white X-ray beams from third-generation synchrotron light sources can significantly benefit the development of high-speed X-ray imaging, but can also bring technical challenges to existing X-ray imaging systems. One prevalent problem is that the image quality deteriorates because of dust particles accumulating on the scintillator screen during exposure to intense X-ray radiation. Here, this problem has been solved by embedding the scintillator in a flowing inert-gas environment. It is also shown that the detector maintains the quality of the captured images even after days of X-ray exposure. This modification is cost-efficient and easy to implement. Representative examples of applications using the X-ray imaging system are also provided, including fast tomography and multimodal phase-contrast imaging for biomedical and geological samples.

  11. A Two-moment Radiation Hydrodynamics Module in ATHENA Using a Godunov Method

    NASA Astrophysics Data System (ADS)

    Skinner, M. A.; Ostriker, E. C.

    2013-04-01

    We describe a module for the Athena code that solves the grey equations of radiation hydrodynamics (RHD) using a local variable Eddington tensor (VET) based on the M1 closure of the two-moment hierarchy of the transfer equation. The variables are updated via a combination of explicit Godunov methods to advance the gas and radiation variables including the non-stiff source terms, and a local implicit method to integrate the stiff source terms. We employ the reduced speed of light approximation (RSLA) with subcycling of the radiation variables in order to reduce computational costs. The streaming and diffusion limits are well-described by the M1 closure model, and our implementation shows excellent behavior for problems containing both regimes simultaneously. Our operator-split method is ideally suited for problems with a slowly-varying radiation field and dynamical gas flows, in which the effect of the RSLA is minimal.
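
    The diffusion and streaming limits mentioned above are connected by the closure's scalar Eddington factor. Assuming the standard Levermore form of the M1 closure (the abstract does not spell out its exact expression), the factor interpolates between 1/3 (optically thick, diffusion) and 1 (free streaming) as a function of the reduced flux f = |F|/(cE):

```python
import math

def eddington_factor(f):
    """M1 closure (Levermore form): chi(f) = (3 + 4 f^2) / (5 + 2 sqrt(4 - 3 f^2))."""
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))

chi_diffusion = eddington_factor(0.0)   # -> 1/3, isotropic / optically thick limit
chi_streaming = eddington_factor(1.0)   # -> 1, free-streaming limit
```

    Recovering both limiting values from a single local formula is what lets the module handle problems containing both regimes simultaneously without a nonlocal VET solve.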

  12. An Adjoint-Based Approach to Study a Flexible Flapping Wing in Pitching-Rolling Motion

    NASA Astrophysics Data System (ADS)

    Jia, Kun; Wei, Mingjun; Xu, Min; Li, Chengyu; Dong, Haibo

    2017-11-01

    Flapping-wing aerodynamics, with advantages in agility, efficiency, and hovering capability, has been the choice of many flyers in nature. However, the study of bio-inspired flapping-wing propulsion is often hindered by the problem's large control space with different wing kinematics and deformation. The adjoint-based approach reduces largely the computational cost to a feasible level by solving an inverse problem. Facing the complication from moving boundaries, non-cylindrical calculus provides an easy extension of traditional adjoint-based approach to handle the optimization involving moving boundaries. The improved adjoint method with non-cylindrical calculus for boundary treatment is first applied on a rigid pitching-rolling plate, then extended to a flexible one with active deformation to further increase its propulsion efficiency. The comparison of flow dynamics with the initial and optimal kinematics and deformation provides a unique opportunity to understand the flapping-wing mechanism. Supported by AFOSR and ARL.

  13. On nonlinear finite element analysis in single-, multi- and parallel-processors

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R.; Islam, M.; Salama, M.

    1982-01-01

    Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem; therefore, the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e., partitioning), are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable-size substructures to parallel processors is exploited. Under Cholesky-type factorization schemes, the efficiency of parallel processing is shown to decrease due to occasionally shared data, just as it does due to shared facilities.

  14. A normally-closed piezoelectric micro-valve with flexible stopper

    NASA Astrophysics Data System (ADS)

    Chen, Song; Lu, Song; Liu, Yong; Wang, Jiantao; Tian, Xiaochao; Liu, Guojun; Yang, Zhigang

    2016-04-01

    In the field of controlled drug delivery systems, many problems remain with previously reported micro-valves, such as small opening height, unsatisfactory particle tolerance and high cost. To solve these problems, a novel normally-closed piezoelectric micro-valve is presented in this paper. The micro-valve is driven by a circular unimorph piezoelectric vibrator, and a natural rubber membrane with high elasticity is used as the valve stopper. The small axial displacement of the piezoelectric vibrator is converted into a large stroke of the valve stopper by a hydraulic amplification mechanism. Experiments indicate that the maximum hydraulic amplification ratio is up to 14, and that the cut-off pressure of the micro-valve is 39 kPa in the absence of a working voltage. The presented micro-valve has a large flow control range (0 to 8.75 mL/min).

  15. Development of an X-ray imaging system to prevent scintillator degradation for white synchrotron radiation

    PubMed Central

    Zhou, Tunhe; Wang, Hongchang; Scott, Steward

    2018-01-01

    The high flux of the white X-ray beams from third-generation synchrotron light sources can significantly benefit the development of high-speed X-ray imaging, but can also bring technical challenges to existing X-ray imaging systems. One prevalent problem is that the image quality deteriorates because of dust particles accumulating on the scintillator screen during exposure to intense X-ray radiation. Here, this problem has been solved by embedding the scintillator in a flowing inert-gas environment. It is also shown that the detector maintains the quality of the captured images even after days of X-ray exposure. This modification is cost-efficient and easy to implement. Representative examples of applications using the X-ray imaging system are also provided, including fast tomography and multimodal phase-contrast imaging for biomedical and geological samples. PMID:29714191

  16. A RADIATION TRANSFER SOLVER FOR ATHENA USING SHORT CHARACTERISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Shane W.; Stone, James M.; Jiang Yanfei

    2012-03-01

    We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well-known and well-tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.
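
    The formal solution underlying the method of short characteristics can be illustrated along a single ray. The sketch below is a hypothetical 1-D version (the module works on multidimensional Cartesian grids with upwind interpolation); it integrates dI/dtau = S - I cell by cell, assuming a constant source function per cell.

```python
import math

def sweep_short_characteristics(I_in, S, dtau):
    """March the RT formal solution dI/dtau = S - I along one ray,
    one cell at a time, for piecewise-constant source functions.
    Illustrative 1-D sketch, not the Athena short-characteristics
    module."""
    I = I_in
    out = []
    for S_c, dt in zip(S, dtau):
        att = math.exp(-dt)
        I = I * att + S_c * (1.0 - att)  # exact for constant S over the cell
        out.append(I)
    return out
```

    In the optically thick limit the intensity relaxes to the local source function, while in the thin limit it carries the incident intensity through; these are the two behaviors an accelerated Lambda iteration must reconcile when scattering couples S to I.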

  17. Simulation software: engineer processes before reengineering.

    PubMed

    Lepley, C J

    2001-01-01

    People make decisions all the time using intuition. But what happens when you are asked: "Are you sure your predictions are accurate? How much will a mistake cost? What are the risks associated with this change?" Once a new process is engineered, it is difficult to analyze what would have been different if other options had been chosen. Simulating a process can help senior clinical officers solve complex patient flow problems and avoid wasted efforts. Simulation software can give you the data you need to make decisions. The author introduces concepts, methodologies, and applications of computer aided simulation to illustrate their use in making decisions to improve workflow design.

  18. Viscous flow computations using a second-order upwind differencing scheme

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.

    1988-01-01

    In the present computations of a wide range of fluid flow problems by means of the Navier-Stokes equations in primitive variables, a mixed second-order upwinding scheme approximates the convective terms of the transport equations, and the scheme's accuracy is verified for convection-dominated high-Reynolds-number flow problems. An adaptive dissipation scheme is used as a monotonic mechanism for capturing supersonic shocked flows. Many benchmark fluid flow problems, including compressible and incompressible, laminar and turbulent cases over a wide range of Mach and Reynolds numbers, are studied to verify the accuracy and robustness of this numerical method.
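
    As a minimal illustration of second-order upwinding, consider the 1-D linear advection equation u_t + a u_x = 0 with a > 0. The Beam-Warming-type update below is one common second-order upwind discretization; it is a didactic stand-in, not the paper's mixed scheme for the Navier-Stokes equations.

```python
def beam_warming_step(u, nu):
    """One step of a second-order upwind (Beam-Warming type) scheme for
    u_t + a u_x = 0 with a > 0, where nu = a*dt/dx is the Courant
    number (stable for 0 <= nu <= 2).  Periodic boundaries: Python's
    negative indexing wraps u[i-1] and u[i-2] around the ends."""
    return [u[i]
            - 0.5 * nu * (3.0 * u[i] - 4.0 * u[i - 1] + u[i - 2])
            + 0.5 * nu ** 2 * (u[i] - 2.0 * u[i - 1] + u[i - 2])
            for i in range(len(u))]
```

    For nu = 1 the update reduces to an exact one-cell shift, a quick sanity check; at other Courant numbers the scheme is second-order accurate but, being linear, admits oscillations near shocks, which is why an adaptive dissipation mechanism is added.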

  19. On the theory of oscillating airfoils of finite span in subsonic compressible flow

    NASA Technical Reports Server (NTRS)

    Reissner, Eric

    1950-01-01

    The problem of oscillating lifting surface of finite span in subsonic compressible flow is reduced to an integral equation. The kernel of the integral equation is approximated by a simpler expression, on the basis of the assumption of sufficiently large aspect ratio. With this approximation the double integral occurring in the formulation of the problem is reduced to two single integrals, one of which is taken over the chord and the other over the span of the lifting surface. On the basis of this reduction the three-dimensional problem appears separated into two two-dimensional problems, one of them being effectively the problem of two-dimensional flow and the other being the problem of spanwise circulation distribution. Earlier results concerning the oscillating lifting surface of finite span in incompressible flow are contained in the present more general results.

  20. A study of pressure-based methodology for resonant flows in non-linear combustion instabilities

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.

    1992-01-01

    This paper presents a systematic assessment of a large variety of spatial and temporal differencing schemes on nonstaggered grids by the pressure-based methods for the problems of fast transient flows. The observation from the present study is that for steady state flow problems, pressure-based methods can be very competitive with the density-based methods. For transient flow problems, pressure-based methods utilizing the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.

  1. Topology optimization of unsteady flow problems using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Nørgaard, Sebastian; Sigmund, Ole; Lazarov, Boyan

    2016-02-01

    This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems. The optimization problem is solved with a gradient-based method, and the design sensitivities are computed by solving the discrete adjoint problem. For moderate Reynolds number flows, it is demonstrated that topology optimization can successfully account for unsteady effects such as vortex shedding and time-varying boundary conditions. Such effects are relevant in several engineering applications, e.g. fluid pumps and control valves.

  2. Adjoint optimization of natural convection problems: differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Monokrousos, Antonios; Henningson, Dan S.

    2017-12-01

    Optimization of natural convection-driven flows may provide significant improvements to the performance of cooling devices, but a theoretical investigation of such flows has rarely been done. The present paper illustrates an efficient gradient-based optimization method for analyzing such systems. We consider numerically the natural convection-driven flow in a differentially heated cavity with three Prandtl numbers (Pr = 0.15-7) at super-critical conditions. All results and implementations were done with the spectral element code Nek5000. The flow is analyzed using linear direct and adjoint computations about a nonlinear base flow, extracting in particular optimal initial conditions using power iteration and the solution of the full adjoint direct eigenproblem. The cost function for both temperature and velocity is based on the kinetic energy and the concept of entransy, which yields a quadratic functional. Results are presented as a function of Prandtl number, time horizons and weights between kinetic energy and entransy. In particular, it is shown that the maximum transient growth is achieved at time horizons on the order of 5 time units for all cases, whereas for larger time horizons the adjoint mode is recovered as the optimal initial condition. For smaller time horizons, the influence of the weights leads either to a concentric temperature distribution or to an initial condition pattern that opposes the mean shear and grows according to the Orr mechanism. For specific cases, it could also be shown that the computation of optimal initial conditions leads to a degenerate problem, with a potential loss of symmetry. In these situations, it turns out that any initial condition lying in a specific span of the eigenfunctions will yield exactly the same transient amplification. As a consequence, the power iteration converges very slowly and fails to extract all possible optimal initial conditions.
According to the authors' knowledge, this behavior is illustrated here for the first time.
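
    The power-iteration idea can be shown on a dense toy propagator. In the paper the matrix A below is never formed: applying A means integrating the linear direct equations forward over the time horizon, and applying its transpose means integrating the adjoint equations backward. Everything else in this sketch is an illustrative assumption.

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def optimal_initial_condition(A, x0, n_iter=200):
    """Direct-adjoint power iteration for the unit initial condition
    maximizing the energy gain ||A x||^2 / ||x||^2 over one horizon."""
    x = x0[:]
    for _ in range(n_iter):
        y = matvec(A, x)              # 'direct solve': x -> A x
        x = matvec(transpose(A), y)   # 'adjoint solve': y -> A^T y
        norm = sum(v * v for v in x) ** 0.5
        x = [v / norm for v in x]
    gain = sum(v * v for v in matvec(A, x))
    return x, gain

# Toy propagator: growth factor 2 in one direction, 1 in the other.
x, gain = optimal_initial_condition([[2.0, 0.0], [0.0, 1.0]], [1.0, 1.0])
```

    When the two largest singular values of A coincide, the iteration stagnates in the corresponding span, which is precisely the degenerate, slowly converging situation the abstract describes.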

  3. REVIEWS OF TOPICAL PROBLEMS: Axisymmetric stationary flows in compact astrophysical objects

    NASA Astrophysics Data System (ADS)

    Beskin, Vasilii S.

    1997-07-01

    A review is presented of the analytical results available for a large class of axisymmetric stationary flows in the vicinity of compact astrophysical objects. The determination of the two-dimensional structure of the poloidal magnetic field (hydrodynamic flow field) faces severe difficulties, due to the complexity of the trans-field equation for stationary axisymmetric flows. However, an approach exists which enables direct problems to be solved even within the balance law framework. This possibility arises when an exact solution to the equation is available and flows close to it are investigated. As a result, with the use of simple model problems, the basic features of supersonic flows past real compact objects are determined.

  4. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    PubMed Central

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: the whole scheduling problem is divided into many subscheduling problems, and the NEH heuristic is then introduced to solve the subscheduling problems. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220

  5. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    PubMed

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: the whole scheduling problem is divided into many subscheduling problems, and the NEH heuristic is then introduced to solve the subscheduling problems. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem.
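
    The NEH heuristic mentioned above is standard and compact enough to sketch on its own. Given processing times p[job][machine], it orders jobs by decreasing total processing time and inserts each at the makespan-minimizing position. This is only the building block, not the discrete bat algorithm itself.

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine for a
    permutation flow shop with processing times p[job][machine]."""
    C = [0.0] * len(p[0])            # completion times per machine
    for j in seq:
        for k in range(len(C)):
            C[k] = max(C[k], C[k - 1] if k else 0.0) + p[j][k]
    return C[-1]

def neh(p):
    """NEH constructive heuristic for the PFSP: sort jobs by decreasing
    total processing time, then insert each at the best position."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

# Small 3-job, 2-machine instance; NEH finds the optimum (makespan 10)
# here, though it is not optimal in general.
seq, ms = neh([[3, 2], [1, 4], [2, 3]])
```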

  6. Plane Poiseuille flow of a rarefied gas in the presence of strong gravitation.

    PubMed

    Doi, Toshiyuki

    2011-02-01

    Plane Poiseuille flow of a rarefied gas, which flows horizontally in the presence of strong gravitation, is studied based on the Boltzmann equation. Applying the asymptotic analysis for a small variation in the flow direction [Y. Sone, Molecular Gas Dynamics (Birkhäuser, 2007)], the two-dimensional problem is reduced to a one-dimensional problem, as in the case of a Poiseuille flow in the absence of gravitation, and the solution is obtained in a semianalytical form. The reduced one-dimensional problem is solved numerically for a hard sphere molecular gas over a wide range of the gas-rarefaction degree and the gravitational strength. The presence of gravitation reduces the mass flow rate, and the effect of gravitation is significant for large Knudsen numbers. To verify the validity of the asymptotic solution, a two-dimensional problem of a flow through a long channel is directly solved numerically, and the validity of the asymptotic solution is confirmed. ©2011 American Physical Society

  7. An economic study of an advanced technology supersonic cruise vehicle

    NASA Technical Reports Server (NTRS)

    Smith, C. L.; Williams, L. J.

    1975-01-01

    A description is given of the methods used and the results of an economic study of an advanced technology supersonic cruise vehicle. This vehicle was designed for a maximum range of 4000 n.mi. at a cruise speed of Mach 2.7 and carrying 292 passengers. The economic study includes the estimation of aircraft unit cost, operating cost, and idealized cash flow and discounted cash flow return on investment. In addition, it includes a sensitivity study on the effects of unit cost, manufacturing cost, production quantity, average trip length, fuel cost, load factor, and fare on the aircraft's economic feasibility.
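
    The discounted cash flow return on investment mentioned above is the rate that drives the net present value of the project's cash-flow stream to zero. The sketch below finds it by bisection; the cash-flow numbers in the example are invented for illustration, not taken from the study.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[t] occurs at the end of year t
    (t = 0 is the up-front outlay, normally negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def dcf_roi(cash_flows, lo=0.0, hi=1.0, tol=1e-8):
    """Discounted cash flow return on investment (the internal rate of
    return) by bisection, assuming NPV is positive at `lo` and
    negative at `hi` so the root is bracketed."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cash_flows) > 0.0:
            lo = mid   # still profitable at this rate: look higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: a 100-unit outlay returning 20 per year for 10 years.
rate = dcf_roi([-100.0] + [20.0] * 10)
```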

  8. Reactive Power Pricing Model Considering the Randomness of Wind Power Output

    NASA Astrophysics Data System (ADS)

    Dai, Zhong; Wu, Zhou

    2018-01-01

    With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is gradually highlighted. Meanwhile, the power market reform puts forward higher requirements for the reasonable pricing of reactive power service. On this basis, the article combines the optimal power flow model considering wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of existing cost allocation methods and marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with the minimal integrated cost and wind power integration, under the premise of guaranteeing the balance of reactive power pricing. Finally, through the analysis of multi-scenario calculation examples and the stochastic simulation of wind power outputs, the article compares the results of the model pricing and marginal cost pricing, demonstrating that the model is accurate and effective.

  9. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by combining its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
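
    The dual Newton idea is concrete enough to sketch for a scalar velocity variable. The toy below is an illustrative assumption throughout (plain Riemann-sum quadrature, moments only up to order two, dense linear algebra): it recovers the maximum-entropy density exp(lam . m(v)) matching prescribed moments by a damped Newton iteration in the Lagrange multipliers of the convex dual problem, as in the abstract but far simpler than the 35-moment implementation.

```python
import numpy as np

def max_entropy_1d(v, w, rho, n_newton=100):
    """Damped Newton iteration on the dual of entropy maximization:
    find lam such that the moments of exp(lam . m(v)) equal rho.
    Toy 1-D sketch with quadrature nodes v and weights w."""
    m = np.vstack([np.ones_like(v), v, v ** 2])  # moment basis m_k(v)

    def dual(lam):
        with np.errstate(over="ignore"):
            return np.sum(w * np.exp(lam @ m)) - lam @ rho

    lam = np.zeros(3)
    for _ in range(n_newton):
        f = np.exp(lam @ m)
        g = m @ (w * f) - rho            # dual gradient = moment mismatch
        if np.linalg.norm(g) < 1e-10:
            break
        H = (m * (w * f)) @ m.T          # dual Hessian = moment matrix
        step = np.linalg.solve(H, g)
        t = 1.0                          # backtracking keeps exp() finite
        while t > 1e-14 and dual(lam - t * step) > dual(lam):
            t *= 0.5
        lam = lam - t * step
    return lam, np.exp(lam @ m)

# Recover a (truncated) Gaussian from its own grid moments.
v = np.linspace(-5.0, 5.0, 401)
w = np.full_like(v, v[1] - v[0])
basis = np.vstack([np.ones_like(v), v, v ** 2])
rho = basis @ (w * np.exp(-(v - 0.5) ** 2))
lam, f = max_entropy_1d(v, w, rho)
```

    Because the target density exp(-(v-0.5)^2) = exp(-0.25 + v - v^2) already lies in the exponential family, the iteration should converge to lam close to (-0.25, 1, -1).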

  10. Comparative exploration of multidimensional flow cytometry software: a model approach evaluating T cell polyfunctional behavior.

    PubMed

    Spear, Timothy T; Nishimura, Michael I; Simms, Patricia E

    2017-08-01

    Advancement in flow cytometry reagents and instrumentation has allowed for simultaneous analysis of large numbers of lineage/functional immune cell markers. Highly complex datasets generated by polychromatic flow cytometry require proper analytical software to answer investigators' questions. A problem among many investigators and flow cytometry Shared Resource Laboratories (SRLs), including our own, is a lack of access to a flow cytometry-knowledgeable bioinformatics team, making it difficult to learn and choose appropriate analysis tool(s). Here, we comparatively assess various multidimensional flow cytometry software packages for their ability to answer a specific biologic question and provide graphical representation output suitable for publication, as well as their ease of use and cost. We assessed polyfunctional potential of TCR-transduced T cells, serving as a model evaluation, using multidimensional flow cytometry to analyze 6 intracellular cytokines and degranulation on a per-cell basis. Analysis of 7 parameters resulted in 128 possible combinations of positivity/negativity, far too complex for basic flow cytometry software to analyze fully. Various software packages were used, analysis methods used in each described, and representative output displayed. Of the tools investigated, automated classification of cellular expression by nonlinear stochastic embedding (ACCENSE) and coupled analysis in Pestle/simplified presentation of incredibly complex evaluations (SPICE) provided the most user-friendly manipulations and readable output, evaluating effects of altered antigen-specific stimulation on T cell polyfunctionality. This detailed approach may serve as a model for other investigators/SRLs in selecting the most appropriate software to analyze complex flow cytometry datasets. Further development and awareness of available tools will help guide proper data analysis to answer difficult biologic questions arising from incredibly complex datasets. 
© Society for Leukocyte Biology.
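
    The combinatorial blow-up mentioned above (7 Boolean markers yielding 128 positivity/negativity patterns) is easy to make concrete. The marker names below are a hypothetical panel of 6 cytokines plus degranulation, standing in for the paper's actual antibody panel.

```python
from itertools import product

# Hypothetical 7-parameter panel: 6 intracellular cytokines + CD107a.
markers = ["IFNg", "TNFa", "IL2", "IL4", "IL17a", "IL22", "CD107a"]

# Every possible positivity/negativity pattern: 2**7 = 128 categories.
combos = list(product([False, True], repeat=len(markers)))

def label(combo):
    """Boolean gate pattern -> readable label, e.g. 'IFNg+TNFa-...'."""
    return "".join(m + ("+" if pos else "-") for m, pos in zip(markers, combo))
```

    Tools such as Pestle/SPICE essentially aggregate per-cell events into these categories before visualizing them, which is the step basic gating software cannot perform exhaustively.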

  11. Free boundary problems in shock reflection/diffraction and related transonic flow problems

    PubMed Central

    Chen, Gui-Qiang; Feldman, Mikhail

    2015-01-01

    Shock waves are steep wavefronts that are fundamental in nature, especially in high-speed fluid flows. When a shock hits an obstacle, or a flying body meets a shock, shock reflection/diffraction phenomena occur. In this paper, we show how several long-standing shock reflection/diffraction problems can be formulated as free boundary problems, discuss some recent progress in developing mathematical ideas, approaches and techniques for solving these problems, and present some further open problems in this direction. In particular, these shock problems include von Neumann's problem for shock reflection–diffraction by two-dimensional wedges with concave corner, Lighthill's problem for shock diffraction by two-dimensional wedges with convex corner, and Prandtl-Meyer's problem for supersonic flow impinging onto solid wedges, which are also fundamental in the mathematical theory of multidimensional conservation laws. PMID:26261363

  12. Analysis of the flow field generated near an aircraft engine operating in reverse thrust. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ledwith, W. A., Jr.

    1972-01-01

    A computer solution is developed to the exhaust gas reingestion problem for aircraft operating in the reverse thrust mode on a crosswind-free runway. The computer program determines the location of the inlet flow pattern, whether the exhaust efflux lies within the inlet flow pattern or not, and if so, the approximate time before the reversed flow reaches the engine inlet. The program is written so that the user is free to select discrete runway speeds or to study the entire aircraft deceleration process for both the far field and cross-ingestion problems. While developed with STOL applications in mind, the solution is equally applicable to conventional designs. The inlet and reversed jet flow fields involved in the problem are assumed to be noninteracting. The nacelle model used in determining the inlet flow field is generated using an iterative solution to the Neuman problem from potential flow theory while the reversed jet flow field is adapted using an empirical correlation from the literature. Sample results obtained using the program are included.

  13. Real-Time Load-Side Control of Electric Power Systems

    NASA Astrophysics Data System (ADS)

    Zhao, Changhong

    Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems. (1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. 
We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control. (2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
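
    For quadratic disutility functions, the optimal load control (OLC) problem described above has a closed form that makes the idea concrete. The sketch below is a centralized toy (the thesis's point is that distributed load-side dynamics solve the same problem); the disutility model and numbers are illustrative assumptions.

```python
def optimal_load_control(delta_p, alphas):
    """Split a power imbalance delta_p across flexible loads i with
    disutility alpha_i * d_i**2 / 2, minimizing total disutility
    subject to sum(d_i) = delta_p.  The KKT conditions give
    alpha_i * d_i = nu for a common 'price' nu, so d_i = nu / alpha_i."""
    nu = delta_p / sum(1.0 / a for a in alphas)
    return [nu / a for a in alphas]

# Three loads share a 6-unit imbalance; higher-alpha loads deviate less.
d = optimal_load_control(6.0, [1.0, 2.0, 3.0])
```

    At the optimum every load sees the same marginal disutility, which is the quantity the distributed algorithms let the network compute implicitly through frequency deviations.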

  14. Energy cost and game flow of 5 exer-games in trained players.

    PubMed

    Bronner, Shaw; Pinsker, Russell; Noah, J Adam

    2013-05-01

    To determine energy expenditure and player experience in exer-games designed for novel platforms, the energy cost of 7 trained players was measured in 5 music-based exer-games. Participants answered a questionnaire about "game flow," their experience of enjoyment, and immersion in game play. Energy expenditure during game play ranged from moderate to vigorous intensity (4-9 MET). Participants achieved the highest MET levels and game flow while playing StepMania, and the lowest MET levels and game flow when playing Wii Just Dance 3(®) and Kinect Dance Central™. Game flow scores correlated positively with MET levels. Physiological measurement and game flow testing during game development may help to optimize exer-game player activity and experience.
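
    The reported positive correlation between game-flow scores and MET levels is an ordinary Pearson correlation; a minimal implementation is below. The sample values in the example are invented, not the study's data.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical per-game (game flow, MET) pairs trending together.
r = pearson_r([3.1, 3.4, 3.9, 4.2, 4.6], [4.0, 5.5, 6.0, 7.5, 9.0])
```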

  15. Reducing waste and errors: piloting lean principles at Intermountain Healthcare.

    PubMed

    Jimmerson, Cindy; Weber, Dorothy; Sobek, Durward K

    2005-05-01

    The Toyota Production System (TPS), based on industrial engineering principles and operational innovations, is used to achieve waste reduction and efficiency while increasing product quality. Several key tools and principles, adapted to health care, have proved effective in improving hospital operations. Value Stream Maps (VSMs), which represent the key people, material, and information flows required to deliver a product or service, distinguish between value-adding and non-value-adding steps. The one-page Problem-Solving A3 Report guides staff through a rigorous and systematic problem-solving process. In a pilot project at Intermountain Healthcare, participants made many improvements, ranging from simple changes implemented immediately (for example, when heart monitor paper was not available as a patient presented with a dysrhythmia) to larger projects involving patient or information flow issues across multiple departments. Most of the improvements required little or no investment and reduced significant amounts of wasted time for front-line workers. In one unit, turnaround time for pathologist reports from an anatomical pathology lab was reduced from five to two days. TPS principles and tools are applicable to an endless variety of processes and work settings in health care and can be used to address critical challenges such as medical errors, escalating costs, and staffing shortages.

  16. Wing-section optimization for supersonic viscous flow

    NASA Technical Reports Server (NTRS)

    Item, Cem C.; Baysal, Oktay (Editor)

    1995-01-01

    To improve the shape of a supersonic wing, an automated method that also includes higher fidelity to the flow physics is desirable. With this impetus, an aerodynamic optimization methodology incorporating thin-layer Navier-Stokes equations and sensitivity analysis had been previously developed. Prior to embarking upon the wing design task, the present investigation concentrated on testing the feasibility of the methodology, and the identification of adequate problem formulations, by defining two-dimensional, cost-effective test cases. Starting with two distinctly different initial airfoils, two independent shape optimizations resulted in shapes with similar features: slightly cambered, parabolic profiles with sharp leading and trailing edges. Secondly, the normal section to the subsonic portion of the leading edge, which had a high normal angle-of-attack, was considered. The optimization resulted in a shape with twist and camber which eliminated the adverse pressure gradient, hence exploiting the leading-edge thrust. The wing section shapes obtained in all the test cases had the features predicted by previous studies. Therefore, it was concluded that the flowfield analyses and sensitivity coefficients were computed and fed to the present gradient-based optimizer correctly. Also, as a result of the present two-dimensional study, suggestions were made for the problem formulations which should contribute to an effective wing shape optimization.

  17. MODOPTIM: A general optimization program for ground-water flow model calibration and ground-water management with MODFLOW

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
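The core of such a calibration is a weighted sum-of-squares objective in which negative and positive residuals can be weighted unequally to express inequality constraints. The sketch below is a hypothetical illustration of that one idea only; MODOPTIM's actual weighting matrices and quasi-Newton machinery are more general.

```python
import numpy as np

def objective(simulated, observed, w_pos=1.0, w_neg=1.0):
    """Weighted sum-of-squares with unequal weights for positive and
    negative residuals (hypothetical illustration, not MODOPTIM code)."""
    r = simulated - observed
    w = np.where(r >= 0, w_pos, w_neg)   # pick a weight per residual sign
    return float(np.sum(w * r**2))

# Inequality-style constraint: penalize water levels only when they fall
# below a minimum target, by zero-weighting positive residuals.
obs = np.array([10.0, 12.0, 11.0])   # minimum acceptable water levels
sim = np.array([10.5, 11.0, 11.2])   # simulated water levels
penalty = objective(sim, obs, w_pos=0.0, w_neg=4.0)
print(penalty)   # only the shortfall at the second well contributes
```

Zero-weighting one residual sign is what turns a fitting target into a one-sided constraint such as a minimum water level or a maximum chloride concentration.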

  18. Performance and cost characteristics of multi-electron transfer, common ion exchange non-aqueous redox flow batteries

    NASA Astrophysics Data System (ADS)

    Laramie, Sydney M.; Milshtein, Jarrod D.; Breault, Tanya M.; Brushett, Fikile R.; Thompson, Levi T.

    2016-09-01

    Non-aqueous redox flow batteries (NAqRFBs) have recently received considerable attention as promising high energy density, low cost grid-level energy storage technologies. Despite these attractive features, NAqRFBs are still at an early stage of development and innovative design techniques are necessary to improve performance and decrease costs. In this work, we investigate multi-electron transfer, common ion exchange NAqRFBs. Common ion systems decrease the supporting electrolyte requirement, which subsequently improves active material solubility and decreases electrolyte cost. Voltammetric and electrolytic techniques are used to study the electrochemical performance and chemical compatibility of model redox active materials, iron (II) tris(2,2‧-bipyridine) tetrafluoroborate (Fe(bpy)3(BF4)2) and ferrocenylmethyl dimethyl ethyl ammonium tetrafluoroborate (Fc1N112-BF4). These results help disentangle complex cycling behavior observed in flow cell experiments. Further, a simple techno-economic model demonstrates the cost benefits of employing common ion exchange NAqRFBs, afforded by decreasing the salt and solvent contributions to total chemical cost. This study highlights two new concepts, common ion exchange and multi-electron transfer, for NAqRFBs through a demonstration flow cell employing model active species. In addition, the compatibility analysis developed for asymmetric chemistries can apply to other promising species, including organics, metal coordination complexes (MCCs) and mixed MCC/organic systems, enabling the design of low cost NAqRFBs.

  19. Direct medical cost and utility analysis of diabetics outpatient at Karanganyar public hospital

    NASA Astrophysics Data System (ADS)

    Eristina; Andayani, T. M.; Oetari, R. A.

    2017-11-01

    Diabetes Mellitus is a high-cost disease, especially in long-term complication treatment. The cost of long-term complication treatment is a burden for the patient, and it can affect patients' quality of life, expressed as a utility value. The purpose of this study was to determine the medical cost, utility value, and leverage factors of diabetic outpatients. This study used a cross-sectional design; data were collected from retrospective medical records of the financial and pharmacy departments to obtain direct medical costs, and utility values were taken from the EQ-5D-5L questionnaire. Data were analyzed by Mann-Whitney and Kruskal-Wallis tests. The mean direct medical cost was IDR 433,728.00, with pharmacy as the largest component. The EQ-5D-5L questionnaire showed that the largest proportions on each dimension were 61% no problems on the mobility dimension, 89% no problems on the self-care dimension, 54% slight problems on the usual-activities dimension, 41% moderate problems on the pain/discomfort dimension, and 48% moderate problems on the anxiety/depression dimension. Based on the Thailand value set, the utility value was 0.833. The leverage factors for direct medical cost were therapy pattern, blood glucose level, and complications; the leverage factors for utility value were patient characteristics, therapy pattern, blood glucose level, and complications.

  20. Solving the negative impact of congestion in the postanesthesia care unit: a cost of opportunity analysis.

    PubMed

    Ruiz-Patiño, Alejandro; Acosta-Ospina, Laura Elena; Rueda, Juan-David

    2017-04-01

    Congestion in the postanesthesia care unit (PACU) leads to the formation of waiting queues for patients being transferred after surgery, negatively affecting hospital resources. As patients recover in the operating room, incoming surgeries are delayed. The purpose of this study was to establish the impact of this phenomenon in multiple settings. An operational mathematical study based on queuing theory was performed. Average queue length, average queue waiting time, and daily queue waiting time were evaluated. Calculations were based on the mean patient daily flow, PACU length of stay, occupation, and current number of beds. Data were prospectively collected during a period of 2 months, and the entry and exit time was recorded for each patient taken to the PACU. Data were entered into a computational model built in MS Excel. To account for data uncertainty, deterministic and probabilistic sensitivity analyses for all dependent variables were performed. With a mean patient daily flow of 40.3 and an average PACU length of stay of 4 hours, average total lost surgical opportunity time was estimated at 2.36 hours (95% CI: 0.36-4.74 hours). Cost of opportunity was calculated at $1592 per lost hour. Sensitivity analysis showed that an increase of two beds is required to solve the queue formation. When congestion has a negative impact on cost of opportunity in the surgical setting, queuing analysis grants definitive actions to solve the problem, improving quality of service and resource utilization. Copyright © 2016 Elsevier Inc. All rights reserved.
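The abstract does not publish the spreadsheet model's formulas, but steady-state queue metrics of this kind are conventionally obtained from an M/M/c model (an assumption here). The sketch below uses the paper's reported arrival rate (40.3 patients/day) and 4-hour mean PACU stay; the bed counts are illustrative.

```python
import math

def mmc_metrics(lam, mu, c):
    """Steady-state M/M/c queue: returns (P_wait, Lq, Wq).
    lam: arrival rate, mu: service rate per server, c: servers (beds)."""
    rho = lam / (c * mu)
    assert rho < 1, "queue is unstable"
    a = lam / mu                                 # offered load in Erlangs
    s = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / (math.factorial(c) * (1 - rho))
    p_wait = last / (s + last)                   # Erlang-C waiting probability
    lq = p_wait * rho / (1 - rho)                # mean queue length
    wq = lq / lam                                # mean wait, by Little's law
    return p_wait, lq, wq

lam = 40.3 / 24.0    # patients per hour (paper's mean daily flow)
mu = 1.0 / 4.0       # beds freed per hour per bed (4 h mean stay)
for beds in (8, 9, 10):
    p, lq, wq = mmc_metrics(lam, mu, beds)
    print(f"{beds} beds: P(wait)={p:.2f}, Wq={wq:.2f} h")
```

Running the model across bed counts is the same exercise as the paper's sensitivity analysis: adding beds drives both the waiting probability and the mean queue wait down.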

  1. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using ``snapshots'' from the parameter reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
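A toy illustration of the POD step described above, with synthetic snapshots standing in for the expensive Stokes solves: the reduced basis is taken from the leading left singular vectors of the snapshot matrix, and states in the snapshot subspace are then reconstructed essentially exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 30, 5           # state dim, snapshot count, reduced dim

# Synthetic snapshot matrix: columns mimic forward-model states drawn
# from a low-dimensional manifold (here, the span of r random modes).
modes = rng.standard_normal((n, r))
X = modes @ rng.standard_normal((r, m))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                 # reduced basis, n x r

# Project a new state (from the same manifold) onto the basis.
x = modes @ rng.standard_normal(r)
x_hat = Phi @ (Phi.T @ x)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")
```

In the actual method the snapshots come from forward solves at samples of the parameter-reduced posterior, and DEIM is layered on top to keep the nonlinear terms cheap; neither is reproduced here.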

  2. PMARC - PANEL METHOD AMES RESEARCH CENTER

    NASA Technical Reports Server (NTRS)

    Ashby, D. L.

    1994-01-01

    Panel methods are moderate cost tools for solving a wide range of engineering problems. PMARC (Panel Method Ames Research Center) is a potential flow panel code that numerically predicts flow fields around complex three-dimensional geometries. PMARC's predecessor was a panel code named VSAERO which was developed for NASA by Analytical Methods, Inc. PMARC is a new program with many additional subroutines and a well-documented code suitable for powered-lift aerodynamic predictions. The program's open architecture facilitates modifications or additions of new features. Another improvement is the adjustable size code which allows for an optimum match between the computer hardware available to the user and the size of the problem being solved. PMARC can be resized (the maximum number of panels can be changed) in a matter of minutes. Several other state-of-the-art PMARC features include internal flow modeling for ducts and wind tunnel test sections, simple jet plume modeling essential for the analysis and design of powered-lift aircraft, and a time-stepping wake model which allows the study of both steady and unsteady motions. PMARC is a low-order panel method, which means the singularities are distributed with constant strength over each panel. In many cases low-order methods can provide nearly the same accuracy as higher order methods (where the singularities are allowed to vary linearly or quadratically over each panel). Low-order methods have the advantage of a shorter computation time and do not require exact matching between panels. The flow problem is solved by assuming that the body is at rest in a moving flow field. The body is modeled as a closed surface which divides space into two regions -- one region contains the flow field of interest and the other contains a fictitious flow. External flow problems, such as a wing in a uniform stream, have the external region as the flow field of interest and the internal flow as the fictitious flow. 
This arrangement is reversed for internal flow problems where the internal region contains the flow field of interest and the external flow field is fictitious. In either case it is assumed that the velocity potentials in both regions satisfy Laplace's equation. PMARC has extensive geometry modeling capabilities for handling complex, three-dimensional surfaces. As with all panel methods, the geometry must be modeled by a set of panels. For convenience, the geometry is usually subdivided into several pieces and modeled with sets of panels called patches. A patch may be folded over on itself so that opposing sides of the patch form a common line. For example, wings are normally modeled with a folded patch to form the trailing edge of the wing. PMARC also has the capability to automatically generate a closing tip patch. In the case of a wing, a tip patch could be generated to close off the wing's third side. PMARC has a simple jet model for simulating a jet plume in a crossflow. The jet plume shape, trajectory, and entrainment velocities are computed using the Adler/Baron jet in crossflow code. This information is then passed back to PMARC. The wake model in PMARC is a time-stepping wake model. The wake is convected downstream from the wake separation line by the local velocity flowfield. With each time step, a new row of wake panels is added to the wake at the wake separation line. PMARC also allows an initial wake to be specified if desired, or, as a third option, no wakes need be modeled. The effective presentation of results for aerodynamics problems requires the generation of report-quality graphics. PMAPP (ARC-12751), the Panel Method Aerodynamic Plotting Program, (Sterling Software), was written for scientists at NASA's Ames Research Center to plot the aerodynamic analysis results (flow data) from PMARC. PMAPP is an interactive, color-capable graphics program for the DEC VAX or MicroVAX running VMS. 
It was designed to work with a variety of terminal types and hardcopy devices. PMAPP is available separately from COSMIC. PMARC was written in standard FORTRAN77 using adjustable size arrays throughout the code. Redimensioning PMARC will change the amount of disk space and memory the code requires to be able to run; however, due to its memory requirements, this program does not readily lend itself to implementation on MS-DOS based machines. The program was implemented on an Apple Macintosh (using 2.5 MB of memory) and tested on a VAX/VMS computer. The program is available on a 3.5 inch Macintosh format diskette (standard media) or in VAX BACKUP format on TK50 tape cartridge or 9-track magnetic tape. PMARC was developed in 1989.

  3. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continue to evolve to represent real production systems accurately. Since flow shop scheduling is an NP-hard problem, metaheuristics are the most suitable solution methods. One such metaheuristic is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, PSO must be modified to fit the problem. The modification is done using a probability transition matrix mechanism, while the multiple objectives are handled with a Pareto-optimal approach (MPSO). The results of MPSO are better than those of standard PSO because the MPSO solution set has a higher probability of finding the optimal solution and is closer to the optimal solution.
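The paper's probability-transition-matrix mechanism is not detailed in the abstract; a simpler, commonly used way to apply continuous PSO to permutations is random-key encoding, sketched below on a small single-objective permutation flow shop instance. All parameter values and the instance itself are illustrative, and this is a stand-in for, not a reproduction of, the authors' method.

```python
import random

def makespan(seq, proc):
    """Permutation flow shop makespan; proc[j][k] = time of job j on machine k."""
    done = [0.0] * len(proc[0])
    for j in seq:
        for k in range(len(done)):
            start = max(done[k], done[k - 1] if k else 0.0)
            done[k] = start + proc[j][k]
    return done[-1]

def pso_flowshop(proc, n_particles=20, iters=200, seed=1):
    """Random-key discrete PSO sketch: particles live in R^n, and a
    permutation is decoded by sorting jobs on their key values."""
    random.seed(seed)
    n = len(proc)
    decode = lambda keys: sorted(range(n), key=lambda j: keys[j])
    X = [[random.random() for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    P = [x[:] for x in X]                           # personal bests
    pbest = [makespan(decode(x), proc) for x in X]
    gbest = min(pbest)
    G = P[pbest.index(gbest)][:]                    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                V[i][d] = (0.7 * V[i][d] + 1.5 * r1 * (P[i][d] - X[i][d])
                           + 1.5 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = makespan(decode(X[i]), proc)
            if f < pbest[i]:
                pbest[i], P[i] = f, X[i][:]
                if f < gbest:
                    gbest, G = f, X[i][:]
    return decode(G), gbest

proc = [[3, 2], [1, 4], [2, 2], [4, 1]]   # 4 jobs, 2 machines
seq, cmax = pso_flowshop(proc)
print(seq, cmax)
```

For two machines, Johnson's rule gives the exact optimum (makespan 11 for this instance), which makes tiny instances like this useful for sanity-checking a metaheuristic.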

  4. Business process re-engineering a cardiology department.

    PubMed

    Bakshi, Syed Murtuza Hussain

    2014-01-01

    The health care sector is the world's third largest industry and faces several problems, such as excessive waiting times for patients, lack of access to information, high costs of delivery, and medical errors. Health care managers seek the help of process re-engineering methods to discover the best processes and to re-engineer existing processes to optimize productivity without compromising quality. Business process re-engineering refers to the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, and speed. The present study was carried out at a tertiary care corporate hospital with a 1000-plus-bed facility. A descriptive, case-study method was used, with intensive, careful, and complete observation of patient flow, delays, and shortcomings in patient movement and workflow. Data were collected through observations and informal interviews and analyzed by matrix analysis. Flowcharts were drawn for the various work activities of the cardiology department, including the workflow of the admission process, workflow in the ward and ICCU, workflow of the patient for catheterization laboratory procedures, and the billing and discharge process. The problems of the existing system were studied, and necessary improvements were recommended for the cardiology department module, with an illustrated flowchart.

  5. Parallel Simulation of Three-Dimensional Free-Surface Fluid Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAER,THOMAS A.; SUBIA,SAMUEL R.; SACKINGER,PHILIP A.

    2000-01-18

    We describe parallel simulations of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of problem unknowns. Issues concerning the proper constraints along the solid-fluid dynamic contact line in three dimensions are discussed. Parallel computations are carried out for an example taken from the coating flow industry, flow in the vicinity of a slot coater edge. This is a three-dimensional free-surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another part of the flow domain. Discussion focuses on parallel speedups for fixed problem size, a class of problems of immediate practical importance.

  6. PHYSICS REQUIRES A SIMPLE LOW MACH NUMBER FLOW TO BE COMPRESSIBLE

    EPA Science Inventory

    Radial, laminar, plane, low velocity flow represents the simplest, non-linear fluid dynamics problem. Ostensibly this apparently trivial flow could be solved using the incompressible Navier-Stokes equations, universally believed to be adequate for such problems. Most researchers ...

  7. Economic evaluation of laboratory testing strategies for hospital-associated Clostridium difficile infection.

    PubMed

    Schroeder, Lee F; Robilotti, Elizabeth; Peterson, Lance R; Banaei, Niaz; Dowdy, David W

    2014-02-01

    Clostridium difficile infection (CDI) is the most common cause of infectious diarrhea in health care settings, and for patients presumed to have CDI, their isolation while awaiting laboratory results is costly. Newer rapid tests for CDI may reduce this burden, but the economic consequences of different testing algorithms remain unexplored. We used decision analysis from the hospital perspective to compare multiple CDI testing algorithms for adult inpatients with suspected CDI, assuming patient management according to laboratory results. CDI testing strategies included combinations of on-demand PCR (odPCR), batch PCR, lateral-flow diagnostics, plate-reader enzyme immunoassay, and direct tissue culture cytotoxicity. In the reference scenario, algorithms incorporating rapid testing were cost-effective relative to nonrapid algorithms. For every 10,000 symptomatic adults, relative to a strategy of treating nobody, lateral-flow glutamate dehydrogenase (GDH)/odPCR generated 831 true-positive results and cost $1,600 per additional true-positive case treated. Stand-alone odPCR was more effective and more expensive, identifying 174 additional true-positive cases at $6,900 per additional case treated. All other testing strategies were dominated by (i.e., more costly and less effective than) stand-alone odPCR or odPCR preceded by lateral-flow screening. A cost-benefit analysis (including estimated costs of missed cases) favored stand-alone odPCR in most settings but favored odPCR preceded by lateral-flow testing if a missed CDI case resulted in less than $5,000 of extended hospital stay costs and <2 transmissions, if lateral-flow GDH diagnostic sensitivity was >93%, or if the symptomatic carrier proportion among the toxigenic culture-positive cases was >80%. These results can aid guideline developers and laboratory directors who are considering rapid testing algorithms for diagnosing CDI.
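The comparison above rests on incremental cost-effectiveness ratios (ICERs): extra dollars per extra true-positive case treated, moving from one strategy to the next. A minimal sketch using the per-case figures reported in the abstract; the total incremental costs are implied by those figures, not reported directly.

```python
def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost-effectiveness ratio: extra dollars per
    extra true-positive case treated."""
    d_cost, d_eff = cost_new - cost_old, eff_new - eff_old
    if d_eff <= 0 and d_cost >= 0:
        return float("inf")   # dominated: costlier and no more effective
    return d_cost / d_eff

# Figures from the abstract, per 10,000 symptomatic adults, relative
# to a strategy of treating nobody (taken as cost 0, effect 0):
gdh_cases = 831                   # GDH/odPCR true positives found
gdh_cost = gdh_cases * 1600       # at $1,600 per additional case treated
od_cases = gdh_cases + 174        # stand-alone odPCR finds 174 more
od_cost = gdh_cost + 174 * 6900   # at $6,900 per additional case treated
print(icer(od_cost, od_cases, gdh_cost, gdh_cases))  # dollars per extra case
```

A strategy is "dominated" in exactly the sense used in the abstract when another strategy is both cheaper and more effective, which the guard clause above flags.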

  8. Economic Evaluation of Laboratory Testing Strategies for Hospital-Associated Clostridium difficile Infection

    PubMed Central

    Robilotti, Elizabeth; Peterson, Lance R.; Banaei, Niaz; Dowdy, David W.

    2014-01-01

    Clostridium difficile infection (CDI) is the most common cause of infectious diarrhea in health care settings, and for patients presumed to have CDI, their isolation while awaiting laboratory results is costly. Newer rapid tests for CDI may reduce this burden, but the economic consequences of different testing algorithms remain unexplored. We used decision analysis from the hospital perspective to compare multiple CDI testing algorithms for adult inpatients with suspected CDI, assuming patient management according to laboratory results. CDI testing strategies included combinations of on-demand PCR (odPCR), batch PCR, lateral-flow diagnostics, plate-reader enzyme immunoassay, and direct tissue culture cytotoxicity. In the reference scenario, algorithms incorporating rapid testing were cost-effective relative to nonrapid algorithms. For every 10,000 symptomatic adults, relative to a strategy of treating nobody, lateral-flow glutamate dehydrogenase (GDH)/odPCR generated 831 true-positive results and cost $1,600 per additional true-positive case treated. Stand-alone odPCR was more effective and more expensive, identifying 174 additional true-positive cases at $6,900 per additional case treated. All other testing strategies were dominated by (i.e., more costly and less effective than) stand-alone odPCR or odPCR preceded by lateral-flow screening. A cost-benefit analysis (including estimated costs of missed cases) favored stand-alone odPCR in most settings but favored odPCR preceded by lateral-flow testing if a missed CDI case resulted in less than $5,000 of extended hospital stay costs and <2 transmissions, if lateral-flow GDH diagnostic sensitivity was >93%, or if the symptomatic carrier proportion among the toxigenic culture-positive cases was >80%. These results can aid guideline developers and laboratory directors who are considering rapid testing algorithms for diagnosing CDI. PMID:24478478

  9. Optimization of power systems with voltage security constraints

    NASA Astrophysics Data System (ADS)

    Rosehart, William Daniel

    As open access market principles are applied to power systems, significant changes in their operation and control are occurring. In the new marketplace, power systems are operating under higher loading conditions as market influences demand greater attention to operating cost versus stability margins. Since stability continues to be a basic requirement in the operation of any power system, new tools are being considered to analyze the effect of stability on the operating cost of the system, so that system stability can be incorporated into the costs of operating the system. In this thesis, new optimal power flow (OPF) formulations are proposed based on multi-objective methodologies to optimize active and reactive power dispatch while maximizing voltage security in power systems. The effects of minimizing operating costs, minimizing reactive power generation and/or maximizing voltage stability margins are analyzed. Results obtained using the proposed Voltage Stability Constrained OPF formulations are compared and analyzed to suggest possible ways of costing voltage security in power systems. When considering voltage stability margins the importance of system modeling becomes critical, since it has been demonstrated, based on bifurcation analysis, that modeling can have a significant effect on the behavior of power systems, especially at high loading levels. Therefore, this thesis also examines the effects of detailed generator models and several exponential load models. Furthermore, because of its influence on voltage stability, a Static Var Compensator model is also incorporated into the optimization problems.

  10. Interactive computer graphics applications for compressible aerodynamics

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two-dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desktop simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
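The calculator's core relations are the standard compressible-flow formulas. A brief sketch of two of them, the isentropic stagnation ratios and the normal-shock jump conditions, for a calorically perfect gas with γ = 1.4:

```python
import math

G = 1.4  # ratio of specific heats for air

def isentropic(M):
    """Isentropic stagnation ratios (T0/T, p0/p) at Mach number M."""
    t = 1 + 0.5 * (G - 1) * M * M
    return t, t ** (G / (G - 1))

def normal_shock(M1):
    """Downstream Mach number and static pressure ratio across a
    normal shock with upstream Mach number M1 > 1."""
    M2 = math.sqrt((1 + 0.5 * (G - 1) * M1**2)
                   / (G * M1**2 - 0.5 * (G - 1)))
    p2p1 = 1 + 2 * G / (G + 1) * (M1**2 - 1)
    return M2, p2p1

M2, p2p1 = normal_shock(2.0)
print(f"M2 = {M2:.4f}, p2/p1 = {p2p1:.4f}")  # textbook values: 0.5774, 4.5
```

These closed-form relations are what make an interactive calculator feasible: each query is a handful of arithmetic operations, not a flow solve.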

  11. TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, G.J.; Pruess

    1992-11-01

    The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications. Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.

  12. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
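The reduction described here, minimizing a quadratic wiring cost Σ w_ij (x_i − x_j)² under normalization constraints, has a classical solution via the graph Laplacian. A small sketch with a hypothetical toy connection matrix (the paper's constraints and connectomes are richer than this):

```python
import numpy as np

# Symmetric connection matrix (toy example): w[i][j] = number of
# wires between neurons i and j.
W = np.array([[0, 3, 0, 0, 1],
              [3, 0, 2, 0, 0],
              [0, 2, 0, 2, 0],
              [0, 0, 2, 0, 3],
              [1, 0, 0, 3, 0]], float)

L = np.diag(W.sum(axis=1)) - W   # graph Laplacian
# Wiring cost of a 1-D layout x: sum_ij w_ij (x_i - x_j)^2 = 2 x^T L x.
# Under the constraints sum(x) = 0 and ||x|| = 1 (ruling out the trivial
# all-neurons-at-one-point layout), the minimizer is the eigenvector of
# L with the smallest nonzero eigenvalue (the Fiedler vector).
vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
x = vecs[:, 1]                   # optimal positions along one axis
cost = float(x @ L @ x)          # equals the second-smallest eigenvalue
print(np.round(x, 3), round(cost, 3))
```

This is the sense in which the layout problem becomes exactly solvable: the astronomically large combinatorial search collapses to a single symmetric eigenvalue computation.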

  13. Surgical streams in the flow of health care financing. The role of surgery in national expenditures: what costs are controllable?

    PubMed Central

    Moore, F D

    1985-01-01

    The dollar flow in United States medical care has been analyzed in terms of a six-level model; this model and the gross 1981 flow data are set forth. Of the estimated $310 billion expended in 1981, it is estimated that $85-$95 billion was the "surgical stream", i.e., that amount expended to take care of surgical patients at a variety of institutional types and including ambulatory care and surgeons' fees. Some of the determinants of surgical flow are reviewed as well as controllable costs and case mix pressures. Surgical complications, when severe, increase routine operative costs by a factor of 8 to 20. Maintenance of high quality in American surgery, despite new manpower pressures, is the single most important factor in cost containment. By voluntary or imposed controls on fees, malpractice premiums, case mix selection, and hospital utilization, a saving of $2.0-$4.0 billion can be seen as reachable and practical. This is five per cent of the surgical stream and is a part of the realistic "achievable" savings of total flow estimated to be about $15 billion or 5 per cent. PMID:3918514

  14. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.

  15. The dragons of inaction: psychological barriers that limit climate change mitigation and adaptation.

    PubMed

    Gifford, Robert

    2011-01-01

    Most people think climate change and sustainability are important problems, but too few global citizens engaged in high-greenhouse-gas-emitting behavior are engaged in enough mitigating behavior to stem the increasing flow of greenhouse gases and other environmental problems. Why is that? Structural barriers such as a climate-averse infrastructure are part of the answer, but psychological barriers also impede behavioral choices that would facilitate mitigation, adaptation, and environmental sustainability. Although many individuals are engaged in some ameliorative action, most could do more, but they are hindered by seven categories of psychological barriers, or "dragons of inaction": limited cognition about the problem, ideological world views that tend to preclude pro-environmental attitudes and behavior, comparisons with key other people, sunk costs and behavioral momentum, discredence toward experts and authorities, perceived risks of change, and positive but inadequate behavior change. Structural barriers must be removed wherever possible, but this is unlikely to be sufficient. Psychologists must work with other scientists, technical experts, and policymakers to help citizens overcome these psychological barriers.

  16. A Case Study Using Modeling and Simulation to Predict Logistics Supply Chain Issues

    NASA Technical Reports Server (NTRS)

    Tucker, David A.

    2007-01-01

    Optimization of critical supply chains to deliver thousands of parts, materials, sub-assemblies, and vehicle structures as needed is vital to the success of the Constellation Program. Thorough analysis needs to be performed on the integrated supply chain processes to plan, source, make, deliver, and return critical items efficiently. Process modeling provides simulation technology-based, predictive solutions for supply chain problems which enable decision makers to reduce costs, accelerate cycle time and improve business performance. For example, United Space Alliance, LLC utilized this approach in late 2006 to build simulation models that recreated shuttle orbiter thruster failures and predicted the potential impact of thruster removals on logistics spare assets. The main objective was the early identification of possible problems in providing thruster spares for the remainder of the Shuttle Flight Manifest. After extensive analysis the model results were used to quantify potential problems and led to improvement actions in the supply chain. Similarly the proper modeling and analysis of Constellation parts, materials, operations, and information flows will help ensure the efficiency of the critical logistics supply chains and the overall success of the program.

  17. Application of multiphase modelling for vortex occurrence in vertical pump intake - a review

    NASA Astrophysics Data System (ADS)

    Samsudin, M. L.; Munisamy, K. M.; Thangaraju, S. K.

    2015-09-01

    Vortex formation within the pump intake is one of the common problems faced by power plant cooling water systems. This phenomenon, categorised as surface and sub-surface vortices, can lead to several operational problems and increased maintenance costs. Physical model studies are recommended by published guidelines but have proved to be time- and resource-consuming. Hence, the use of Computational Fluid Dynamics (CFD) is an attractive alternative for managing the problem. At an early stage, flow analysis was conducted using single-phase simulation and was found to be in good agreement with observations from physical model studies. With advances in computing power, multiphase simulation further improved the accuracy of results in representing air entrainment and sub-surface vortices, which were earlier not well predicted by single-phase simulation. The purpose of this paper is to describe the application of multiphase modelling with CFD analysis for investigating vortex formation in a vertical pump intake. In applying multiphase modelling, there ought to be a balance between acceptable usage of computational time and resources and the degree of accuracy and realism expected from the analysis.

  18. Societal costs of underage drinking.

    PubMed

    Miller, Ted R; Levy, David T; Spicer, Rebecca S; Taylor, Dexter M

    2006-07-01

    Despite minimum-purchase-age laws, young people regularly drink alcohol. This study estimated the magnitude and costs of problems resulting from underage drinking by category (traffic crashes, violence, property crime, suicide, burns, drownings, fetal alcohol syndrome, high-risk sex, poisonings, psychoses, and dependency treatment) and compared those costs with associated alcohol sales. Previous studies did not break out costs of alcohol problems by age. For each category of alcohol-related problems, we estimated fatal and nonfatal cases attributable to underage alcohol use. We multiplied alcohol-attributable cases by estimated costs per case to obtain total costs for each problem. Underage drinking accounted for at least 16% of alcohol sales in 2001. It led to 3,170 deaths and 2.6 million other harmful events. The estimated $61.9 billion bill (relative SE = 18.5%) included $5.4 billion in medical costs, $14.9 billion in work loss and other resource costs, and $41.6 billion in lost quality of life. Quality-of-life costs, which accounted for 67% of total costs, required challenging indirect measurement. Alcohol-attributable violence and traffic crashes dominated the costs. Leaving aside quality of life, the societal harm of $1 per drink consumed by an underage drinker exceeded the average purchase price of $0.90 or the associated $0.10 in tax revenues. Recent attention has focused on problems resulting from youth use of illicit drugs and tobacco. In light of the associated substantial injuries, deaths, and high costs to society, youth drinking behaviors merit the same kind of serious attention.
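    The aggregation step described above (attributable cases times cost per case, summed over categories) can be checked against the quoted component totals; the snippet below only re-adds the abstract's own figures:

```python
# Back-of-the-envelope check of the cost breakdown quoted above.
# These are the abstract's component totals (billions of 2001 dollars),
# not per-case data; the method multiplies cases by a unit cost per
# category and then sums exactly like this.
components = {
    "medical": 5.4,
    "work_loss_and_other_resources": 14.9,
    "lost_quality_of_life": 41.6,
}
total = sum(components.values())
qol_share = components["lost_quality_of_life"] / total
print(f"${total:.1f}B, QoL share {qol_share:.0%}")   # $61.9B, QoL share 67%
```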

  19. Time-resolved PIV measurements of the flow field in a stenosed, compliant arterial model

    NASA Astrophysics Data System (ADS)

    Geoghegan, P. H.; Buchmann, N. A.; Soria, J.; Jermy, M. C.

    2013-05-01

    Compliant (flexible) structures play an important role in several biological flows including the lungs, heart and arteries. Coronary heart disease is caused by a constriction in the artery due to a build-up of atherosclerotic plaque. This plaque is also of major concern in the carotid artery, which supplies blood to the brain. Blood flow within these arteries is strongly influenced by the movement of the wall. Studying these problems experimentally in vitro, especially using flow visualisation techniques, can be expensive due to the high-intensity and high-repetition-rate light sources required. In this work, time-resolved particle image velocimetry using a relatively low-cost light-emitting diode illumination system was applied to the study of a compliant flow phantom representing a stenosed (constricted) carotid artery experiencing a physiologically realistic flow wave. Dynamic similarity between in vivo and in vitro conditions was ensured in phantom construction by matching the distensibility and the elastic wave propagation wavelength, and in the fluid system by matching the Reynolds (Re) and Womersley (α) numbers, with a maximum, minimum and mean Re of 939, 379 and 632, respectively, and an α of 4.54. The stenosis had a symmetric constriction of 50% by diameter (75% by area). Once the flow rate reached a critical value, Kelvin-Helmholtz instabilities were observed in the shear layer between the main jet exiting the stenosis and a reverse flow region that occurred at a radial distance of 0.34 D from the axis of symmetry, in the region of interest 0-2.5 D downstream from the stenosis exit. The instability was initially axisymmetric, but as peak flow rate was approached this symmetry broke down, producing instability throughout the flow field. The characteristics of the vortex train were sensitive not only to the instantaneous flow rate, but also to whether the flow was globally accelerating or decelerating.

  20. Expected value analysis for integrated supplier selection and inventory control of multi-product inventory system with fuzzy cost

    NASA Astrophysics Data System (ADS)

    Sutrisno, Widowati, Tjahjana, R. Heru

    2017-12-01

    Future costs in many industrial problems are inherently uncertain, so a mathematical analysis for problems with uncertain costs is needed. In this article, we use fuzzy expected value analysis to solve an integrated supplier selection and inventory control problem in which cost uncertainty is modeled by fuzzy variables. We formulate the problem as a fuzzy expected-value-based quadratic optimization model with a total-cost objective function and solve it using expected-value-based fuzzy programming. In the numerical examples, the problem was solved as intended: the optimal supplier was selected for each time period, the optimal volume of each product to purchase from each supplier in each period was determined, and the product stock level was controlled to follow the given reference level.
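    As a minimal illustration of expected-value-based ranking under fuzzy costs (not the authors' quadratic model; the supplier data, capacities, and the use of the credibilistic expected value (a + 2b + c)/4 for a triangular fuzzy cost (a, b, c) are assumptions here):

```python
# Sketch: rank suppliers by the credibilistic expected value of a
# triangular fuzzy unit cost (a, b, c) -> E = (a + 2b + c) / 4, then
# allocate demand greedily to the cheapest supplier with capacity left.
# All supplier figures are hypothetical.
def expected_value(tri):
    a, b, c = tri
    return (a + 2 * b + c) / 4.0

suppliers = {
    "S1": {"cost": (9.0, 10.0, 12.0), "capacity": 60},
    "S2": {"cost": (8.0, 11.0, 13.0), "capacity": 40},
}
demand = 50
order = sorted(suppliers, key=lambda s: expected_value(suppliers[s]["cost"]))
plan, remaining = {}, demand
for s in order:
    buy = min(remaining, suppliers[s]["capacity"])
    if buy:
        plan[s] = buy
    remaining -= buy
print(plan)   # {'S1': 50}
```

A real instance would replace the greedy allocation with the paper's quadratic program; the expected-value reduction of the fuzzy costs is the part this sketch is meant to show.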

  1. Numerical solution of the incompressible Navier-Stokes equations. Ph.D. Thesis - Stanford Univ., Mar. 1989

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1990-01-01

    The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.

  2. Engine Systems Ownership Cost Reduction - Aircraft Propulsion Subsystems Integration (APSI)

    DTIC Science & Technology

    1975-08-01

    compressor fabrication costs. Hybrid Radial Compressor Diffuser - Combining both the radial and axial sections of a standard diffuser into a single cascade...compressor diffuser by using a single mixed-flow diffuser instead of the separate radial and axial diffuser stator rows. The proposed mixed-flow diffuser...to an axial diffuser. A cost analysis of the hybrid radial diffuser was made and compared to the baseline configuration (radial and axial diffusers). The

  3. Aquarius - A Modelling Package for Groundwater Flow and Coupled Heat Transport in the Range 0.1 to 100 MPa and 0.1 to 1000 C

    NASA Astrophysics Data System (ADS)

    Cook, S. J.

    2009-05-01

    Aquarius is a Windows application that models fluid flow and heat transport under conditions in which fluid buoyancy can significantly impact patterns and magnitudes of fluid flow. The package is designed as a visualization tool through which users can examine flow systems in environments ranging from low-temperature aquifers to regions with elevated PT regimes such as deep sedimentary basins, hydrothermal systems, and contact thermal aureoles. The package includes 4 components: (1) A finite-element mesh generator/assembler capable of representing complex geologic structures. Left-hand, right-hand and alternating linear triangles can be mixed within the mesh. Planar horizontal, planar vertical and cylindrical vertical coordinate sections are supported. (2) A menu-selectable system for setting properties and boundary/initial conditions. The design retains mathematical terminology for all input parameters such as scalars (e.g., porosity), tensors (e.g., permeability), and boundary/initial conditions (e.g., fixed potential). This makes the package an effective instructional aid by linking model requirements with the underlying mathematical concepts of partial differential equations and the solution logic of boundary/initial value problems. (3) Solution algorithms for steady-state and time-transient fluid flow/heat transport problems. For all models, the nonlinear global matrix equations are solved sequentially using over-relaxation techniques. Matrix storage design allows for large (e.g., 20000) element models to run efficiently on a typical PC. (4) A plotting system that supports contouring nodal data (e.g., head), vector plots for flux data (e.g., specific discharge), and colour gradient plots for elemental data (e.g., porosity), water properties (e.g., density), and performance measures (e.g., Peclet numbers). Display graphics can be printed or saved in standard graphic formats (e.g., jpeg).
This package was developed from procedural codes in C written originally to model the hydrothermal flow system responsible for contact metamorphism of Utah's Alta Stock (Cook et al., AJS 1997). These codes were reprogrammed in Microsoft C# to take advantage of object oriented design and the capabilities of Microsoft's .NET framework. The package is available at no cost by e-mail request from the author.

  4. Contributions to the understanding of large-scale coherent structures in developing free turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Liu, J. T. C.

    1986-01-01

    Advances in the mechanics of boundary layer flow are reported. The physical problems of large-scale coherent structures in real, developing free turbulent shear flows are addressed from the nonlinear aspects of hydrodynamic stability. The problem, whether fine-grained turbulence is present or absent, lacks a small parameter. The problem is presented on the basis of conservation principles, which constitute the dynamics of the problem, directed towards extracting the most physical information; however, it is emphasized that approximations must also be involved.

  5. Lean techniques for the improvement of patients’ flow in emergency department

    PubMed Central

    Chan, HY; Lo, SM; Lee, LLY; Lo, WYL; Yu, WC; Wu, YF; Ho, ST; Yeung, RSD; Chan, JTS

    2014-01-01

    BACKGROUND: Emergency departments (EDs) face problems with overcrowding, access block, cost containment, and increasing demand from patients. In order to resolve these problems, there is rising interest in an approach called "lean" management. This study aims to (1) evaluate the current patient flow in the ED, (2) identify and eliminate the non-value-added processes, and (3) modify the existing process. METHODS: It was a quantitative, pre- and post-lean design study with a series of lean management measures implemented to improve the admission and blood-result waiting times. These included a structured re-design process, a priority admission triage (PAT) program, enhanced communication with the medical department, and use of a new high-sensitivity troponin-T (hsTnT) blood test. Triage waiting time, consultation waiting time, blood result time, admission waiting time, total processing time and ED length of stay were compared. RESULTS: Among all the processes carried out in the ED, the most time-consuming were waiting for an admission bed (mean 38.24 minutes; SD 66.35) and waiting for blood testing results (mean 52.73 minutes; SD 24.03). The triage waiting time and end waiting time for consultation were significantly decreased. The admission waiting time for the emergency medical ward (EMW) significantly decreased from 54.76 minutes to 24.45 minutes after implementation of the PAT program (P<0.05). CONCLUSION: The application of lean management can improve patient flow in the ED. Adherence to the principles of lean is crucial to delivering high-quality emergency care and patient satisfaction. PMID:25215143

  6. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
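    The exact dynamic program described above can be written compactly: at each time step, choose the configuration minimizing accumulated workload cost plus a switching penalty when the configuration changes. The workload numbers and penalty below are illustrative, not from the paper:

```python
# Hedged sketch of a finite-horizon configuration DP: workload[t][c] is
# the workload cost of configuration c at step t; switch_cost is paid
# whenever consecutive steps use different configurations. O(T * C^2).
def optimal_configurations(workload, switch_cost):
    T, C = len(workload), len(workload[0])
    best = list(workload[0])        # best[c]: min cost ending step 0 in c
    back = []                       # back[t-1][c]: best predecessor of c
    for t in range(1, T):
        new, choice = [], []
        for c in range(C):
            p = min(range(C),
                    key=lambda p: best[p] + (switch_cost if p != c else 0))
            new.append(best[p] + workload[t][c] + (switch_cost if p != c else 0))
            choice.append(p)
        best, back = new, back + [choice]
    c = min(range(C), key=lambda k: best[k])
    path = [c]
    for choice in reversed(back):   # walk predecessors backwards
        c = choice[c]
        path.append(c)
    path.reverse()
    return min(best), path

cost, path = optimal_configurations([[1, 5], [4, 1], [4, 1]], switch_cost=2)
print(cost, path)   # 5 [0, 1, 1]
```

The rollout algorithm in the paper approximates this exact recursion when the configuration set is too large to enumerate.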

  7. 76 FR 57982 - Building Energy Codes Cost Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-19

    ... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2011-BT-BC-0046] Building Energy Codes Cost Analysis Correction In notice document 2011-23236 beginning on page... heading ``Table 1. Cash flow components'' should read ``Table 7. Cash flow components''. [FR Doc. C1-2011...

  8. Leasing versus Borrowing: Evaluating Alternative Forms of Consumer Credit.

    ERIC Educational Resources Information Center

    Nunnally, Bennie H., Jr.; Plath, D. Anthony

    1989-01-01

    Presents a straightforward method for evaluating lease versus borrow (buy) decisions illustrated with actual financing cost data reported to new car purchasers. States that individuals should consider after-tax cash flows associated with alternative arrangements, time in which cash flow occurs, and opportunity cost of capital to identify the least…
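    A hedged sketch of the comparison method (all dollar figures, the discount rate, and the resale value are hypothetical): discount each option's after-tax cash flows at the consumer's opportunity cost of capital and choose the lower present value of cost.

```python
# Lease-versus-borrow sketch: compare present values of after-tax
# outflows. Every number below is made up for illustration.
def present_value(cashflows, rate):
    # cashflows[t] is the net after-tax outflow at the end of year t+1
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

rate = 0.08                        # opportunity cost of capital
lease_payments = [4000] * 4        # annual lease payments, after tax
loan_payments = [5500] * 4         # loan payments net of tax effects
loan_payments[-1] -= 6000          # credit the car's resale value in year 4

pv_lease = present_value(lease_payments, rate)
pv_borrow = present_value(loan_payments, rate)
print("lease" if pv_lease < pv_borrow else "borrow")   # lease
```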

  9. Evaluation of a Low-Cost Bubble CPAP System Designed for Resource-Limited Settings.

    PubMed

    Bennett, Desmond J; Carroll, Ryan W; Kacmarek, Robert M

    2018-04-01

    Respiratory compromise is a leading contributor to global neonatal death. CPAP is a method of treatment that helps maintain lung volume during expiration, promotes comfortable breathing, and improves oxygenation. Bubble CPAP is an effective alternative to standard CPAP. We sought to determine the reliability and functionality of a low-cost bubble CPAP device designed for low-resource settings. The low-cost bubble CPAP device was compared to a commercially available bubble CPAP system. The devices were connected to a lung simulator that simulated neonates of 4 different weights with compromised respiratory mechanics (∼1, ∼3, ∼5, and ∼10 kg). The devices' abilities to establish and maintain pressure and flow under normal conditions as well as under conditions of leak were compared. Multiple combinations of pressure levels (5, 8, and 10 cm H2O) and flow levels (3, 6, and 10 L/min) were tested. The endurance of both devices was also tested by running the systems continuously for 8 h and measuring the changes in pressure and flow. Both devices performed equivalently during the no-leak and leak trials. While our testing revealed individual differences that were statistically significant and clinically important (>10% difference) within specific CPAP and flow-level settings, no overall comparisons of CPAP or flow were both statistically significant and clinically important. Each device delivered pressures similar to the desired pressures, although the flows delivered by both machines were lower than the set flows in most trials. During the endurance trials, the low-cost device was marginally better at maintaining pressure, while the commercially available device was better at maintaining flow. The low-cost bubble CPAP device evaluated in this study is comparable to a bubble CPAP system used in developed settings. Extensive clinical trials, however, are necessary to confirm its effectiveness. Copyright © 2018 by Daedalus Enterprises.

  10. Using Grey Wolf Algorithm to Solve the Capacitated Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Korayem, L.; Khorsid, M.; Kassem, S. S.

    2015-05-01

    The capacitated vehicle routing problem (CVRP) is a class of the vehicle routing problems (VRPs). In CVRP a set of identical vehicles having fixed capacities are required to fulfill customers' demands for a single commodity. The main objective is to minimize the total cost or distance traveled by the vehicles while satisfying a number of constraints, such as: the capacity constraint of each vehicle, logical flow constraints, etc. One of the methods employed in solving the CVRP is the cluster-first route-second method. It is a technique based on grouping of customers into a number of clusters, where each cluster is served by one vehicle. Once clusters are formed, a route determining the best sequence to visit customers is established within each cluster. The recently bio-inspired grey wolf optimizer (GWO), introduced in 2014, has proven to be efficient in solving unconstrained, as well as constrained, optimization problems. In the current research, our main contributions are: combining GWO with the traditional K-means clustering algorithm to generate the ‘K-GWO’ algorithm, deriving a capacitated version of the K-GWO algorithm by incorporating a capacity constraint into the aforementioned algorithm, and finally, developing two new clustering heuristics. The resulting algorithm is used in the clustering phase of the cluster-first route-second method to solve the CVRP. The algorithm is tested on a number of benchmark problems with encouraging results.
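    The cluster-first step can be illustrated with a plain capacity-constrained nearest-seed assignment (a simplification, not the authors' K-GWO algorithm; coordinates, demands, seeds, and capacity are made up):

```python
# Simplified cluster-first sketch: assign each customer to the nearest
# cluster seed that still has vehicle capacity left; routing within each
# cluster would follow as a separate step. All data are hypothetical.
import math

customers = {  # id: ((x, y), demand)
    1: ((1, 1), 4), 2: ((2, 1), 3), 3: ((8, 8), 5), 4: ((9, 7), 2),
}
seeds = [(1.5, 1.0), (8.5, 7.5)]   # one seed per vehicle
capacity = 8                       # identical vehicle capacity

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

clusters = [[] for _ in seeds]
load = [0] * len(seeds)
for cid, (pos, demand) in customers.items():
    # nearest seed whose vehicle can still absorb this demand
    order = sorted(range(len(seeds)), key=lambda k: dist(pos, seeds[k]))
    k = next(k for k in order if load[k] + demand <= capacity)
    clusters[k].append(cid)
    load[k] += demand
print(clusters)   # [[1, 2], [3, 4]]
```

In the paper, the seed placement itself is optimized by the grey wolf optimizer rather than fixed as here.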

  11. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.
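    The goal-oriented structure described above can be summarized schematically; this is the generic dual-weighted-residual form, given here as a sketch rather than the paper's exact estimator:

```latex
I(u) - I(u_h) \;\approx\; \eta \;:=\; \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\, \omega_K(z_h)
```

    Here $\rho_K$ denotes the cell residual of the discretized model equations on cell $K$ of the mesh $\mathcal{T}_h$, and the weight $\omega_K$ is computed from the solution $z_h$ of the auxiliary linear (adjoint) problem; cells contributing most to $\eta$ are flagged for local refinement.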

  12. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data

    NASA Astrophysics Data System (ADS)

    Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar

    2017-04-01

    A new technique for shaping microfluid flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and the necessity of building intuition, all of which are time and resource intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions.

  13. Mantle Circulation Models with variational data assimilation: Inferring past mantle flow and structure from plate motion histories and seismic tomography

    NASA Astrophysics Data System (ADS)

    Bunge, H.; Hagelberg, C.; Travis, B.

    2002-12-01

    EarthScope will deliver data on structure and dynamics of continental North America and the underlying mantle on an unprecedented scale. Indeed, the scope of EarthScope makes its mission comparable to the large remote sensing efforts that are transforming the oceanographic and atmospheric sciences today. Arguably the main impact of new solid Earth observing systems is to transform our use of geodynamic models increasingly from conditions that are data poor to an environment that is data rich. Oceanographers and meteorologists already have made substantial progress in adapting to this environment, by developing new approaches of interpreting oceanographic and atmospheric data objectively through data assimilation methods in their models. However, a similarly rigorous theoretical framework for merging EarthScope-derived solid Earth data with geodynamic models has yet to be devised. Here we explore the feasibility of data assimilation in mantle convection studies in an attempt to fit global geodynamic model calculations explicitly to tomographic and tectonic constraints. This is an inverse problem not unlike the inverse problem of finding optimal seismic velocity structures faced by seismologists. We derive the generalized inverse of mantle convection from a variational approach and present the adjoint equations of mantle flow. The substantial computational burden associated with solutions to the generalized inverse problem of mantle convection is made feasible using a highly efficient finite element approach based on the 3-D spherical fully parallelized mantle dynamics code TERRA, implemented on a cost-effective PC-cluster (geowulf) dedicated specifically to large-scale geophysical simulations. This dedicated geophysical modeling computer allows us to investigate global inverse convection problems having a spatial discretization of less than 50 km throughout the mantle.
We present a synthetic high-resolution modeling experiment to demonstrate that mid-Cretaceous mantle structure can be inferred accurately from our inverse approach assuming present-day mantle structure is well-known, even if an initial first guess assumption about the mid-Cretaceous mantle involved only a simple 1-D radial temperature profile. We suggest that geodynamic inverse modeling should make it possible to infer a number of flow parameters from observational constraints of the mantle.

  14. Data Flow in Relation to Life-Cycle Costing of Construction Projects in the Czech Republic

    NASA Astrophysics Data System (ADS)

    Biolek, Vojtěch; Hanák, Tomáš; Marović, Ivan

    2017-10-01

    Life-cycle costing is an important part of every construction project, as it makes it possible to take into consideration future costs relating to the operation and demolition phase of a built structure. In this way, investors can optimize the project design to minimize the total project costs. Even though there have already been some attempts to implement BIM software in the Czech Republic, the current state of affairs does not support automated data flow between the bill of costs and applications that support building facility management. The main aim of this study is to critically evaluate the current situation and outline a future framework that should allow for the use of the data contained in the bill of costs to manage building operating costs.

  15. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  16. Computational fluid dynamics modeling of laminar, transitional, and turbulent flows with sensitivity to streamline curvature and rotational effects

    NASA Astrophysics Data System (ADS)

    Chitta, Varun

    Modeling of complex flows involving the combined effects of flow transition and streamline curvature using two advanced turbulence models, one in the Reynolds-averaged Navier-Stokes (RANS) category and the other in the hybrid RANS-Large eddy simulation (LES) category, is considered in this research effort. In the first part of the research, a new scalar eddy-viscosity model (EVM) is proposed, designed to exhibit physically correct responses to flow transition, streamline curvature, and system rotation effects. The four-equation model developed herein is a curvature-sensitized version of a commercially available three-equation transition-sensitive model. The physical effects of rotation and curvature (RC) enter the model through the added transport equation, analogous to a transverse turbulent velocity scale. The eddy-viscosity has been redefined such that the proposed model is constrained to reduce to the original transition-sensitive model definition in nonrotating flows or in regions with negligible RC effects. In the second part of the research, the developed four-equation model is combined with a LES technique using a new hybrid modeling framework, dynamic hybrid RANS-LES (DHRL). The new framework is highly generalized, allowing coupling of any desired LES model with any given RANS model, and addresses several deficiencies inherent in most current hybrid models. In the present research effort, the DHRL model comprises the proposed four-equation model as the RANS component and the MILES scheme as the LES component. Both models were implemented into a commercial computational fluid dynamics (CFD) solver and tested on a number of engineering and generic flow problems. Results from both the RANS and hybrid models show successful resolution of the combined effects of transition and curvature with reasonable engineering accuracy, and for only a small increase in computational cost. 
In addition, results from the hybrid model indicate significant levels of turbulent fluctuations in the flowfield and improved accuracy compared to RANS model predictions, obtained at a significantly reduced computational cost compared to full LES models. The results suggest that the advanced turbulence modeling techniques presented in this research effort have potential as practical tools for solving low/high-Re flows over blunt/curved bodies for the prediction of transition and RC effects.

  17. An economic analysis of selected strategies for dissolved-oxygen management; Chattahoochee River, Georgia

    USGS Publications Warehouse

    Schefter, John E.; Hirsch, Robert M.

    1980-01-01

A method for evaluating the cost-effectiveness of alternative strategies for dissolved-oxygen (DO) management is demonstrated, using the Chattahoochee River, GA, as an example. The conceptual framework for the analysis is suggested by the economic theory of production. The minimum flow of the river and the percentage of the total waste inflow receiving nitrification are considered to be two variable inputs used in the production of a given minimum concentration of DO in the river. Each input has a cost: the loss of dependable peak hydroelectric generating capacity at Buford Dam associated with flow augmentation, and the cost associated with nitrification of wastes. The least-cost combination of minimum flow and waste treatment necessary to achieve a prescribed minimum DO concentration is identified. Results of the study indicate that, in some instances, the waste-assimilation capacity of the Chattahoochee River can be substituted for increased waste treatment; the associated savings in waste-treatment costs more than offset the benefits foregone because of the loss of peak generating capacity at Buford Dam. The sensitivity of the results to the estimates of the cost of replacing peak generating capacity is examined. It is also demonstrated that a flexible approach to the management of DO in the Chattahoochee River may be much more cost-effective than a more rigid, institutional approach wherein constraints are placed on the flow of the river and/or on waste-treatment practices. (USGS)
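
The least-cost input combination the study identifies can be illustrated with a toy grid search: choose the minimum flow Q and the nitrified waste fraction t that meet a DO target at least cost. The DO response and both cost functions below are invented stand-ins, not the study's estimates for the Chattahoochee.

```python
# Toy "least-cost combination of inputs" search in the spirit of the study.
# Both functions are fictitious stand-ins for the study's estimated relations.

def do_mg_per_l(q, t):
    return 3.0 + 0.02 * q + 4.0 * t     # fictitious DO response to flow q and treated fraction t

def cost(q, t):
    return 1.5 * q + 200.0 * t          # forgone peaking capacity cost + treatment cost

target = 6.0
feasible = ((q, t) for q in range(0, 101, 5)
            for t in (i / 10 for i in range(11))
            if do_mg_per_l(q, t) >= target - 1e-9)
best = min(feasible, key=lambda p: cost(*p))
print(best)  # a mix of flow augmentation and treatment beats either input alone
```

With these made-up coefficients, the cheapest feasible point mixes a modest flow increase with partial treatment, mirroring the study's substitution argument between the two inputs.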

  18. Design and cost analysis of rapid aquifer restoration systems using flow simulation and quadratic programming.

    USGS Publications Warehouse

    Lefkoff, L.J.; Gorelick, S.M.

    1986-01-01

Detailed two-dimensional flow simulation of a complex ground-water system is combined with quadratic and linear programming to evaluate design alternatives for rapid aquifer restoration. Results show how treatment and pumping costs depend dynamically on the type of treatment process, the capacity of pumping and injection wells, and the number of wells. The design for an inexpensive treatment process minimizes pumping costs, while an expensive process results in the minimization of treatment costs. Substantial reductions in pumping costs occur with increases in injection capacity or in the number of wells. Treatment costs are reduced by expansions in pumping or injection capacity. The analysis identifies maximum pumping and injection capacities. (from Authors)

  19. Numerical Investigations of Two Typical Unsteady Flows in Turbomachinery Using the Multi-Passage Model

    NASA Astrophysics Data System (ADS)

    Zhou, Di; Lu, Zhiliang; Guo, Tongqing; Shen, Ennan

    2016-06-01

In this paper, two types of unsteady flow problems in turbomachinery, blade flutter and rotor-stator interaction, are investigated by means of numerical simulation. For the former, the energy method is often used to predict aeroelastic stability by calculating the aerodynamic work per vibration cycle. The inter-blade phase angle (IBPA) is an important parameter in the computation and may have significant effects on aeroelastic behavior. For the latter, the numbers of blades in each row are usually not equal and the unsteady rotor-stator interactions can be strong. An effective way to perform multi-row calculations is the domain scaling method (DSM). These two cases share the common feature that the computational domain has to be extended to multiple passages (MP), considering their respective characteristics. The present work is aimed at modeling these two issues with the developed MP model. Computational fluid dynamics (CFD) techniques are applied to resolve the unsteady Reynolds-averaged Navier-Stokes (RANS) equations and simulate the flow fields. With the parallel technique, the additional time cost due to modeling more passages can be largely reduced. Results are presented for two test cases: a vibrating rotor blade and a turbine stage.

  20. Structure of a tethered polymer under flow using molecular dynamics and hybrid molecular-continuum simulations

    NASA Astrophysics Data System (ADS)

    Delgado-Buscalioni, Rafael; Coveney, Peter V.

    2006-03-01

    We analyse the structure of a single polymer tethered to a solid surface undergoing a Couette flow. We study the problem using molecular dynamics (MD) and hybrid MD-continuum simulations, wherein the polymer and the surrounding solvent are treated via standard MD, and the solvent flow farther away from the polymer is solved by continuum fluid dynamics (CFD). The polymer represents a freely jointed chain (FJC) and is modelled by Lennard-Jones (LJ) beads interacting through the FENE potential. The solvent (modelled as a LJ fluid) and a weakly attractive wall are treated at the molecular level. At large shear rates the polymer becomes more elongated than predicted by existing theoretical scaling laws. Also, along the normal-to-wall direction the structure observed for the FJC is, surprisingly, very similar to that predicted for a semiflexible chain. Comparison with previous Brownian dynamics simulations (which exclude both solvent and wall potential) indicates that these effects are due to the polymer-solvent and polymer-wall interactions. The hybrid simulations are in perfect agreement with the MD simulations, showing no trace of finite size effects. Importantly, the extra cost required to couple the MD and CFD domains is negligible.

  1. Geometric and topological characterization of porous media: insights from eigenvector centrality

    NASA Astrophysics Data System (ADS)

    Jimenez-Martinez, J.; Negre, C.

    2017-12-01

Solving flow and transport through complex geometries such as porous media involves an extreme computational cost. Simplifications such as pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models have the ability to preserve the connectivity of the medium. However, they have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Network theory approaches, where the complex network is conceptualized as a graph, can help to simplify and better understand fluid dynamics and transport in porous media. To address this issue, we propose a method based on eigenvector centrality. It has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction, which allows the flow and transport anisotropy in porous media to be taken into account. The model predictions are compared with millifluidic transport experiments, showing that this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. Entropy computed from the eigenvector centrality probability distribution is proposed as an indicator of the "mixing capacity" of the system.
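
Eigenvector centrality on a pore network can be sketched with plain power iteration; the toy network and the plain, unbiased centrality below are illustrative assumptions, and omit the correction and directional bias the authors introduce.

```python
# Power-iteration sketch of eigenvector centrality on a toy pore network
# (pores = nodes, throats = edges). The network is made up for illustration.

def eigenvector_centrality(adj, iters=100):
    """Repeatedly apply the adjacency operator and normalise the vector."""
    nodes = sorted(adj)
    c = {n: 1.0 for n in nodes}
    for _ in range(iters):
        new = {n: sum(c[m] for m in adj[n]) for n in nodes}
        norm = sum(v * v for v in new.values()) ** 0.5
        c = {n: v / norm for n, v in new.items()}
    return c

# Toy network: pore 0 is a well-connected hub on a "preferential path".
network = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
cent = eigenvector_centrality(network)
print(max(cent, key=cent.get))  # pore 0 has the highest centrality
```

High-centrality pores would flag candidate preferential paths, low-centrality pores candidate stagnation zones, which is the intuition the abstract builds on.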

  2. Large eddy simulation for aerodynamics: status and perspectives.

    PubMed

    Sagaut, Pierre; Deck, Sébastien

    2009-07-28

The present paper provides an up-to-date survey of the use of large eddy simulation (LES) and its sequels for engineering applications related to aerodynamics. The most recent landmark achievements are presented. Two categories of problem may be distinguished, according to whether or not the location of separation is triggered by the geometry. In the first case, LES can be considered a mature technique, and recent hybrid Reynolds-averaged Navier-Stokes (RANS)-LES methods do not allow for a significant increase in geometrical complexity and/or Reynolds number with respect to classical LES. When attached boundary layers have a significant impact on the global flow dynamics, the use of hybrid RANS-LES remains the principal strategy to reduce computational cost compared to LES. Another striking observation is that the level of validation is most of the time restricted to time-averaged global quantities, a detailed analysis of the flow unsteadiness being missing. Therefore, a clear need for detailed validation in the near future is identified. To this end, new issues, such as uncertainty and error quantification and modelling, will be of major importance. First results dealing with uncertainty modelling in unsteady turbulent flow simulation are presented.

  3. Pressure-Aware Control Layer Optimization for Flow-Based Microfluidic Biochips.

    PubMed

    Wang, Qin; Xu, Yue; Zuo, Shiliang; Yao, Hailong; Ho, Tsung-Yi; Li, Bing; Schlichtmann, Ulf; Cai, Yici

    2017-12-01

Flow-based microfluidic biochips are attracting increasing attention with successful biomedical applications. One critical issue with flow-based microfluidic biochips is the large number of microvalves that require peripheral control pins. Even using the broadcasting addressing scheme, i.e., one control pin controlling multiple microvalves simultaneously, thousands of microvalves would still require hundreds of control pins, which is unrealistic. To address this critical challenge in control scalability, the control-layer multiplexer is introduced to effectively reduce the number of control pins to a logarithmic scale of the number of microvalves. There are two practical design issues with the control-layer multiplexer: (1) the reliability issue caused by frequent control-valve switching, and (2) the pressure degradation problem caused by control-valve switching without pressure refreshing from the pressure source. This paper addresses these two design issues with the proposed Hamming-distance-based switching sequence optimization method and the XOR-based pressure refreshing method. Simulation results demonstrate the effectiveness and efficiency of the proposed methods, with an average 77.2% (maximum 89.6%) improvement in total pressure refreshing cost and an average 88.5% (maximum 90.0%) improvement in pressure deviation.
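
The idea of ordering control states by Hamming distance can be illustrated with a simple greedy nearest-neighbour sketch; this is not the paper's exact optimization method, and the bit patterns are made up.

```python
# Illustrative greedy sketch: order a set of control-pin bit patterns by
# nearest Hamming distance so that consecutive multiplexer states switch as
# few valves as possible. Patterns are hypothetical.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_order(patterns):
    """Nearest-neighbour tour over the patterns, starting from the first."""
    order, remaining = [patterns[0]], list(patterns[1:])
    while remaining:
        nxt = min(remaining, key=lambda p: hamming(order[-1], p))
        remaining.remove(nxt)
        order.append(nxt)
    return order

pats = ["0000", "1111", "0001", "0011"]
seq = greedy_order(pats)
switch_cost = sum(hamming(a, b) for a, b in zip(seq, seq[1:]))
print(seq, switch_cost)  # reordered sequence switches 4 bits in total, vs 8 in the listed order
```

Fewer bit flips between consecutive states means fewer control-valve actuations, which is exactly the reliability lever the abstract describes.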

  4. All-soluble all-iron aqueous redox-flow battery

    DOE PAGES

    Gong, Ke; Xu, Fei; Grunewald, Jonathan B.; ...

    2016-05-03

The rapid growth of intermittent renewable energy (e.g., wind and solar) demands low-cost and large-scale energy storage systems for smooth and reliable power output, where redox-flow batteries (RFBs) could find their niche. In this work, we introduce the first all-soluble all-iron RFB based on iron as the same redox-active element but with different coordination chemistries in an alkaline aqueous system. The adoption of the same redox-active element largely alleviates the challenging problem of cross-contamination of metal ions in RFBs that use two redox-active elements. An all-soluble all-iron RFB is constructed by combining an iron–triethanolamine redox pair (i.e., [Fe(TEOA)OH]⁻/[Fe(TEOA)(OH)]²⁻) and an iron–cyanide redox pair (i.e., Fe(CN)₆³⁻/Fe(CN)₆⁴⁻), creating 1.34 V of formal cell voltage. Furthermore, good performance and stability have been demonstrated, after addressing some challenges, including the crossover of the ligand agent. As exemplified by the all-soluble all-iron flow battery, combining redox pairs of the same redox-active element with different coordination chemistries could extend the spectrum of RFBs.

  5. Phosphate Detection through a Cost-Effective Carbon Black Nanoparticle-Modified Screen-Printed Electrode Embedded in a Continuous Flow System.

    PubMed

    Talarico, Daria; Cinti, Stefano; Arduini, Fabiana; Amine, Aziz; Moscone, Danila; Palleschi, Giuseppe

    2015-07-07

An automatable flow system for the continuous and long-term monitoring of the phosphate level has been developed, using an amperometric detection method based on a miniaturized sensor. The method monitors an electroactive complex formed by the reaction between phosphate and molybdate, which is then reduced at the electrode surface. The use of a screen-printed electrode modified with carbon black nanoparticles (CBNPs) allows quantification of the complex at low potential, because CBNPs are capable of electrocatalytically enhancing the phosphomolybdate complex reduction at +125 mV versus Ag/AgCl without fouling problems. The developed system also incorporates reagent and waste storage and is connected to a portable potentiostat for rapid detection and quantification of phosphate. The main analytical parameters, such as working potential, reagent concentration, type of cell, and flow rate, were evaluated and optimized. The system was characterized by a low detection limit (6 μM). Interference studies were carried out. Good recovery percentages, between 89% and 131.5%, were achieved in different water sources, highlighting its suitability for field measurements.

  6. Improved teaching-learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Buddala, Raviteja; Mahapatra, Siba Sankar

    2017-11-01

Flexible flow shop (or hybrid flow shop) scheduling is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages), with each stage having only one machine. If any stage contains more than one machine providing an alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). The FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving the FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for the study, because these are not only recent meta-heuristics but also require no tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped in local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by the genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. The results also show that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
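
The parameter-free JAYA update rule can be sketched on a continuous test function: each candidate moves toward the current best solution and away from the worst. The paper applies a discretised variant (plus local search and mutation) to FFSP makespan, so the sphere objective, bounds, and population settings below are illustrative assumptions.

```python
import random

# Minimal JAYA sketch on the sphere function. No algorithm-specific
# parameters are tuned, only common settings (population size, iterations).
random.seed(0)

def sphere(x):
    return sum(v * v for v in x)

def jaya(f, dim=2, pop=30, iters=300, lo=-5.0, hi=5.0):
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        best = min(P, key=f)
        worst = max(P, key=f)
        for i, x in enumerate(P):
            r1, r2 = random.random(), random.random()
            cand = [
                min(hi, max(lo, v + r1 * (best[j] - abs(v))
                                 - r2 * (worst[j] - abs(v))))
                for j, v in enumerate(x)
            ]
            if f(cand) < f(x):   # greedy acceptance: keep only improvements
                P[i] = cand
    return min(P, key=f)

x_best = jaya(sphere)
print("best objective:", sphere(x_best))
```

Because the update uses only the best and worst population members, there are no crossover rates, inertia weights, or similar knobs to tune, which is the property the abstract highlights.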

  7. Rarefied-continuum gas dynamics transition for SUMS project

    NASA Technical Reports Server (NTRS)

    Cheng, Sin-I

    1989-01-01

The aim of this program is to develop an analytic method for reducing SUMS data to determine the undisturbed atmosphere conditions ahead of the shuttle along its descending trajectory. It is divided into an internal flow problem, an external flow problem, and their matching conditions. Since the existing Direct Simulation Monte Carlo (DSMC) method failed completely for the internal flow problem, the emphasis is on the internal flow of a highly non-equilibrium, rarefied air through a short tube of a diameter much less than the gaseous mean free path. A two-fluid model analysis of this internal flow problem has been developed and studied, with typical results illustrated. A computer program for such an analysis and a technical paper published in Lecture Notes in Physics No. 323 (1989) are included as Appendices 3 and 4. A proposal for in situ determination of the surface accommodation coefficients σ_t and σ_e is included in Appendix 5 because of their importance in quantitative data reduction. A two-fluid formulation for the external flow problem is included as Appendix 6, and a review article for AIAA on hypersonic propulsion, much dependent on ambient atmospheric density, is also included as Appendix 7.

  8. Data mining to support simulation modeling of patient flow in hospitals.

    PubMed

    Isken, Mark W; Rajagopalan, Balaji

    2002-04-01

Spiraling health care costs in the United States are driving institutions to continually address the challenge of optimizing the use of scarce resources. One of the first steps towards optimizing resources is to utilize capacity effectively. For hospital capacity planning problems such as the allocation of inpatient beds, computer simulation is often the method of choice. One of the more difficult aspects of using simulation models for such studies is the creation of a manageable set of patient types to include in the model. The objective of this paper is to demonstrate the potential of using data mining techniques, specifically clustering techniques such as K-means, to help guide the development of patient type definitions for purposes of building computer simulation or analytical models of patient flow in hospitals. Using data from a hospital in the Midwest, this study brings forth several important issues that researchers need to address when applying clustering techniques in general and to hospital data specifically.
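
The clustering step can be sketched with a minimal K-means (Lloyd's algorithm) on invented (length-of-stay, relative cost) pairs; the data, the two features, and k=2 are purely illustrative, not the hospital data used in the study.

```python
import random

# Minimal K-means sketch: alternate between assigning points to the nearest
# centre and recomputing centres, to suggest patient-type groups.
random.seed(1)

def kmeans(points, k, iters=50):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Invented short-stay/low-cost vs long-stay/high-cost patients.
data = [(1, 2), (2, 3), (1, 1), (9, 10), (10, 12), (11, 9)]
centers, clusters = kmeans(data, k=2)
print(sorted(len(cl) for cl in clusters))  # two groups of three patients each
```

The resulting cluster centres would become candidate patient-type definitions; the paper's point is that feature choice and the number of clusters are the genuinely hard modeling decisions.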

  9. BORE II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Bore II, co-developed by Berkeley Lab researchers Frank Hale, Chin-Fu Tsang, and Christine Doughty, provides vital information for solving water quality and supply problems and for improving remediation of contaminated sites. Termed "hydrophysical logging," this technology is based on the concept of measuring repeated depth profiles of fluid electric conductivity in a borehole that is pumping. As fluid enters the wellbore, its distinct electric conductivity causes peaks in the conductivity log that grow and migrate upward with time. Analysis of the evolution of the peaks enables characterization of the groundwater flow distribution more quickly, more cost-effectively, and with higher resolution than ever before. Combining the unique interpretation software Bore II with advanced downhole instrumentation (the hydrophysical logging tool), the method quantifies inflow and outflow locations, their associated flow rates, and the basic water quality parameters of the associated formation waters (e.g., pH, oxidation-reduction potential, temperature). In addition, when applied in conjunction with downhole fluid sampling, Bore II makes possible a complete assessment of contaminant concentration within groundwater.

  10. A semi-automatic method for extracting thin line structures in images as rooted tree network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-01-01

This paper addresses the problem of semi-automatic extraction of line networks in digital images, e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
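
The minimum cost path component can be sketched with Dijkstra's algorithm over a toy cost image; this simplified sketch uses an isotropic per-pixel cost, not the paper's anisotropic structure-tensor metric.

```python
import heapq

# Minimum-cost path on a toy "image": Dijkstra over a grid whose cell values
# play the role of the metric (here isotropic, unlike the paper's).

def min_cost_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]  # entering a cell costs its value
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Dark ridge of 1s (a "vessel") through a bright background of 9s.
img = [[9, 1, 9],
       [9, 1, 9],
       [9, 1, 9]]
print(min_cost_path(img, (0, 1), (2, 1)))  # 3: the geodesic follows the vessel
```

Even from an off-ridge start the cheapest path detours onto the low-cost ridge, which is how geodesic propagation latches onto line structures.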

  11. Reactivation of dead sulfide species in lithium polysulfide flow battery for grid scale energy storage

    DOE PAGES

    Jin, Yang; Zhou, Guangmin; Shi, Feifei; ...

    2017-09-06

Lithium polysulfide batteries possess several favorable attributes including low cost and high energy density for grid energy storage. However, the precipitation of insoluble and irreversible sulfide species on the surface of carbon and lithium (called "dead" sulfide species) leads to continuous capacity degradation in high mass loading cells, which represents a great challenge. To address this problem, herein we propose a strategy to reactivate dead sulfide species by reacting them with sulfur powder with stirring and heating (70 °C) to recover the cell capacity, and further demonstrate a flow battery system based on the reactivation approach. As a result, ultrahigh mass loading (0.125 g cm⁻³, 2 g sulfur in a single cell), high volumetric energy density (135 Wh L⁻¹), good cycle life, and high single-cell capacity are achieved. The high volumetric energy density indicates its promising application for future grid energy storage.

  13. A Model of Small Capacity Power Plant in Tateli Village, North Sulawesi

    NASA Astrophysics Data System (ADS)

    Sangari, F. J.; Rompas, P. T. D.

    2017-03-01

The electricity supply in North Sulawesi is still very limited, so power outages are ubiquitous. This creates problems for rural communities, since most daily needs depend on electrical energy. One solution is a model power plant to supply electricity in Tateli village, Minahasa, North Sulawesi, Indonesia. The objective of this research is to obtain a model that generates electrical energy for household needs through a picohydro power plant with a cross-flow turbine in Tateli village. The methods used were a literature study, a survey of the construction site of the power plant and the characteristics of the location chosen for the research, an analysis of the hydropower capability, and an analysis of the power plant costs. The results showed that the designed cross-flow turbine model used in the picohydro installation, connected to a generator, produces a maximum of 3.29 kW of electrical energy for household needs. This analysis will be proposed to the local government of Minahasa, North Sulawesi, Indonesia for follow-up.
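
Hydropower sizing of this kind follows P = η·ρ·g·Q·H. The flow rate, head, and overall efficiency below are illustrative guesses, not the paper's site data, chosen only to land near the reported ~3.29 kW scale.

```python
# Back-of-envelope picohydro sizing, P = efficiency * rho * g * Q * H.

def hydro_power_kw(q_m3s, head_m, efficiency):
    rho, g = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)
    return efficiency * rho * g * q_m3s * head_m / 1000.0

# e.g. 56 L/s of flow over a 10 m head at 60% overall efficiency
print(round(hydro_power_kw(0.056, 10.0, 0.6), 2), "kW")  # ≈ 3.3 kW
```

The formula makes the design trade-off explicit: at a fixed site head, output scales linearly with the captured flow and the turbine-generator efficiency.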

  14. Coandă configured aircraft: A preliminary analytical assessment

    NASA Astrophysics Data System (ADS)

    Hamid, M. F. Abdul; Gires, E.; Harithuddin, A. S. M.; Abu Talib, A. R.; Rafie, A. S. M.; Romli, F. I.; Harmin, M. Y.

    2017-12-01

The interest in the use of flow control for enhanced aerodynamic performance has grown, particularly in the use of jets (continuous, synthetic, pulsed, etc.), compliant surfaces, vortex cells, and others. It has been widely documented that these active control concepts can dramatically alter the behaviour of aerodynamic components like airfoils, wings and bodies. In this connection, with the present demands for low-cost and efficient flight, the use of the Coandă effect as a lift enhancer has attracted a lot of interest. Tangential jets that take advantage of the Coandă effect to closely follow the contours of the body have been considered simple and particularly effective. In this case, a large mass of surrounding air can be entrained, hence amplifying the circulation. In an effort to optimize the aerodynamic performance of an aircraft, this effect is critically reviewed here, taking advantage of recent progress. For this purpose, in this study, the design of a Coandă-configured aircraft wing is mathematically idealized and modelled as a two-dimensional flow problem.

  15. Application of program generation technology in solving heat and flow problems

    NASA Astrophysics Data System (ADS)

    Wan, Shui; Wu, Bangxian; Chen, Ningning

    2007-05-01

Based on a new DIY concept for software development, an automatic program-generating technology built into a software system called the Finite Element Program Generator (FEPG) provides a platform for developing programs, through which a scientific researcher can submit a specific physico-mathematical problem to the system in a more direct and convenient way for solution. For solving flow and heat problems with the finite element method, stabilization technologies and fractional-step methods are adopted to overcome the numerical difficulties caused mainly by dominant convection. A couple of benchmark problems are given in this paper as examples to illustrate the usage and the advantages of the automatic program generation technique, including the flow in a lid-driven cavity, the starting flow in a circular pipe, the natural convection in a square cavity, and the flow past a circular cylinder. They also serve as verification of the algorithms.

  16. Cost-aware request routing in multi-geography cloud data centres using software-defined networking

    NASA Astrophysics Data System (ADS)

    Yuan, Haitao; Bi, Jing; Li, Bo Hu; Tan, Wei

    2017-03-01

Current geographically distributed cloud data centres (CDCs) incur enormous energy and bandwidth costs to provide multiple cloud applications to users around the world. Previous studies focus only on energy cost minimisation in distributed CDCs. However, a CDC provider needs to deliver enormous amounts of data between users and distributed CDCs through internet service providers (ISPs). The geographical diversity of bandwidth and energy costs raises the highly challenging problem of how to minimise the total cost of a CDC provider. With the recently emerging software-defined networking, we study the total cost minimisation problem for a CDC provider by exploiting the geographical diversity of energy and bandwidth costs. We formulate the total cost minimisation problem as a mixed integer non-linear program (MINLP). Then, we develop heuristic algorithms to solve the problem and to provide cost-aware request routing for the joint optimisation of the selection of ISPs and the number of servers in distributed CDCs. Besides, to tackle the dynamic workload in distributed CDCs, this article proposes a regression-based workload prediction method to obtain the future incoming workload. Finally, this work evaluates the cost-aware request routing by trace-driven simulation and compares it with existing approaches to demonstrate its effectiveness.
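
The flavour of cost-aware request routing can be sketched with a toy greedy rule that picks, for each user region, the cheapest feasible ISP/CDC pair under a capacity cap. The paper solves a full MINLP with heuristics, so the rule, the names, and the cost figures here are all illustrative.

```python
# Toy greedy routing sketch: per-region cheapest feasible option. The option
# names and per-request costs (bandwidth + energy) are hypothetical.

def route(demands, options):
    """demands: {region: requests}; options: {region: [(name, cost, cap), ...]}"""
    plan, total = {}, 0.0
    for region, load in demands.items():
        feasible = [o for o in options[region] if o[2] >= load]
        name, unit_cost, _ = min(feasible, key=lambda o: o[1])
        plan[region] = name
        total += unit_cost * load
    return plan, total

demands = {"eu": 100, "us": 300}
options = {
    "eu": [("isp-a/cdc-1", 0.05, 500), ("isp-b/cdc-2", 0.03, 80)],  # cheaper, but too small
    "us": [("isp-c/cdc-1", 0.02, 400), ("isp-d/cdc-3", 0.04, 400)],
}
plan, total = route(demands, options)
print(plan, total)  # "eu" must use isp-a/cdc-1; total = 100*0.05 + 300*0.02 = 11.0
```

Capacity constraints are what make the real problem hard: the cheapest option per region is not always feasible, and joint decisions across regions (which a greedy rule ignores) are why the paper needs an MINLP formulation.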

  17. Anaerobic co-digestion of dairy manure and potato waste

    NASA Astrophysics Data System (ADS)

    Yadanaparthi, Sai Krishna Reddy

Dairy and potato are two important agricultural commodities in Idaho. Both the dairy and potato processing industries produce a huge amount of waste which could cause environmental pollution. To minimize the impact of potential pollution associated with dairy manure (DM) and potato waste (PW), anaerobic co-digestion has been considered one of the best treatment processes. The purpose of this research is to evaluate the anaerobic co-digestion of dairy manure and potato waste in terms of process stability, biogas generation, construction and operating costs, and potential revenue. For this purpose, I conducted 1) a literature review, 2) a lab study on anaerobic co-digestion of dairy manure and potato waste at three different temperature ranges (ambient (20-25°C), mesophilic (35-37°C) and thermophilic (55-57°C)) with five mixing ratios (DM:PW-100:0, 90:10, 80:20, 60:40, 40:60), and 3) a financial analysis for anaerobic digesters based on assumed different capital costs and the results from the lab co-digestion study. The literature review indicates that several types of organic waste were co-digested with DM. Dairy manure is a suitable base matter for the co-digestion process in terms of digestion process stability and methane (CH4) production (Chapter 2). The lab tests showed that co-digestion of DM with PW was better than digestion of DM alone in terms of biogas and CH4 production (Chapter 3). The financial analysis reveals that DM and PW can be used as substrate for full-size anaerobic digesters to generate positive cash flow within a ten-year time period. Based on this research, the following conclusions and recommendations were made: ▸ The ratio of DM:PW-80:20 is recommended at thermophilic temperatures and the ratio of DM:PW-90:10 is recommended at mesophilic temperatures for optimum biogas and CH4 production.
▸ In cases of anaerobic digesters operated with electricity generation equipment (generators), low cost plug flow digesters (capital cost of $600/cow) operating at thermophilic temperatures are recommended. • The ratio of DM:PW-90:10 or 80:20 is recommended while operating low cost plug flow digesters at thermophilic temperatures. ▸ In cases of anaerobic digesters operated without electricity generation equipment (generators), completely mixed or high or low cost plug flow digesters can be used. • The ratio of DM:PW-80:20 is recommended for completely mixed digesters operated at thermophilic temperatures; • The ratio of DM:PW-90:10 or 80:20 is recommended for high cost plug flow digesters (capital cost of $1,000/cow) operated at thermophilic temperatures; • All of the four co-digested mixing ratios (i.e. DM:PW-90:10 or 80:20 or 60:40 or 40:60) are good for low cost plug flow digesters (capital cost of $600/cow) operated at thermophilic temperatures. The ratio of DM:PW-90:10 is recommended for positive cash flow within the ten-year period if the low cost plug flow digesters are operated at mesophilic temperatures.

  18. Optimal feedback control of turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Bewley, Thomas; Choi, Haecheon; Temam, Roger; Moin, Parviz

    1993-01-01

Feedback control equations were developed and tested for computing wall-normal control velocities to control turbulent flow in a channel, with the objective of reducing drag. The technique used is the minimization of a 'cost functional' constructed to represent a balance of the drag integrated over the wall and the net control effort. A distribution of wall velocities is found which minimizes this cost functional a short time in the future, based on current observations of the flow near the wall. Preliminary direct numerical simulations of the scheme applied to turbulent channel flow indicate that it provides approximately 17 percent drag reduction. The mechanism apparent when the scheme is applied to a simplified flow situation is also discussed.

  19. Computation of the shock-wave boundary layer interaction with flow separation

    NASA Technical Reports Server (NTRS)

    Ardonceau, P.; Alziary, T.; Aymer, D.

    1980-01-01

    The boundary layer concept is used to describe the flow near the wall. The external flow is approximated by a pressure-displacement relationship (tangent wedge in linearized supersonic flow). The boundary layer equations are solved in finite difference form, and the question of the existence and uniqueness of the solution is considered for the direct problem (assumed pressure) or the converse problem (assumed displacement thickness, friction ratio). The coupling algorithm presented implicitly processes the downstream boundary condition necessary to correctly define the interacting boundary layer problem. The algorithm uses a Newton linearization technique to provide fast convergence.

  20. Cost and performance prospects for composite bipolar plates in fuel cells and redox flow batteries

    NASA Astrophysics Data System (ADS)

    Minke, Christine; Hickmann, Thorsten; dos Santos, Antonio R.; Kunz, Ulrich; Turek, Thomas

    2016-02-01

    Carbon-polymer-composite bipolar plates (BPP) are suitable for fuel cell and flow battery applications. The advantages of both components are combined in a product with high electrical conductivity and good processability in convenient polymer forming processes. In a comprehensive techno-economic analysis of materials and production processes, cost factors are quantified. For the first time, a technical cost model for BPP is set up with tight integration of material-characterization measurements.

  1. An efficient distribution method for nonlinear transport problems in highly heterogeneous stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi

    2016-04-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
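    Reading P10/P50/P90 off an estimated saturation distribution amounts to evaluating empirical quantiles. A minimal sketch (a generic interpolated-quantile helper, not the paper's estimator; the sample values are invented for illustration):

    ```python
    def quantile(samples, p):
        """Empirical p-quantile with linear interpolation between order
        statistics; a generic stand-in for reading P10/P50/P90 off a
        saturation CDF estimated by the distribution method or by MC."""
        s = sorted(samples)
        k = (len(s) - 1) * p          # fractional index into the sorted sample
        i = int(k)
        frac = k - i
        if i + 1 >= len(s):
            return s[-1]
        return s[i] * (1.0 - frac) + s[i + 1] * frac

    # P10, P50, P90 of a small (hypothetical) sample of saturation values
    sats = [0.12, 0.35, 0.28, 0.44, 0.19, 0.51, 0.23, 0.40, 0.31, 0.47]
    p10, p50, p90 = (quantile(sats, q) for q in (0.10, 0.50, 0.90))
    ```

    In practice the distribution method would supply the CDF directly, so the quantiles come from inverting it rather than from sorting raw MC output.
    
    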

  2. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the best-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover, we prove that it belongs to the class C^1.
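    In the interior case (no binding box constraints), the Cobb-Douglas cost-minimizing input demands have a closed form that follows from the first-order conditions. A sketch of that interior solution (illustrative only; the paper's contribution is precisely the extension to binding maximum constraints, which this helper ignores):

    ```python
    def cobb_douglas_demands(w1, w2, alpha, beta, A, y):
        """Interior cost-minimizing input demands for output target y under
        the technology f(x1, x2) = A * x1**alpha * x2**beta with input
        prices w1, w2. Box constraints x_i <= M_i are NOT handled here."""
        s = alpha + beta
        x1 = ((y / A) * (alpha * w2 / (beta * w1)) ** beta) ** (1.0 / s)
        x2 = ((y / A) * (beta * w1 / (alpha * w2)) ** alpha) ** (1.0 / s)
        return x1, x2

    # usage: prices (2, 3), exponents 0.5/0.5, A = 1, output target y = 6
    x1, x2 = cobb_douglas_demands(2.0, 3.0, 0.5, 0.5, 1.0, 6.0)
    cost = 2.0 * x1 + 3.0 * x2
    # the chosen bundle is feasible: f(x1, x2) == y up to rounding
    assert abs(1.0 * x1 ** 0.5 * x2 ** 0.5 - 6.0) < 1e-9
    ```

    With equal exponents the expenditure shares are equal (w1·x1 = w2·x2), a quick sanity check on the formula.
    
    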

  3. Numerical Simulation of Interaction of Human Vocal Folds and Fluid Flow

    NASA Astrophysics Data System (ADS)

    Kosík, A.; Feistauer, M.; Horáček, J.; Sváček, P.

    Our goal is to simulate airflow in human vocal folds and their flow-induced vibrations. We consider two-dimensional viscous incompressible flow in a time-dependent domain. The fluid flow is described by the Navier-Stokes equations in the arbitrary Lagrangian-Eulerian formulation. The flow problem is coupled with the elastic behaviour of the solid bodies. The developed solution of the coupled problem based on the finite element method is demonstrated by numerical experiments.

  4. Driving Parameters for Distributed and Centralized Air Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Feron, Eric

    2001-01-01

    This report considers the problem of intersecting aircraft flows under decentralized conflict avoidance rules. Using an Eulerian standpoint (aircraft flow through a fixed control volume), new air traffic control models and scenarios are defined that enable the study of long-term airspace stability problems. Considering a class of two intersecting aircraft flows, it is shown that airspace stability, defined both in terms of safety and performance, is preserved under decentralized conflict resolution algorithms. Performance bounds are derived for the aircraft flow problem under different maneuver models. Besides analytical approaches, numerical examples are presented to test the theoretical results, as well as to generate some insight about the structure of the traffic flow after resolution. Considering more than two intersecting aircraft flows, simulations indicate that flow stability may not be guaranteed under simple conflict avoidance rules. Finally, a comparison is made with centralized strategies to conflict resolution.

  5. St. Louis demonstration final report: refuse processing plant equipment, facilities, and environmental evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiscus, D.E.; Gorman, P.G.; Schrag, M.P.

    1977-09-01

    The results are presented of processing plant evaluations of the St. Louis-Union Electric Refuse Fuel Project, including equipment and facilities as well as assessment of environmental emissions at both the processing and the power plants. Data on plant material flows and operating parameters, plant operating costs, characteristics of plant material flows, and emissions from various processing operations were obtained during a testing program encompassing 53 calendar weeks. Refuse derived fuel (RDF) is the major product (80.6% by weight) of the refuse processing plant, the other being ferrous metal scrap, a marketable by-product. Average operating costs for the entire evaluation period were $8.26/Mg ($7.49/ton). The average overall processing rate for the period was 168 Mg/8-h day (185.5 tons/8-h day) at 31.0 Mg/h (34.2 tons/h). Future plants using an air classification system of the type used at the St. Louis demonstration plant will need an emissions control device for particulates from the large de-entrainment cyclone. Also in the air exhaust from the cyclone were total counts of bacteria and viruses several times higher than those of suburban ambient air. No water effluent or noise exposure problems were encountered, although landfill leachate mixed with ground water could result in contamination, given low dilution rates.

  6. Optimal design of wind barriers using 3D computational fluid dynamics simulations

    NASA Astrophysics Data System (ADS)

    Fang, H.; Wu, X.; Yang, X.

    2017-12-01

    Desertification is a significant global environmental and ecological problem that requires human-regulated control and management. Wind barriers are commonly used to reduce wind velocity or trap drifting sand in arid or semi-arid areas. Therefore, optimal design of wind barriers becomes critical in Aeolian engineering. In the current study, we perform 3D computational fluid dynamics (CFD) simulations for flow passing through wind barriers with different structural parameters. To validate the simulation results, we first inter-compare the simulated flow field results with those from both wind-tunnel experiments and field measurements. Quantitative analyses of the shelter effect are then conducted based on a series of simulations with different structural parameters (such as wind barrier porosity, row numbers, inter-row spacing and belt schemes). The results show that wind barriers with porosity of 0.35 could provide the longest shelter distance (i.e., where the wind velocity reduction is more than 50%) and thus are recommended in engineering designs. To determine the optimal row number and belt scheme, we introduce a cost function that takes both wind-velocity reduction effects and economical expense into account. The calculated cost function shows that a 3-row-belt scheme with inter-row spacing of 6h (where h is the height of the wind barriers) and inter-belt spacing of 12h is the most effective.

  7. High Response Dew Point Measurement System for a Supersonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Blumenthal, Philip Z.

    1996-01-01

    A new high response on-line measurement system has been developed to continuously display and record the air stream dew point in the NASA Lewis 10 x 10 supersonic wind tunnel. Previous instruments suffered from such problems as very slow response, erratic readings, and high susceptibility to contamination. The system operates over the entire pressure level range of the 10 x 10 SWT, from less than 2 psia to 45 psia, without the need for a vacuum pump to provide sample flow. The system speeds up tunnel testing, provides large savings in tunnel power costs and provides the dew point input for the data-reduction subroutines which calculate test section conditions.

  8. Three-dimensional visualization and display technologies; Proceedings of the Meeting, Los Angeles, CA, Jan. 18-20, 1989

    NASA Technical Reports Server (NTRS)

    Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)

    1989-01-01

    Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user-interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.

  9. A harmonic pulse testing method for leakage detection in deep subsurface storage formations

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Lu, Jiemin; Hovorka, Susan

    2015-06-01

    Detection of leakage in deep geologic storage formations (e.g., carbon sequestration sites) is a challenging problem. This study investigates an easy-to-implement frequency-domain leakage detection technology based on harmonic pulse testing (HPT). Unlike conventional constant-rate pressure interference tests, HPT stimulates a reservoir using periodic injection rates. The fundamental principle underlying HPT-based leakage detection is that leakage modifies a storage system's frequency response function, thus providing clues to system malfunction. During operations, routine HPTs can be conducted at multiple pulsing frequencies to obtain experimental frequency response functions, from which possible time-lapse changes are examined. In this work, a set of analytical frequency response solutions is derived for predicting system responses with and without leaks for single-phase flow systems. Sensitivity studies show that HPT can effectively reveal the presence of leaks. A search procedure is then prescribed for locating the actual leaks using amplitude and phase information obtained from HPT, and the resulting optimization problem is solved using a genetic algorithm. For multiphase flows, the applicability of the HPT-based leakage detection procedure is exemplified numerically using a carbon sequestration problem. Results show that the detection procedure is applicable if the average reservoir conditions in the testing zone stay relatively constant during the tests, which is a working assumption under many other interpretation methods for pressure interference tests. HPT is a cost-effective tool that only requires periodic modification of the nominal injection rate. Thus it can be incorporated into existing monitoring plans with little additional investment.
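    The experimental frequency response function described above can be estimated from pulse-test records by projecting the injection-rate and pressure signals onto the pulsing frequency (a lock-in style estimate). A minimal sketch under simplifying assumptions (linear response, uniform sampling over whole periods; the helper and the synthetic gain/phase are hypothetical, not the paper's data):

    ```python
    import cmath
    import math

    def frequency_response(times, rate, pressure, omega):
        """Estimate the complex frequency-response value H(omega) from a
        harmonic pulse test: ratio of the single-frequency Fourier
        coefficients of the pressure and injection-rate records."""
        def coeff(signal):
            return sum(s * cmath.exp(-1j * omega * t)
                       for s, t in zip(signal, times)) / len(signal)
        return coeff(pressure) / coeff(rate)

    # synthetic check: a linear "reservoir" with gain 2 and 30-degree phase lag
    omega = 2.0 * math.pi * 0.01                 # pulsing frequency, rad/s
    times = [float(i) for i in range(10000)]     # 1 s sampling, 100 full periods
    rate = [math.sin(omega * t) for t in times]
    pressure = [2.0 * math.sin(omega * t - math.pi / 6.0) for t in times]
    H = frequency_response(times, rate, pressure, omega)  # |H| ≈ 2, phase ≈ -30°
    ```

    Time-lapse leakage detection then compares |H| and arg(H) at each pulsing frequency against a baseline survey.
    
    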

  10. Geometric MCMC for infinite-dimensional inverse problems

    NASA Astrophysics Data System (ADS)

    Beskos, Alexandros; Girolami, Mark; Lan, Shiwei; Farrell, Patrick E.; Stuart, Andrew M.

    2017-04-01

    Bayesian inverse problems often involve sampling posterior distributions on infinite-dimensional function spaces. Traditional Markov chain Monte Carlo (MCMC) algorithms are characterized by deteriorating mixing times upon mesh-refinement, when the finite-dimensional approximations become more accurate. Such methods are typically forced to reduce step-sizes as the discretization gets finer, and thus are expensive as a function of dimension. Recently, a new class of MCMC methods with mesh-independent convergence times has emerged. However, few of them take into account the geometry of the posterior informed by the data. At the same time, recently developed geometric MCMC algorithms have been found to be powerful in exploring complicated distributions that deviate significantly from elliptic Gaussian laws, but are in general computationally intractable for models defined in infinite dimensions. In this work, we combine geometric methods on a finite-dimensional subspace with mesh-independent infinite-dimensional approaches. Our objective is to speed up MCMC mixing times, without significantly increasing the computational cost per step (for instance, in comparison with the vanilla preconditioned Crank-Nicolson (pCN) method). This is achieved by using ideas from geometric MCMC to probe the complex structure of an intrinsic finite-dimensional subspace where most data information concentrates, while retaining robust mixing times as the dimension grows by using pCN-like methods in the complementary subspace. The resulting algorithms are demonstrated in the context of three challenging inverse problems arising in subsurface flow, heat conduction and incompressible flow control. The algorithms exhibit up to two orders of magnitude improvement in sampling efficiency when compared with the pCN method.
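    The pCN method used as the baseline above is simple to state: the proposal is a prior-preserving autoregressive move, and the acceptance ratio involves only the likelihood, which is what keeps the acceptance rate from degenerating under mesh refinement. A 1D sketch (a scalar stand-in for the function-space setting; the toy likelihood is an assumption for illustration):

    ```python
    import math
    import random

    def pcn_chain(phi, beta, n_steps, seed=0):
        """Sketch of a preconditioned Crank-Nicolson (pCN) sampler for a
        posterior proportional to exp(-phi(u)) times an N(0, 1) prior."""
        rng = random.Random(seed)
        u, samples = 0.0, []
        for _ in range(n_steps):
            # pCN proposal: autoregressive move that preserves the prior
            v = math.sqrt(1.0 - beta ** 2) * u + beta * rng.gauss(0.0, 1.0)
            # acceptance depends only on the likelihood phi
            if rng.random() < min(1.0, math.exp(phi(u) - phi(v))):
                u = v
            samples.append(u)
        return samples

    # toy likelihood pulling the N(0, 1) prior toward 1; posterior is N(0.5, 0.5)
    samples = pcn_chain(lambda u: 0.5 * (u - 1.0) ** 2, beta=0.5, n_steps=20000)
    ```

    The geometric variants discussed in the abstract replace the isotropic move with one informed by the posterior geometry on a data-informed subspace, while retaining pCN-like moves on the complement.
    
    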

  11. A cost-effective strategy for nonoscillatory convection without clipping

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1990-01-01

    Clipping of narrow extrema and distortion of smooth profiles is a well-known problem associated with so-called high resolution nonoscillatory convection schemes. A strategy is presented for accurately simulating highly convective flows containing discontinuities such as density fronts or shock waves, without distorting smooth profiles or clipping narrow local extrema. The convection algorithm is based on non-artificially diffusive third-order upwinding in smooth regions, with automatic adaptive stencil expansion to (in principle, arbitrarily) higher order upwinding locally, in regions of rapidly changing gradients. This is highly cost effective because the wider stencil is used only where needed: in isolated narrow regions. A recently developed universal limiter assures sharp monotonic resolution of discontinuities without introducing artificial diffusion or numerical compression. An adaptive discriminator is constructed to distinguish between spurious overshoots and physical peaks; this automatically relaxes the limiter near local turning points, thereby avoiding loss of resolution in narrow extrema. Examples are given for one-dimensional pure convection of scalar profiles at constant velocity.
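    The base scheme in this family is third-order upwind-biased interpolation of cell-face values in a conservative flux form. A minimal sketch of one explicit step for 1D linear advection with QUICK-style faces (illustrative only; the paper's adaptive stencil expansion, universal limiter, and discriminator are deliberately omitted):

    ```python
    import math

    def quick_step(u, cfl):
        """One conservative explicit step of linear advection (velocity > 0)
        on a periodic grid, using QUICK third-order upwind-biased face
        values: u_face = (6*u_up + 3*u_down - u_far_up) / 8."""
        n = len(u)
        def face(i):
            # interpolated value at the face between cells i and i+1
            return (6.0 * u[i % n] + 3.0 * u[(i + 1) % n] - u[(i - 1) % n]) / 8.0
        # flux-difference update; the telescoping sum conserves the total
        return [u[i] - cfl * (face(i) - face(i - 1)) for i in range(n)]

    # advect a smooth profile a few steps at a small CFL number
    u = [math.sin(2.0 * math.pi * i / 50.0) for i in range(50)]
    total0 = sum(u)
    for _ in range(20):
        u = quick_step(u, cfl=0.1)
    ```

    The conservative flux form is what makes the later limiting machinery possible: the limiter acts on face values, so monotonicity can be enforced without destroying conservation.
    
    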

  12. System for decision analysis support on complex waste management issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shropshire, D.E.

    1997-10-01

    A software system called the Waste Flow Analysis has been developed and applied to complex environmental management processes for the United States Department of Energy (US DOE). The system can evaluate proposed methods of waste retrieval, treatment, storage, transportation, and disposal. Analysts can evaluate various scenarios to see the impacts on waste flows and schedules, costs, and health and safety risks. Decision analysis capabilities have been integrated into the system to help identify preferred alternatives based on specified objectives: objectives may be to maximize the waste moved to final disposition during a given time period, minimize health risks, minimize costs, or combinations of objectives. The decision analysis capabilities can support evaluation of large and complex problems rapidly, and under conditions of variable uncertainty. The system is being used to evaluate environmental management strategies to safely disposition wastes in the next ten years and reduce the environmental legacy resulting from nuclear material production over the past forty years.

  13. Urban Mining of E-Waste is Becoming More Cost-Effective Than Virgin Mining.

    PubMed

    Zeng, Xianlai; Mathews, John A; Li, Jinhui

    2018-04-17

    Stocks of virgin-mined materials utilized in linear economic flows continue to present enormous challenges. E-waste is one of the fastest growing waste streams, and threatens to grow into a global problem of unmanageable proportions. An effective form of management of resource recycling and environmental improvement is available, in the form of extraction and purification of precious metals taken from waste streams, in a process known as urban mining. In this work, we demonstrate utilizing real cost data from e-waste processors in China that ingots of pure copper and gold could be recovered from e-waste streams at costs that are comparable to those encountered in virgin mining of ores. Our results are confined to the cases of copper and gold extracted and processed from e-waste streams made up of recycled TV sets, but these results indicate a trend and potential if applied across a broader range of e-waste sources and metals extracted. If these results can be extended to other metals and countries, they promise to have positive impact on waste disposal and mining activities globally, as the circular economy comes to displace linear economic pathways.

  14. Analysis of a combined heating and cooling system model under different operating strategies

    NASA Astrophysics Data System (ADS)

    Dzierzgowski, Mieczysław; Zwierzchowski, Ryszard

    2017-11-01

    The paper presents an analysis of a combined heating and cooling system model under different operating strategies. Cooling demand for air conditioning purposes has grown steadily in Poland since the early 1990s. The main clients are large office buildings and shopping malls in downtown locations. Increased demand for heat in the summer would mitigate a number of problems regarding District Heating System (DHS) operation at minimum power, affecting the average annual price of heat (in summertime the share of costs related to transport losses is a strong cost factor). In the paper, computer simulations were performed for different supply network water temperatures, assuming as input real changes in the parameters of the DHS (heat demand, flow rates, etc.). On the basis of calculations and taking into account investment costs of the Absorption Refrigeration System (ARS) and the Thermal Energy Storage (TES) system, an optimal capacity of the TES system was proposed to ensure smooth and efficient operation of the District Heating Plant (DHP). Application of ARS with the TES system in the DHS in question increases net profit by 19.4%, reducing the cooling price for consumers by 40%.

  15. Low cost hydrogen/novel membrane technology for hydrogen separation from synthesis gas. Task 1, Literature survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-02-01

    To make the coal-to-hydrogen route economically attractive, improvements are being sought in each step of the process: coal gasification, water-carbon monoxide shift reaction, and hydrogen separation. This report addresses the use of membranes in the hydrogen separation step. The separation of hydrogen from synthesis gas is a major cost element in the manufacture of hydrogen from coal. Separation by membranes is an attractive, new, and still largely unexplored approach to the problem. Membrane processes are inherently simple and efficient and often have lower capital and operating costs than conventional processes. In this report current and future trends in hydrogen production and use are first summarized. Methods of producing hydrogen from coal are then discussed, with particular emphasis on the Texaco entrained flow gasifier and on current methods of separating hydrogen from this gas stream. The potential for membrane separations in the process is then examined. In particular, the use of membranes for H2/CO2, H2/CO, and H2/N2 separations is discussed. 43 refs., 14 figs., 6 tabs.

  16. New computer program solves wide variety of heat flow problems

    NASA Technical Reports Server (NTRS)

    Almond, J. C.

    1966-01-01

    Boeing Engineering Thermal Analyzer /BETA/ computer program uses numerical methods to provide accurate heat transfer solutions to a wide variety of heat flow problems. The program solves steady-state and transient problems in almost any situation that can be represented by a resistance-capacitance network.
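    The resistance-capacitance network representation mentioned above maps directly onto a small numerical model: lumped nodes with heat capacitance exchange heat through thermal resistances. A minimal transient sketch with explicit time stepping (illustrative of the modeling idea only, not the BETA program's actual solution method):

    ```python
    def rc_step(T, C, resistors, dt):
        """One explicit Euler step of a lumped resistance-capacitance
        thermal network. T[i]: node temperature, C[i]: node heat
        capacitance, resistors: list of (i, j, R) links."""
        q = [0.0] * len(T)
        for i, j, R in resistors:
            flow = (T[i] - T[j]) / R   # heat flow from node i to node j
            q[i] -= flow
            q[j] += flow
        return [t + dt * qi / c for t, qi, c in zip(T, q, C)]

    # two-node example: temperatures relax toward a common equilibrium
    T = [100.0, 0.0]
    for _ in range(1000):
        T = rc_step(T, C=[1.0, 1.0], resistors=[(0, 1, 5.0)], dt=0.1)
    # with equal capacitances the equilibrium is the mean, 50 degrees
    ```

    Steady-state problems correspond to the fixed point of the same network (all net nodal heat flows zero), which is why one program can handle both regimes.
    
    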

  17. Financial cost of social exclusion: follow up study of antisocial children into adulthood

    PubMed Central

    Scott, Stephen; Knapp, Martin; Henderson, Juliet; Maughan, Barbara

    2001-01-01

    Objectives To compare the cumulative costs of public services used through to adulthood by individuals with three levels of antisocial behaviour in childhood. Design Costs applied to data of 10-year-old children from the inner London longitudinal study selectively followed up to adulthood. Setting Inner London borough. Participants 142 individuals divided into three groups in childhood: no problems, conduct problems, and conduct disorder. Main outcome measures Costs in 1998 prices for public services (excluding private, voluntary agency, indirect, and personal costs) used over and above basic universal provision. Results By age 28, costs for individuals with conduct disorder were 10.0 times higher than for those with no problems (95% confidence interval of bootstrap ratio 3.6 to 20.9) and 3.5 times higher than for those with conduct problems (1.7 to 6.2). Mean individual total costs were £70 019 for the conduct disorder group (bootstrap mean difference from no problem group £62 898; £22 692 to £117 896) and £24 324 (£16 707; £6594 to £28 149) for the conduct problem group, compared with £7423 for the no problem group. In all groups crime incurred the greatest cost, followed by extra educational provision, foster and residential care, and state benefits; health costs were smaller. Parental social class had a relatively small effect on antisocial behaviour, and although substantial independent contributions came from being male, having a low reading age, and attending more than two primary schools, conduct disorder still predicted the greatest cost. Conclusions Antisocial behaviour in childhood is a major predictor of how much an individual will cost society. The cost is large and falls on many agencies, yet few agencies contribute to prevention, which could be cost effective.
    What is already known on this topic: Children who show substantial antisocial behaviour have poor social functioning as adults and are at high risk of social exclusion. Costs are available for particular items of public service such as receiving remedial education or appearing in court. What this study adds: Costs of antisocial behaviour incurred by individuals from childhood to adulthood were 10 times greater for those who were seriously antisocial in childhood than for those who were not. The costs fell on a wide range of agencies. Reduction of antisocial behaviour in childhood could result in large cost savings. PMID:11473907

  18. Application of Computational Fluid Dynamics to the Study of Vortex Flow Control for the Management of Inlet Distortion

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Gibb, James

    1992-01-01

    The present study demonstrates that the Reduced Navier-Stokes code RNS3D can be used very effectively to develop a vortex generator installation for the purpose of minimizing engine face circumferential distortion by controlling the development of secondary flow. The computing times required are small enough that studies such as this are feasible within an analysis-design environment with all its constraints of time and cost. This research study also established the nature of the performance improvements that can be realized with vortex flow control, and suggests a set of aerodynamic properties (called observations) that can be used to arrive at a successful vortex generator installation design. The ultimate aim of this research is to manage inlet distortion by controlling secondary flow through arrangements of vortex generator configurations tailored to the specific aerodynamic characteristics of the inlet duct. This study also indicated that scaling between flight and typical wind tunnel test conditions is possible only within a very narrow range of generator configurations close to an optimum installation. This paper also suggests a possible law that can be used to scale generator blade height for experimental testing, but further research in this area is needed before it can be effectively applied to practical problems. Lastly, this study indicated that vortex generator installation design for inlet ducts is more complex than simply satisfying the requirement of attached flow; it must also satisfy the requirement of minimum engine face distortion.

  19. Power generation costs and ultimate thermal hydraulic power limits in hypothetical advanced designs with natural circulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duffey, R.B.; Rohatgi, U.S.

    Maximum power limits for hypothetical designs of natural circulation plants can be described analytically. The thermal hydraulic design parameters are those which limit the flow: the elevations, flow areas, and loss coefficients. We have found some simple "design" equations for the natural circulation flow-to-power ratio, and for the stability limit. The analysis of historical and available data for maximum capacity factor estimation shows 80% to be reasonable and achievable. The least cost is obtained by optimizing both hypothetical plant performance for a given output, and the plant layout and design. There is also scope to increase output and reduce cost by considering design variations of primary and secondary pressure, and by optimizing component elevations and loss coefficients. The design limits for each are set by stability and maximum flow considerations, which deserve close and careful evaluation.
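    A textbook single-loop balance illustrates why such analytic limits exist (a standard form with assumed symbols, not the report's own equations): equating the buoyancy driving head to the loop friction loss, and eliminating the coolant temperature rise through the power balance,

    ```latex
    \rho g \beta \,\Delta T\, H \;=\; \frac{K \dot m^{2}}{2 \rho A^{2}},
    \qquad
    \Delta T \;=\; \frac{Q}{\dot m\, c_p}
    \quad\Longrightarrow\quad
    \dot m \;=\; \left( \frac{2 \rho^{2} A^{2} g \beta H}{K\, c_p}\; Q \right)^{1/3}
    ```

    Here H is the thermal-centre elevation difference, A the flow area, K the total loss coefficient, and β the thermal expansion coefficient. The flow rate grows only as Q^{1/3}, so the flow-to-power ratio falls as Q^{-2/3}, which is why elevations, flow areas, and loss coefficients are the parameters that set the maximum power.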

  20. Managing runoff and flow pathways in a small rural catchment to reduce flood risk with other multi-purpose benefits

    NASA Astrophysics Data System (ADS)

    Wilkinson, Mark; Welton, Phil; Kerr, Peter; Quinn, Paul; Jonczyk, Jennine

    2010-05-01

    From 2000 to 2009 there was a high number of flood events throughout Northern Europe. Meanwhile, there is a demand for land on which to construct homes and businesses, which is encroaching on land that is prone to flooding. Flood defences usually protect us from this hazard; however, the severity of floods and this demand for land have increased the number of homes flooded in the past ten years. Public spending on flood defences can only go so far, and it targets large populations first. Small villages and communities, where in many cases conventional flood defences are not cost effective, tend to wait longer for flood mitigation strategies. The Belford Burn (Northumberland, UK) catchment is a small rural catchment that drains an area of 6 km2 and flows through the village of Belford. There is a history of flooding in Belford, with records of flood events dating back to 1877. Conventional flood defences are not suitable for this catchment, as it failed the Environment Agency (EA) cost-benefit criteria for support. There was a desire by the local EA Flood Levy Team and the Northumbria Regional Flood Defence Committee to deliver an alternative catchment-based solution to the problem. The EA North East Flood Levy team and Newcastle University have created a partnership to address the flood problem using soft-engineered runoff management features. Farm Integrated Runoff Management (FIRM) plans manage flow paths directly by storing, slowing, and filtering runoff at source on farms. The features are multipurpose, addressing water quality, trapping sediment, creating new habitats, and storing and attenuating flood flow. Background rainfall and stream stage data have been collected since November 2007. Work on the first mitigation features commenced in July 2008. Since that date five flood events have occurred in the catchment. Two of these flood events caused widespread damage in other areas of the county.
However, in Belford only two houses were flooded. Data from the catchment and mitigation features showed that the defence measures resulted in an increase in travel time of the peak and attenuated high flows which would have usually travelled quickly down the channel to the village. For example, the pilot feature appears to have increased the travel time of a flood peak at the top of the catchment from 20 minutes to 35 minutes over a 1 km stretch of channel. There are currently ten active mitigation features present in the catchment. More features are planned for construction this year. Early data from the catchment indicates that the runoff attenuation features are having an impact on reducing flood flows in the channel and also slowing down the flood peak. At the same time the multi-purpose aspects of the features are apparent.
