A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon
1989-01-01
The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.
Domain decomposition algorithms and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chan, Tony F.
1988-01-01
Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined.
Domain decomposition algorithms and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chan, Tony F.
1988-01-01
In the past several years, domain decomposition has been a very popular topic, partly motivated by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface equations, are described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
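The interface operator discussed in the abstracts above is the Schur complement of the subdomain blocks. As a minimal illustration (our own sketch, not code from the paper), the following forms and solves the interface system directly for a 1-D Poisson problem split into two subdomains; practical domain decomposition methods never form the Schur complement explicitly, but instead precondition it, as surveyed above.

```python
import numpy as np

def poisson_matrix(n, h):
    # Standard 3-point finite-difference Laplacian on n interior points.
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def schur_interface_solve(n_sub):
    # Two subdomains of n_sub interior points joined by one interface point.
    n = 2 * n_sub + 1
    h = 1.0 / (n + 1)
    A = poisson_matrix(n, h)
    f = np.ones(n)
    # Index sets: subdomain interiors (i1, i2) and the interface (ig).
    i1 = np.arange(0, n_sub)
    i2 = np.arange(n_sub + 1, n)
    ig = np.array([n_sub])
    ii = np.r_[i1, i2]
    AII = A[np.ix_(ii, ii)]
    AIG = A[np.ix_(ii, ig)]
    AGI = A[np.ix_(ig, ii)]
    AGG = A[np.ix_(ig, ig)]
    fI, fG = f[ii], f[ig]
    # Schur complement on the interface: S = AGG - AGI AII^{-1} AIG.
    S = AGG - AGI @ np.linalg.solve(AII, AIG)
    g = fG - AGI @ np.linalg.solve(AII, fI)
    xG = np.linalg.solve(S, g)                 # interface unknowns
    xI = np.linalg.solve(AII, fI - AIG @ xG)   # independent subdomain solves
    x = np.empty(n)
    x[ii] = xI
    x[ig] = xG
    return A, f, x
```

The interior solves in the last step decouple by subdomain, which is where the parallelism of these methods comes from.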
Computational Fluid Dynamics. [numerical methods and algorithm development]
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are presented, along with examples of results obtained with the most recent algorithm developments.
A computational fluid dynamics algorithm on a massively parallel computer
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon
1989-01-01
The implementation and performance of a finite-difference algorithm for the compressible Navier-Stokes equations in two or three dimensions on the Connection Machine are described. This machine is a single-instruction multiple-data machine with up to 65536 physical processors. The implicit portion of the algorithm is of particular interest. Running times and megaflop rates are given for two- and three-dimensional problems. Included are comparisons with standard codes on a Cray X-MP/48.
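The abstracts do not name the nonstandard linear-system algorithm, but implicit finite-difference schemes typically yield banded (often tridiagonal) systems, and cyclic reduction is the classic data-parallel alternative to sequential Gaussian elimination on SIMD machines of this kind. A hedged sketch (serial Python; on a SIMD machine each reduction level would be one parallel sweep):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].

    The size n must equal 2**k - 1.  Every elimination level is a fully
    data-parallel sweep, which is what made the method attractive on
    massively parallel SIMD hardware.
    """
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)
    a[0] = 0.0
    c[-1] = 0.0
    stride = 1
    while 2 * stride <= n:
        # Eliminate the unknowns halfway between the surviving ones.
        for i in range(2 * stride - 1, n, 2 * stride):
            lo, hi = i - stride, i + stride
            alpha = a[i] / b[lo]
            a[i] = -alpha * a[lo]
            b[i] -= alpha * c[lo]
            d[i] -= alpha * d[lo]
            if hi < n:
                beta = c[i] / b[hi]
                c[i] = -beta * c[hi]
                b[i] -= beta * a[hi]
                d[i] -= beta * d[hi]
            else:
                c[i] = 0.0
        stride *= 2
    # Back-substitution from the single central unknown outward.
    x = np.zeros(n)
    while stride >= 1:
        for i in range(stride - 1, n, 2 * stride):
            xi = d[i]
            if i - stride >= 0:
                xi -= a[i] * x[i - stride]
            if i + stride < n:
                xi -= c[i] * x[i + stride]
            x[i] = xi / b[i]
        stride //= 2
    return x
```

Cyclic reduction does more arithmetic than the Thomas algorithm but completes in O(log n) parallel steps instead of O(n) sequential ones.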
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
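For orientation, the basic continuation idea (simpler than either the predictor-corrector or the monolithic algorithms studied in the thesis) can be sketched with the convex homotopy H(x, λ) = λF(x) + (1 − λ)(x − x₀): at λ = 0 the solution is the trivial x₀, and λ is marched to 1 with a Newton corrector at each step, warm-started from the previous solution. The specific F, J, and step counts below are illustrative assumptions.

```python
import numpy as np

def homotopy_solve(F, J, x0, n_steps=20, newton_iters=8):
    # Convex homotopy H(x, lam) = lam*F(x) + (1 - lam)*(x - x0):
    # trivially solved at lam = 0, and equal to F(x) = 0 at lam = 1.
    x = x0.astype(float).copy()
    for lam in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        for _ in range(newton_iters):   # corrector: Newton on H(., lam)
            H = lam * F(x) + (1 - lam) * (x - x0)
            dH = lam * J(x) + (1 - lam) * np.eye(len(x))
            x -= np.linalg.solve(dH, H)
    return x
```

The monolithic algorithms of the thesis avoid exactly this kind of over-solving in the inner corrector loop by folding prediction and correction into one update.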
NASA Technical Reports Server (NTRS)
Weeks, Cindy Lou
1986-01-01
Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.
Research in computational fluid dynamics and analysis of algorithms
NASA Technical Reports Server (NTRS)
Gottlieb, David
1992-01-01
by Carpenter (from the Fluid Mechanics Division) and Gottlieb gave analytic conditions for stability as well as asymptotic stability. This has been incorporated into the code in the form of stable boundary conditions. The effects of cylinder rotation have been studied; the results differ from the known theoretical results, and we are in the middle of analyzing them. A detailed analysis of the effect of heating the cylinder on the shedding frequency has also been carried out using the above schemes. It has been found that the shedding frequency decreases when the wire is heated. Experimental work is being carried out to confirm this result.
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily
2016-02-01
The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical schemes, which is an important condition for increasing the efficiency of the algorithms developed by numerical procedures with natural parallelism. The paper examines the main objects and operations that allow the user to manage computational experiments and monitor the status of the computation process. Special attention is given to a) realization of tensor representations of the numerical schemes for direct simulation; b) realization of the representation of large particles of a continuous medium motion in two coordinate systems (global and mobile); c) computing operations in the projections of coordinate systems, and direct and inverse transformations between these systems. Particular attention is paid to the use of hardware and software of modern computer systems.
Williams, P.T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
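The θ-implicit time integration mentioned in the abstract is standard material (not specific to the CCM) and can be illustrated on a linear model problem du/dt = Au: one step solves (I − θΔtA)uⁿ⁺¹ = (I + (1 − θ)ΔtA)uⁿ, with θ = 0 giving explicit Euler, θ = 1/2 Crank-Nicolson, and θ = 1 backward Euler.

```python
import numpy as np

def theta_step(A, u, dt, theta):
    # One step of the theta-implicit scheme for du/dt = A u:
    #   (I - theta*dt*A) u^{n+1} = (I + (1 - theta)*dt*A) u^n
    # theta = 0: explicit Euler; 1/2: Crank-Nicolson; 1: backward Euler.
    n = len(u)
    I = np.eye(n)
    return np.linalg.solve(I - theta * dt * A, (I + (1 - theta) * dt * A) @ u)

# Model problem: 1-D heat equation on 4 interior points, Dirichlet BCs.
n, h = 4, 1.0 / 5
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
u = np.sin(np.pi * np.arange(1, n + 1) * h)   # smooth initial profile
for _ in range(50):
    u = theta_step(A, u, dt=0.01, theta=0.5)  # Crank-Nicolson march
```

For θ ≥ 1/2 the scheme is unconditionally stable on this diffusion problem, which is why implicit members of the family dominate in stiff incompressible-flow settings.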
Flowfield-Dependent Mixed Explicit-Implicit (FDMEI) Algorithm for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Garcia, S. M.; Chung, T. J.
1997-01-01
Despite significant achievements in computational fluid dynamics, there still remain many fluid flow phenomena not well understood. For example, the prediction of temperature distributions is inaccurate when temperature gradients are high, particularly in shock wave turbulent boundary layer interactions close to the wall. Complexities of fluid flow phenomena include transition to turbulence, relaminarization, separated flows, and transition between viscous and inviscid, incompressible and compressible flows, among others, in all speed regimes. The purpose of this paper is to introduce a new approach, called the Flowfield-Dependent Mixed Explicit-Implicit (FDMEI) method, in an attempt to resolve these difficult issues in computational fluid dynamics (CFD). In this process, a total of six implicitness parameters characteristic of the current flowfield are introduced. They are calculated from the current flowfield or from changes of Mach numbers, Reynolds numbers, Peclet numbers, and Damkoehler numbers (if reacting) at each nodal point and time step. This implies that every nodal point or element is provided with a different or unique numerical scheme according to its current flowfield situation, whether compressible, incompressible, viscous, inviscid, laminar, turbulent, reacting, or nonreacting. In this procedure, discontinuities or fluctuations of any variables between adjacent nodal points are determined accurately. If these implicitness parameters are fixed to certain numbers instead of being calculated from the flowfield information, then practically all currently available schemes of finite differences or finite elements arise as special cases. Some benchmark problems presented in this paper will show the validity, accuracy, and efficiency of the proposed methodology.
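The paper derives its six implicitness parameters from specific flowfield formulas; the toy functions below are only a schematic stand-in showing the shape of such a computation. The relative-change measure, the normalization, and the clipping are all illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def implicitness(q, eps=1e-12):
    # Schematic only: measure the local relative change of a flow quantity
    # (e.g. the Mach number) between adjacent nodes and normalize it into
    # [0, 1], so nodes near sharp gradients lean toward implicit treatment.
    dq = np.abs(np.diff(q, append=q[-1]))
    s = dq / (np.abs(q) + eps)
    return np.clip(s / (s.max() + eps), 0.0, 1.0)

def blended_update(u_explicit, u_implicit, s):
    # Each node gets its own explicit/implicit mix according to s.
    return (1.0 - s) * u_explicit + s * u_implicit
```

The point of the sketch is the per-node blending: a single solver smoothly spans explicit and implicit behavior across the domain, which is the idea the abstract describes.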
Computer Modeling of Sand Transport on Mars Using a Compartmentalized Fluids Algorithm (CFA)
NASA Technical Reports Server (NTRS)
Marshall, J.; Stratton, D.
1999-01-01
of sand comminution on Mars. A multiple-grain transport model using just the equations of grain motion describing lift and drag is impossible to develop owing to stochastic effects -- the very effects we wish to model. Also, unless we were to employ supercomputing techniques and extremely complex computer codes that could deal with millions of grains simultaneously, it would be difficult to model grain transport if we attempted to consider every grain in motion. No existing computer models were found that satisfactorily used the equations of motion to arrive at transport flux numbers for the different populations of saltation and reptation. Modeling all the grains in a transport system was an intractable problem within our resources, and thus we developed what we believe to be a new modeling approach to simulating grain transport. The CFA deals with grain populations, but considers them to belong to various compartmentalized fluid units in the boundary layer. In this way, the model circumvents the multigrain problem by dealing primarily with the consequences of grain transport -- momentum transfer between air and grains, which is the physical essence of a dynamic grain-fluid mixture. We thus chose to model the aeolian transport process as a superposition of fluids. These fluids include the air as well as particle populations of various properties. The prime property distinguishing these fluids is upward and downward grain motion. In a normal saltation trajectory, a grain's downwind velocity increases with time, so a rising grain will have a smaller downwind velocity than a falling grain. Because of this disparity in rising and falling grain properties, it seemed appropriate to track these as two separate grain populations within the same physical space. The air itself can be considered a separate fluid superimposed within and interacting with the various grain-cloud "fluids". Additional information is contained in the original.
1973-07-01
Computer Algorithms for the Solution of the Shallow-Fluid Equations as a Means of Computing Terrain Influences on Wind Fields (Appendices A, B, C, and D). Cramer (H. E.) Co., Inc., Salt Lake City, UT. Report AD-A129 066.
Physics-Based Computational Algorithm for the Multi-Fluid Plasma Model
2014-06-30
thermodynamic equilibrium. 1.2 Introduction to Fluid Plasma Models: Taking moments of the Boltzmann equation, Eq. (1), provides equations that govern the...framework called WARPX (Washington Approximate Riemann Plasma), which uses C++ object-oriented programming and other modern software techniques to sim...stability in a shearing box with zero net flux. Astronomy and Astrophysics, 476(3):1113-1122, 2007. [5] V. A. Izzo, D. G. Whyte, R. S. Granetz, P. B
Finite element computational fluid mechanics
NASA Technical Reports Server (NTRS)
Baker, A. J.
1983-01-01
Finite element analysis as applied to the broad spectrum of computational fluid mechanics is analyzed. The finite element solution methodology is derived, developed, and applied directly to the differential equation systems governing classes of problems in fluid mechanics. The heat conduction equation is used to reveal the essence and elegance of finite element theory, including higher order accuracy and convergence. The algorithm is extended to the pervasive nonlinearity of the Navier-Stokes equations. A specific fluid mechanics problem class is analyzed with an even mix of theory and applications, including turbulence closure and the solution of turbulent flows.
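The pedagogical path the book describes, using heat conduction to expose the essentials of finite element theory, can be miniaturized as follows: linear elements for −u″ = f on (0, 1) with u(0) = u(1) = 0. A well-known property of this 1-D model (standard textbook material, not taken from the book) is that for constant f the Galerkin solution is exact at the nodes.

```python
import numpy as np

def fem_poisson_1d(n_el, f=1.0):
    # Linear finite elements for -u'' = f on (0, 1), u(0) = u(1) = 0.
    h = 1.0 / n_el
    n = n_el - 1                       # number of interior nodes
    # Assembled stiffness matrix: element matrix (1/h)[[1,-1],[-1,1]]
    # summed over elements gives the familiar tridiagonal form.
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    b = f * h * np.ones(n)             # consistent load for constant f
    u = np.linalg.solve(K, b)
    x = np.linspace(h, 1 - h, n)       # interior node coordinates
    return x, u
```

With f = 1 the exact solution is u(x) = x(1 − x)/2, and the computed nodal values match it to rounding error, which makes this a convenient first verification problem.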
Computational astrophysical fluid dynamics
NASA Technical Reports Server (NTRS)
Norman, Michael L.; Clarke, David A.; Stone, James M.
1991-01-01
The field of astrophysical fluid dynamics (AFD) is described as an emerging discipline which derives historically from both the theory of stellar evolution and space plasma physics. The fundamental physical assumption behind AFD is that fluid equations of motion accurately describe the evolution of plasmas on scales that are large in comparison with particle interaction length scales. Particular attention is given to purely fluid models of large-scale astrophysical plasmas. The role of computer simulation in AFD research is also highlighted and a suite of general-purpose application codes for AFD research is discussed. The codes are called ZEUS-2D and ZEUS-3D and solve the equations of AFD in two and three dimensions, respectively, in several coordinate geometries for general initial and boundary conditions. The topics of bipolar outflows from protostars, galactic superbubbles and supershells, and extragalactic radio sources are addressed.
NASA Technical Reports Server (NTRS)
Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)
1993-01-01
The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.
General Transient Fluid Flow Algorithm
Amsden, A. A.; Ruppel, H. M.; Hirt, C. W.
1992-03-12
SALE2D calculates two-dimensional fluid flows at all speeds, from the incompressible limit to highly supersonic. An implicit treatment of the pressure calculation similar to that in the Implicit Continuous-fluid Eulerian (ICE) technique provides this flow speed flexibility. In addition, the computing mesh may move with the fluid in a typical Lagrangian fashion, be held fixed in an Eulerian manner, or move in some arbitrarily specified way to provide a continuous rezoning capability. This latitude results from use of an Arbitrary Lagrangian-Eulerian (ALE) treatment of the mesh. The partial differential equations solved are the Navier-Stokes equations and the mass and internal energy equations. The fluid pressure is determined from an equation of state and supplemented with an artificial viscous pressure for the computation of shock waves. The computing mesh consists of a two-dimensional network of quadrilateral cells for either cylindrical or Cartesian coordinates, and a variety of user-selectable boundary conditions are provided in the program.
Computation of two-fluid, flowing equilibria
NASA Astrophysics Data System (ADS)
Steinhauer, Loren; Kanki, Takashi; Ishida, Akio
2006-10-01
Equilibria of flowing two-fluid plasmas are computed for realistic compact-toroid and spherical-tokamak parameters. In these examples the two-fluid parameter ε (ratio of ion inertial length to overall plasma size) is small, ε ≈ 0.03-0.2, but hardly negligible. The algorithm is based on the nearby-fluids model [1] which avoids a singularity that otherwise occurs for small ε. These representative equilibria exhibit significant flows, both toroidal and poloidal. Further, the flow patterns display notable flow shear. The importance of two-fluid effects is demonstrated by comparing with analogous equilibria (e.g. fixed toroidal and poloidal current) for a static plasma (Grad-Shafranov solution) and a flowing single-fluid plasma. Differences between the two-fluid, single-fluid, and static equilibria are highlighted: in particular with respect to safety factor profile, flow patterns, and electrical potential. These equilibria are computed using an iterative algorithm: it employs a successive-over-relaxation procedure for updating the magnetic flux function and a Newton-Raphson procedure for updating the density. The algorithm is coded in Visual Basic in an Excel platform on a personal computer. The computational time is essentially instantaneous (seconds). [1] L.C. Steinhauer and A. Ishida, Phys. Plasmas 13, 052513 (2006).
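The successive-over-relaxation step of the iteration described above can be sketched on a model Poisson problem (the actual solver updates the magnetic flux function of the two-fluid equilibrium, not a plain Laplacian; this Python sketch is only an illustration of the SOR sweep itself).

```python
import numpy as np

def sor_poisson(rhs, omega=1.7, tol=1e-8, max_sweeps=5000):
    # Successive over-relaxation for the 5-point discretization of
    # -Laplacian(u) = rhs on the unit square, homogeneous Dirichlet BCs.
    n = rhs.shape[0]
    h = 1.0 / (n + 1)
    u = np.zeros((n + 2, n + 2))        # includes the boundary ring
    for _ in range(max_sweeps):
        max_change = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                # Gauss-Seidel target value, over-relaxed by omega.
                new = (1 - omega) * u[i, j] + omega * 0.25 * (
                    u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
                    + h * h * rhs[i - 1, j - 1])
                max_change = max(max_change, abs(new - u[i, j]))
                u[i, j] = new
        if max_change < tol:            # converged sweep-to-sweep
            break
    return u[1:-1, 1:-1]
```

With a well-chosen ω the iteration count drops dramatically relative to Gauss-Seidel, which is why SOR remained attractive for small equilibrium solvers run on personal computers.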
Computational fluid dynamic applications
Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.
2000-04-03
The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system. Numerical methods are selected to calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag and heat and mass transfer, etc., are described in mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial scale flow systems.
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1993-01-01
Two papers are included in this progress report. In the first, the compressible Navier-Stokes equations have been used to compute leading edge receptivity of boundary layers over parabolic cylinders. Natural receptivity at the leading edge was simulated and Tollmien-Schlichting waves were observed to develop in response to an acoustic disturbance, applied through the farfield boundary conditions. To facilitate comparison with previous work, all computations were carried out at a free stream Mach number of 0.3. The spatial and temporal behavior of the flowfields are calculated through the use of finite volume algorithms and Runge-Kutta integration. The results are dominated by strong decay of the Tollmien-Schlichting wave due to the presence of the mean flow favorable pressure gradient. The effects of numerical dissipation, forcing frequency, and nose radius are studied. The Strouhal number is shown to have the greatest effect on the unsteady results. In the second paper, a transition model for low-speed flows, previously developed by Young et al., which incorporates first-mode (Tollmien-Schlichting) disturbance information from linear stability theory has been extended to high-speed flow by incorporating the effects of second mode disturbances. The transition model is incorporated into a Reynolds-averaged Navier-Stokes solver with a one-equation turbulence model. Results using a variable turbulent Prandtl number approach demonstrate that the current model accurately reproduces available experimental data for first and second-mode dominated transitional flows. The performance of the present model shows significant improvement over previous transition modeling attempts.
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σ_z^(1/4), as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
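Of the grid-based techniques surveyed above, the semi-Lagrangian advection step popularized by Stam's "stable fluids" method is the workhorse of animation solvers because it remains stable at the large time steps animation demands. A minimal sketch (uniform grid, clamped boundaries; grid spacing absorbed into the velocity units for brevity):

```python
import numpy as np

def advect(q, vx, vy, dt):
    # Semi-Lagrangian advection: trace each grid point backward along the
    # velocity field and sample the old field there with bilinear
    # interpolation.  Unconditionally stable, at the cost of smoothing.
    n, m = q.shape
    i, j = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    # Back-traced departure points, clamped to the grid.
    x = np.clip(i - dt * vx, 0, n - 1)
    y = np.clip(j - dt * vy, 0, m - 1)
    i0, j0 = x.astype(int), y.astype(int)
    i1, j1 = np.minimum(i0 + 1, n - 1), np.minimum(j0 + 1, m - 1)
    s, t = x - i0, y - j0
    # Bilinear interpolation of the previous field at the departure points.
    return ((1 - s) * (1 - t) * q[i0, j0] + s * (1 - t) * q[i1, j0]
            + (1 - s) * t * q[i0, j1] + s * t * q[i1, j1])
```

The numerical smoothing this introduces is exactly what the turbulence-detail and particle-injection techniques cited in the survey are designed to counteract.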
Computational Fluid Dynamics Symposium on Aeropropulsion
NASA Technical Reports Server (NTRS)
1991-01-01
Recognizing the considerable advances that have been made in computational fluid dynamics, the Internal Fluid Mechanics Division of NASA Lewis Research Center sponsored this symposium with the objective of providing a forum for exchanging information regarding recent developments in numerical methods, physical and chemical modeling, and applications. This conference publication is a compilation of 4 invited and 34 contributed papers presented in six sessions: algorithms one and two, turbomachinery, turbulence, components application, and combustors. Topics include numerical methods, grid generation, chemically reacting flows, turbulence modeling, inlets, nozzles, and unsteady flows.
Grammar Rules as Computer Algorithms.
ERIC Educational Resources Information Center
Rieber, Lloyd
1992-01-01
One college writing teacher engaged his class in the revision of a computer program to check grammar, focusing on improvement of the algorithms for identifying inappropriate uses of the passive voice. Process and problems of constructing new algorithms, effects on student writing, and other algorithm applications are discussed. (MSE)
A Generalized Fluid Formulation for Turbomachinery Computations
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Sankaran, Venkateswaran; Dorney, Daniel J.; Sondak, Douglas L.
2003-01-01
A generalized formulation of the equations of motion of an arbitrary fluid are developed for the purpose of defining a common iterative algorithm for computational procedures. The method makes use of the equations of motion in conservation form with separate pseudo-time derivatives used for defining the numerical flux for a Riemann solver and the convergence algorithm. The partial differential equations are complemented by an thermodynamic and caloric equations of state of a complexity necessary for describing the fluid. Representative solutions with a new code based on this general equation formulation are provided for three turbomachinery problems. The first uses air as a working fluid while the second uses gaseous oxygen in a regime in which real gas effects are of little importance. These nearly perfect gas computations provide a basis for comparing with existing perfect gas code computations. The third case is for the flow of liquid oxygen through a turbine where real gas effects are significant. Vortex shedding predictions with the LOX formulations reduce the discrepancy between perfect gas computations and experiment by approximately an order of magnitude, thereby verifying the real gas formulation as well as providing an effective case where its capabilities are necessary.
Fibonacci Numbers and Computer Algorithms.
ERIC Educational Resources Information Center
Atkins, John; Geist, Robert
1987-01-01
The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)
An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.
Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan
2015-11-01
The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic regions (the fluid flow). Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with the state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without image quality loss. Therefore, the temporal resolution can be substantially increased, and thus fluid flow experiments with faster dynamics can be performed.
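The second prior, a piecewise-constant attenuation curve per dynamic voxel, can be illustrated with a single-step least-squares fit: one constant level before the fluid front reaches the voxel and another after. This is a simplification for illustration; the paper's temporal model and its coupling into the reconstruction are more general.

```python
import numpy as np

def fit_step(curve):
    # Fit a single-step piecewise-constant model to a voxel's attenuation
    # time series: level `a` before the change point k, level `b` after.
    # Returns (change_index, level_before, level_after) minimizing the SSE.
    n = len(curve)
    best = (1, np.inf)
    for k in range(1, n):
        a, b = curve[:k].mean(), curve[k:].mean()
        sse = ((curve[:k] - a) ** 2).sum() + ((curve[k:] - b) ** 2).sum()
        if sse < best[1]:
            best = (k, sse)
    k = best[0]
    return k, curve[:k].mean(), curve[k:].mean()
```

The change index plays the role of the fluid-front arrival time at that voxel; constraining each voxel to such a curve is what allows reconstruction from far fewer projections per rotation.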
Computational Fluid Dynamics Technology for Hypersonic Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2003-01-01
Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort, to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D, will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.
Introduction to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Date, Anil W.
2005-08-01
This is a textbook for advanced undergraduate and first-year graduate students in mechanical, aerospace, and chemical engineering. The book emphasizes understanding CFD through physical principles and examples. The author follows a consistent philosophy of control volume formulation of the fundamental laws of fluid motion and energy transfer, and introduces a novel notion of 'smoothing pressure correction' for solution of flow equations on collocated grids within the framework of the well-known SIMPLE algorithm. The subject matter is developed by considering pure conduction/diffusion, convective transport in 2-dimensional boundary layers and in fully elliptic flow situations and phase-change problems in succession. The book includes chapters on discretization of equations for transport of mass, momentum and energy on Cartesian, structured curvilinear and unstructured meshes, solution of discretised equations, numerical grid generation and convergence enhancement. Practicing engineers will find this particularly useful for reference and for continuing education.
Associative Algorithms for Computational Creativity
ERIC Educational Resources Information Center
Varshney, Lav R.; Wang, Jun; Varshney, Kush R.
2016-01-01
Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…
Fluid dynamics parallel computer development at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Townsend, James C.; Zang, Thomas A.; Dwoyer, Douglas L.
1987-01-01
To accomplish more detailed simulations of highly complex flows, such as the transition to turbulence, fluid dynamics research requires computers much more powerful than any available today. Only parallel processing on multiple-processor computers offers hope for achieving the required effective speeds. Looking ahead to the use of these machines, the fluid dynamicist faces three issues: algorithm development for near-term parallel computers, architecture development for future computer power increases, and assessment of possible advantages of special purpose designs. Two projects at NASA Langley address these issues. Software development and algorithm exploration is being done on the FLEX/32 Parallel Processing Research Computer. New architecture features are being explored in the special purpose hardware design of the Navier-Stokes Computer. These projects are complementary and are producing promising results.
Research on Computational Fluid Dynamics and Turbulence
NASA Technical Reports Server (NTRS)
1986-01-01
Preconditioning matrices for Chebyshev derivative operators in several space dimensions; the Jacobi matrix technique in computational fluid dynamics; and Chebyshev techniques for periodic problems are discussed.
Adaptivity and smart algorithms for fluid-structure interaction
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley
1990-01-01
This paper reviews new approaches in CFD which have the potential for significantly increasing current capabilities of modeling complex flow phenomena and of treating difficult problems in fluid-structure interaction. These approaches are based on the notions of adaptive methods and smart algorithms, which use instantaneous measures of the quality and other features of the numerical flowfields as a basis for making changes in the structure of the computational grid and of algorithms designed to function on the grid. The application of these new techniques to several problem classes is addressed, including problems with moving boundaries, fluid-structure interaction in high-speed turbine flows, flow in domains with receding boundaries, and related problems.
Lattice Boltzmann Algorithms for Fluid Turbulence
2007-06-01
Lattice Boltzmann algorithms for the Navier-Stokes, magnetohydrodynamics (MHD), and Gross-Pitaevskii equations are ideal for parallel supercomputers because they are purely local, unlike standard computational fluid dynamics methods with their nonlocal nonlinear convective derivatives in the Navier-Stokes equations. They can accurately determine turbulent flows over non-trivial boundaries (e.g., instabilities and wakes from naval ships and aircraft) as well as intermittent turbulence. However, by
Fully explicit algorithms for fluid simulation
NASA Astrophysics Data System (ADS)
Clausen, Jonathan
2011-11-01
Computing hardware is trending towards distributed, massively parallel architectures in order to achieve high computational throughput. For example, Intrepid at Argonne uses 163,840 cores, and next generation machines, such as Sequoia at Lawrence Livermore, will use over one million cores. Harnessing the increasingly parallel nature of computational resources will require algorithms that scale efficiently on these architectures. The advent of GPU-based computation will serve to accelerate this behavior, as a single GPU contains hundreds of processor ``cores.'' Explicit algorithms avoid the communication associated with a linear solve, thus parallel scalability of these algorithms is typically high. This work will explore the efficiency and accuracy of three explicit solution methodologies for the Navier-Stokes equations: traditional artificial compressibility schemes, the lattice-Boltzmann method, and the recently proposed kinetically reduced local Navier-Stokes equations [Borok, Ansumali, and Karlin (2007)]. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
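As a minimal, hedged illustration of the first methodology named above (not the code behind the talk), an artificial-compressibility scheme advances velocity and pressure with purely explicit, nearest-neighbor updates, so no linear solve, and hence no global communication, is needed:

```python
import numpy as np

# 1D periodic artificial-compressibility sketch: pressure is relaxed in
# pseudo-time via dp/dt = -beta * du/dx instead of enforcing
# incompressibility with an (implicit) Poisson solve.
N = 64
dx, dt = 1.0 / N, 1e-3
nu, beta = 0.01, 1.0          # viscosity and artificial compressibility

x = np.arange(N) * dx
u = np.sin(2 * np.pi * x)     # initial velocity field
p = np.zeros(N)               # pressure field

def ddx(f):                   # central difference, periodic domain
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def d2dx2(f):                 # second difference, periodic domain
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx ** 2

for _ in range(100):          # fully explicit pseudo-time marching
    u = u + dt * (-u * ddx(u) - ddx(p) + nu * d2dx2(u))
    p = p + dt * (-beta * ddx(u))
```

Every update touches only a point and its immediate neighbors, which is why such schemes (like the lattice-Boltzmann method) scale well on massively parallel hardware.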
Using Computers in Fluids Engineering Education
NASA Technical Reports Server (NTRS)
Benson, Thomas J.
1998-01-01
Three approaches for using computers to improve basic fluids engineering education are presented. The use of computational fluid dynamics solutions to fundamental flow problems is discussed. The use of interactive, highly graphical software which operates on either a modern workstation or personal computer is highlighted. And finally, the development of 'textbooks' and teaching aids which are used and distributed on the World Wide Web is described. Arguments for and against this technology as applied to undergraduate education are also discussed.
Direct modeling for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Xu, Kun
2015-06-01
All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
Fluid dynamics computer programs for NERVA turbopump
NASA Technical Reports Server (NTRS)
Brunner, J. J.
1972-01-01
During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, users instructions, and listings for the majority of these programs are presented.
NASA Technical Reports Server (NTRS)
Shakib, Farzin; Hughes, Thomas J. R.
1991-01-01
A Fourier stability and accuracy analysis of the space-time Galerkin/least-squares method as applied to a time-dependent advective-diffusive model problem is presented. Two time discretizations are studied: a constant-in-time approximation and a linear-in-time approximation. Corresponding space-time predictor multi-corrector algorithms are also derived and studied. The behavior of the space-time algorithms is compared to algorithms based on semidiscrete formulations.
High-order hydrodynamic algorithms for exascale computing
Morgan, Nathaniel Ray
2016-02-05
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack the requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
QPSO-based adaptive DNA computing algorithm.
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This new approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process, (2) the adaptive algorithm is performed using QPSO for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the basic DNA computing algorithm.
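The paper's parameter-tuning loop is not reproduced here, but the QPSO update itself can be sketched (an illustrative implementation under common QPSO conventions; function and parameter names are assumptions): each particle is resampled around a local attractor, with a jump size proportional to its distance from the swarm's mean best position.

```python
import numpy as np

def qpso(f, dim=2, n_particles=20, iters=200, alpha=0.75, seed=0):
    """Minimal quantum-behaved PSO sketch minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    pbest = x.copy()                                # personal bests
    pval = np.array([f(xi) for xi in x])
    g = pbest[pval.argmin()].copy()                 # global best
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                  # mean best position
        phi = rng.random((n_particles, dim))
        p = phi * pbest + (1 - phi) * g             # local attractors
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        # quantum-behaved position update: jump around the attractor
        x = p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
        val = np.array([f(xi) for xi in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, fbest = qpso(lambda v: (v ** 2).sum())        # sphere test function
```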
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
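The program listing itself is not reproduced in this record; a minimal sketch of the half-interval (bisection) search it describes, applied to the classic cubic x^3 - 2x - 5 = 0 as an assumed example, looks like this:

```python
def half_interval_search(f, a, b, tol=1e-10):
    """Half-interval (bisection) search for a root of f on [a, b],
    assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2.0
        fm = f(m)
        if fa * fm <= 0:       # sign change in [a, m]: keep left half
            b = m
        else:                  # sign change in [m, b]: keep right half
            a, fa = m, fm
    return (a + b) / 2.0

# Root of x^3 - 2x - 5 = 0, approximately 2.0945514815
root = half_interval_search(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```

Each iteration halves the bracketing interval, so the error bound after n steps is (b - a) / 2^n, which makes the method easy to verify mathematically.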
Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects by taking this perspective. This paper is a first step towards achieving this objective by implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
Computational fluid dynamics - The coming revolution
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1982-01-01
The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.
Computational fluid dynamics uses in fluid dynamics/aerodynamics education
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1994-01-01
The field of computational fluid dynamics (CFD) has advanced to the point where it can now be used for the purpose of fluid dynamics physics education. Because of the tremendous wealth of information available from numerical simulation, certain fundamental concepts can be efficiently communicated using an interactive graphical interrogation of the appropriate numerical simulation data base. In other situations, a large amount of aerodynamic information can be communicated to the student by interactive use of simple CFD tools on a workstation or even in a personal computer environment. The emphasis in this presentation is to discuss ideas for how this process might be implemented. Specific examples, taken from previous publications, will be used to highlight the presentation.
Three-Dimensional Computational Fluid Dynamics
Haworth, D.C.; O'Rourke, P.J.; Ranganathan, R.
1998-09-01
Computational fluid dynamics (CFD) is one discipline falling under the broad heading of computer-aided engineering (CAE). CAE, together with computer-aided design (CAD) and computer-aided manufacturing (CAM), comprise a mathematical-based approach to engineering product and process design, analysis and fabrication. In this overview of CFD for the design engineer, our purposes are three-fold: (1) to define the scope of CFD and motivate its utility for engineering, (2) to provide a basic technical foundation for CFD, and (3) to convey how CFD is incorporated into engineering product and process design.
Algorithms versus architectures for computational chemistry
NASA Technical Reports Server (NTRS)
Partridge, H.; Bauschlicher, C. W., Jr.
1986-01-01
The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies for these algorithms are examined for both the Cyber 205 and the Cray X-MP. In addition, multiprocessor implementations of the algorithms are examined on the Cray X-MP and on the MIT static dataflow machine proposed by Dennis.
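As a hedged scalar sketch of the first kernel (not the benchmark code itself), a sparse symmetric matrix-vector product can exploit symmetry by storing only the upper triangle; the indirect addressing in this loop is exactly the kind of access pattern that challenges vector architectures such as the Cyber 205 and Cray X-MP.

```python
import numpy as np

def sym_spmv(n, rows, cols, vals, x):
    """y = A @ x for a sparse symmetric n x n matrix A stored as its
    upper triangle in coordinate form (rows[k] <= cols[k]).
    Each off-diagonal entry contributes to two rows of the result."""
    y = np.zeros(n)
    for i, j, a in zip(rows, cols, vals):
        y[i] += a * x[j]
        if i != j:             # symmetric counterpart A[j, i] = A[i, j]
            y[j] += a * x[i]
    return y

# Upper triangle of A = [[2, 1, 0], [1, 3, 4], [0, 4, 5]]
rows = [0, 0, 1, 1, 2]
cols = [0, 1, 1, 2, 2]
vals = [2.0, 1.0, 3.0, 4.0, 5.0]
y = sym_spmv(3, rows, cols, vals, np.array([1.0, 2.0, 3.0]))
```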
Optimal Multistage Algorithm for Adjoint Computation
Aupy, Guillaume; Herrmann, Julien; Hovland, Paul; Robert, Yves
2016-01-01
We reexamine the work of Stumm and Walther on multistage algorithms for adjoint computation. We provide an optimal algorithm for this problem when there are two levels of checkpoints, in memory and on disk. Previously, optimal algorithms for adjoint computations were known only for a single level of checkpoints with no writing and reading costs; a well-known example is the binomial checkpointing algorithm of Griewank and Walther. Stumm and Walther extended that binomial checkpointing algorithm to the case of two levels of checkpoints, but they did not provide any optimality results. We bridge the gap by designing the first optimal algorithm in this context. We experimentally compare our optimal algorithm with that of Stumm and Walther to assess the difference in performance.
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
Algorithms in Modern Mathematics and Computer Science.
1980-01-01
Algorithms in Modern Mathematics and Computer Science, by Donald E. Knuth. Stanford University, Department of Computer Science, Report No. STAN-CS-80-788, January 1980. Research sponsored by the National Science Foundation and the Office of Naval Research (contract N00014-76-C...).
Algorithm for Computing Particle/Surface Interactions
NASA Technical Reports Server (NTRS)
Hughes, David W.
2009-01-01
An algorithm has been devised for predicting the behaviors of sparsely spatially distributed particles impinging on a solid surface in a rarefied atmosphere. Under the stated conditions, prior particle-transport models in which (1) dense distributions of particles are treated as continuum fluids; or (2) sparse distributions of particles are considered to be suspended in and to diffuse through fluid streams are not valid.
Engineering Fracking Fluids with Computer Simulation
NASA Astrophysics Data System (ADS)
Shaqfeh, Eric
2015-11-01
There are no comprehensive simulation-based tools for engineering the flows of viscoelastic fluid-particle suspensions in fully three-dimensional geometries. On the other hand, the need for such a tool in engineering applications is immense. Suspensions of rigid particles in viscoelastic fluids play key roles in many energy applications. For example, in oil drilling the ``drilling mud'' is a very viscous, viscoelastic fluid designed to shear-thin during drilling, but thicken at stoppage so that the ``cuttings'' can remain suspended. In a related application known as hydraulic fracturing, suspensions of solids called ``proppant'' are used to prop open the fracture by pumping them into the well. It is well known that particle flow and settling in a viscoelastic fluid can be quite different from that observed in Newtonian fluids. First, the ``fluid particle split'' at bifurcation cracks is controlled by fluid rheology in a manner that is not understood. Second, in Newtonian fluids, the presence of an imposed shear flow in the direction perpendicular to gravity (which we term a cross or orthogonal shear flow) has no effect on the settling of a spherical particle in Stokes flow (i.e., at vanishingly small Reynolds number). By contrast, in a non-Newtonian liquid, the complex rheological properties induce a nonlinear coupling between the sedimentation and shear flow. Recent experimental data have shown that both the shear thinning and the elasticity of the suspending polymeric solutions significantly affect the fluid-particle split at bifurcations, as well as the settling rate of the solids. In the present work, we use the Immersed Boundary Method to develop computer simulations of viscoelastic flow in suspensions of spheres to study these problems. These simulations allow us to understand the detailed physical mechanisms for the remarkable physical behavior seen in practice, and actually suggest design rules for creating new fluid recipes.
Computational fluid dynamics in oil burner design
Butcher, T.A.
1997-09-01
In Computational Fluid Dynamics, the differential equations which describe flow, heat transfer, and mass transfer are approximately solved using a very laborious numerical procedure. Flows of practical interest to burner designs are always turbulent, adding to the complexity of requiring a turbulence model. This paper presents a model for burner design.
HL-20 computational fluid dynamics analysis
NASA Astrophysics Data System (ADS)
Weilmuenster, K. James; Greene, Francis A.
1993-09-01
The essential elements of a computational fluid dynamics analysis of the HL-20/personnel launch system aerothermal environment at hypersonic speeds including surface definition, grid generation, solution techniques, and visual representation of results are presented. Examples of solution technique validation through comparison with data from ground-based facilities are presented, along with results from computations at flight conditions. Computations at flight points indicate that real-gas effects have little or no effect on vehicle aerodynamics and, at these conditions, results from approximate techniques for determining surface heating are comparable with those obtained from Navier-Stokes solutions.
HL-20 computational fluid dynamics analysis
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Greene, Francis A.
1993-01-01
The essential elements of a computational fluid dynamics analysis of the HL-20/personnel launch system aerothermal environment at hypersonic speeds including surface definition, grid generation, solution techniques, and visual representation of results are presented. Examples of solution technique validation through comparison with data from ground-based facilities are presented, along with results from computations at flight conditions. Computations at flight points indicate that real-gas effects have little or no effect on vehicle aerodynamics and, at these conditions, results from approximate techniques for determining surface heating are comparable with those obtained from Navier-Stokes solutions.
Computing Algorithms for Nuffield Advanced Physics.
ERIC Educational Resources Information Center
Summers, M. K.
1978-01-01
Defines all recurrence relations used in the Nuffield course, to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)
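The course's actual listings are not reproduced in this record; as a hedged sketch of the kind of recurrence involved, a central-difference discretization of the second-order equation y'' = -y gives a three-term recurrence that a few lines of code can iterate:

```python
import math

# Recurrence from the central difference (y[n+1] - 2*y[n] + y[n-1]) / h^2 = -y[n]:
#     y[n+1] = 2*y[n] - y[n-1] - h*h*y[n]
# Exact solution with y(0) = 1, y'(0) = 0 is cos(t); this is an
# illustrative reconstruction, not the course's own listing.
h = 0.01
y_prev, y = 1.0, math.cos(h)   # starting values y(0) and y(h)
for _ in range(99):            # march from t = h to t = 1.0
    y_prev, y = y, 2 * y - y_prev - h * h * y
# y now approximates cos(1.0)
```

The same pattern, a recurrence generating y at successive time steps from the previous one or two values, covers both the first- and second-order equations mentioned in the abstract.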
High-Performance Java Codes for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
The computational science community is reluctant to write large-scale, computationally-intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
Graphics supercomputer for computational fluid dynamics research
NASA Astrophysics Data System (ADS)
Liaw, Goang S.
1994-11-01
The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.
Visualization of unsteady computational fluid dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1994-01-01
A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamics (CFD) results is presented. This environment requires a supercomputer; massively parallel processors (MPPs), and clusters of workstations acting as a single MPP (by concurrently working on the same task), provide the required computational bandwidth for CFD calculations of transient problems. Clusters of reduced instruction set computer (RISC) workstations are a recent advent based on the low cost and high performance that workstation vendors provide. With the proper software, such a cluster can act as a multiple-instruction/multiple-data (MIMD) machine. A new set of software tools is being designed specifically to address the visualization of 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00, make up the bulk of this report.
The development and evaluation of numerical algorithms for MIMD computers
NASA Technical Reports Server (NTRS)
Voigt, Robert G.
1990-01-01
Two activities were pursued under this grant. The first was a visitor program to conduct research on numerical algorithms for MIMD computers. The program is summarized in the following attachments: Attachment A - List of Researchers Supported; Attachment B - List of Reports Completed; and Attachment C - Reports. The second activity was a workshop on the Control of Fluid Dynamic Systems, held March 28-29, 1989. The workshop is summarized in the following attachments: Attachment D - Workshop Summary; and Attachment E - List of Workshop Participants.
Analysis of dissection algorithms for vector computers
NASA Technical Reports Server (NTRS)
George, A.; Poole, W. G., Jr.; Voigt, R. G.
1978-01-01
Recently two dissection algorithms (one-way and incomplete nested dissection) have been developed for solving the sparse positive definite linear systems arising from n by n grid problems. Concurrently, vector computers (such as the CDC STAR-100 and TI ASC) have been developed for large scientific applications. An analysis of the use of dissection algorithms on vector computers dictates that vectors of maximum length be utilized, thereby implying little or no dissection; on the other hand, minimizing operation counts suggests that considerable dissection be performed. In this paper we discuss the resolution of this conflict by minimizing the total time required by vectorized versions of the two algorithms.
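The conflict the abstract describes can be made concrete with a toy timing model; all numbers below are hypothetical and purely illustrative, not taken from the paper.

```python
# Toy model: a vector operation of length n costs startup + n cycles.
# Long vectors amortize the startup overhead; dissection shortens the
# vectors but reduces the total operation count.  Numbers are made up.

def total_time(num_ops, vec_len, startup=100.0):
    """Total cycles for num_ops vector operations of a given length."""
    return num_ops * (startup + vec_len)

# Little or no dissection: maximum-length vectors, but more arithmetic.
t_long = total_time(num_ops=2000, vec_len=1000)
# Considerable dissection: fewer operations, but every short vector pays
# the startup overhead.
t_dissected = total_time(num_ops=800, vec_len=40)
# The paper's resolution is to pick the dissection level that minimizes
# the total time, rather than either extreme.
```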
Spectral Methods for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Streett, C. L.; Hussaini, M. Y.
1994-01-01
As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral
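The transform method mentioned above can be sketched in a few lines: the nonlinear term is evaluated by differentiating in spectral space and multiplying pointwise in physical space, avoiding the quadratic-cost convolution sum. This is a generic illustration, not code from the works cited.

```python
import numpy as np

# Evaluate the nonlinear term u * u_x of a periodic function with the
# transform (pseudospectral) method.
N = 32
x = 2.0 * np.pi * np.arange(N) / N
u = np.sin(x)                                # periodic sample field

k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers
u_hat = np.fft.fft(u)
ux = np.real(np.fft.ifft(1j * k * u_hat))    # spectral derivative du/dx

nonlinear = u * ux                           # pointwise product: u * u_x
# Analytic check: sin(x) * cos(x) = 0.5 * sin(2x)
err = np.max(np.abs(nonlinear - 0.5 * np.sin(2.0 * x)))
```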
NASA Technical Reports Server (NTRS)
Chen, Shu-Po
1999-01-01
This paper presents software for solving the non-conforming fluid-structure interfaces in aeroelastic simulation. It reviews the algorithms for interpolation and integration and highlights the flexibility and user-friendly features that allow the user to select existing structure and fluid packages, such as NASTRAN and CFL3D, to perform the simulation. The presented software is validated by computing the High Speed Civil Transport model.
Parallel Computational Fluid Dynamics: Current Status and Future Requirements
NASA Technical Reports Server (NTRS)
Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)
1994-01-01
One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. We then discuss the long-term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.
Computational Fluid Dynamics - Applications in Manufacturing Processes
NASA Astrophysics Data System (ADS)
Beninati, Maria Laura; Kathol, Austin; Ziemian, Constance
2012-11-01
A new Computational Fluid Dynamics (CFD) exercise has been developed for the undergraduate introductory fluid mechanics course at Bucknell University. The goal is to develop a computational exercise for students that links the manufacturing processes course and the concurrent fluid mechanics course in a way that reinforces the concepts in both. In general, CFD is used as a tool to increase student understanding of the fundamentals in a virtual world. A ``learning factory,'' currently in development at Bucknell, seeks to use the laboratory as a means to link courses that previously seemed to have little correlation. A large part of the manufacturing processes course is a project using an injection molding machine. The flow of pressurized molten polyurethane into the mold cavity is also an example of fluid motion (a jet of liquid hitting a plate) applied in manufacturing. The students run a CFD process that captures this flow using a virtual mold created with a graphics package, such as SolidWorks. The laboratory structure is currently being implemented and analyzed as part of the ``learning factory''. Lastly, surveys taken before and after the CFD exercise demonstrate a better understanding of both CFD and the manufacturing process.
Computational fluid dynamics in cardiovascular disease.
Lee, Byoung-Kwon
2011-08-01
Computational fluid dynamics (CFD) is a mechanical engineering field for analyzing fluid flow, heat transfer, and associated phenomena using computer-based simulation. CFD is a widely adopted methodology for solving complex problems in many modern engineering fields. Its merit is that new and improved devices and system designs can be developed, and existing equipment can be optimized, through computational simulation, resulting in enhanced efficiency and lower operating costs. In the biomedical field, however, CFD is still emerging. The main reason why CFD in the biomedical field has lagged behind is the tremendous complexity of human body fluid behavior. Recently, CFD biomedical research has become more accessible, because high-performance hardware and software are readily available with advances in computer science. All CFD processes contain three main components: pre-processing, solving mathematical equations, and post-processing. Accurate initial geometric modeling and boundary conditions are essential to achieve adequate results. Medical imaging, such as ultrasound imaging, computed tomography, and magnetic resonance imaging, can be used for modeling, and Doppler ultrasound, pressure wire, and non-invasive pressure measurements are used for flow velocity and pressure as boundary conditions. Many simulations and clinical results have been used to study congenital heart disease, heart failure, ventricle function, aortic disease, and carotid and intra-cranial cerebrovascular diseases. With decreasing hardware costs and rapid computing times, researchers and medical scientists may increasingly use this reliable CFD tool to deliver accurate results. A realistic, multidisciplinary approach is essential to accomplish these tasks, and ongoing collaboration between mechanical engineers and clinical and medical scientists is essential. CFD may be an important methodology to understand the pathophysiology of the development and
Computer algorithms to detect bloodstream infections.
Trick, William E; Zagorski, Brandon M; Tokars, Jerome I; Vernon, Michael O; Welbel, Sharon F; Wisniewski, Mary F; Richards, Chesley; Weinstein, Robert A
2004-09-01
We compared manual and computer-assisted bloodstream infection surveillance for adult inpatients at two hospitals. We identified hospital-acquired, primary, central-venous catheter (CVC)-associated bloodstream infections by using five methods: retrospective, manual record review by investigators; prospective, manual review by infection control professionals; positive blood culture plus manual CVC determination; computer algorithms; and computer algorithms and manual CVC determination. We calculated sensitivity, specificity, predictive values, plus the kappa statistic (kappa) between investigator review and other methods, and we correlated infection rates for seven units. The kappa value was 0.37 for infection control review, 0.48 for positive blood culture plus manual CVC determination, 0.49 for computer algorithm, and 0.73 for computer algorithm plus manual CVC determination. Unit-specific infection rates, per 1,000 patient days, were 1.0-12.5 by investigator review and 1.4-10.2 by computer algorithm (correlation r = 0.91, p = 0.004). Automated bloodstream infection surveillance with electronic data is an accurate alternative to surveillance with manually collected data.
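The kappa statistic used throughout the study measures agreement beyond chance between two raters. A minimal implementation follows, with made-up 2x2 counts chosen only to illustrate a value near the reported range:

```python
def cohens_kappa(a, b, c, d):
    """Kappa from a 2x2 agreement table:
    a = both raters positive, b = rater 1 only positive,
    c = rater 2 only positive, d = both raters negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical counts for one surveillance method vs. investigator review.
kappa = cohens_kappa(a=40, b=10, c=15, d=935)
```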
Computational fluid dynamics using CATIA created geometry
NASA Astrophysics Data System (ADS)
Gengler, Jeanne E.
1989-07-01
A method has been developed to link the geometry definition residing on a CAD/CAM system with a computational fluid dynamics (CFD) tool needed to evaluate aerodynamic designs and requiring the memory capacity of a supercomputer. Requirements for surfaces suitable for CFD analysis are discussed. Techniques for developing surfaces and verifying their smoothness are compared, showing the capability of the CAD/CAM system. The utilization of a CAD/CAM system to create a computational mesh is explained, and the mesh interaction with the geometry and input file preparation for the CFD analysis is discussed.
Parallel and Distributed Computing Combinatorial Algorithms
1993-10-01
PARALLEL AND DISTRIBUTED COMPUTING COMBINATORIAL ALGORITHMS; grant F49620-92-J-0125; Dr. Leighton. ... Research was conducted on several problems involving parallel and distributed computing and combinatorial optimization. This research is reported in the numerous papers that resulted, including work on network decomposition, in Proceedings of the Eleventh Annual ACM Symposium on Principles of Distributed Computing, August 1992.
The use of computers for instruction in fluid dynamics
NASA Technical Reports Server (NTRS)
Watson, Val
1987-01-01
Applications for computers which improve instruction in fluid dynamics are examined. Computers can be used to illustrate three-dimensional flow fields and simple fluid dynamics mechanisms, to solve fluid dynamics problems, and for electronic sketching. The usefulness of computer applications is limited by computer speed, memory, and software and the clarity and field of view of the projected display. Proposed advances in personal computers which will address these limitations are discussed. Long range applications for computers in education are considered.
Computational Fluid Dynamics: Algorithms and Supercomputers
1988-03-01
... less than 40 x 30 x 20 grid points for Navier-Stokes, less than 50 x 30 x 30 grid points for Euler, and less than 160 x 30 x 30 grid points for grid ... Multiple grids use more than one grid to mesh an overall configuration, with each individual grid of the system patched together or overset. The sketches shown in Fig. 6.3 illustrate several simple patched and overset grid configurations in two dimensions for a typical two-body problem. As the sketches ...
Visualization of unsteady computational fluid dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1995-01-01
The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamics (CFD) results is a supercomputer-class machine. Massively Parallel Processors (MPPs), such as the 160-node IBM SP2 at NAS, and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array) provide the required computational bandwidth for CFD calculations of transient problems. Work is in progress on a set of software tools designed specifically to address visualizing 3D unsteady CFD results in these supercomputer-like environments. The visualization is executed concurrently with the CFD solver. The parallel version of Visual3, pV3, required splitting up the unsteady visualization task to allow execution across a network of workstations and compute servers. In this computing model the network is almost always the bottleneck, so much of the effort involved techniques to reduce the size of the data transferred between machines.
Bioreactor studies and computational fluid dynamics.
Singh, H; Hutmacher, D W
2009-01-01
The hydrodynamic environment "created" by bioreactors for the culture of a tissue engineered construct (TEC) is known to influence cell migration, proliferation and extra cellular matrix production. However, tissue engineers have looked at bioreactors as black boxes within which TECs are cultured mainly by trial and error, as the complex relationship between the hydrodynamic environment and tissue properties remains elusive, yet is critical to the production of clinically useful tissues. It is well known in the chemical and biotechnology field that a more detailed description of fluid mechanics and nutrient transport within process equipment can be achieved via the use of computational fluid dynamics (CFD) technology. Hence, the coupling of experimental methods and computational simulations forms a synergistic relationship that can potentially yield greater and yet, more cohesive data sets for bioreactor studies. This review aims at discussing the rationale of using CFD in bioreactor studies related to tissue engineering, as fluid flow processes and phenomena have direct implications on cellular response such as migration and/or proliferation. We conclude that CFD should be seen by tissue engineers as an invaluable tool allowing us to analyze and visualize the impact of fluidic forces and stresses on cells and TECs.
Computational fluid dynamics capability for the solid fuel ramjet projectile
NASA Astrophysics Data System (ADS)
Nusca, Michael J.; Chakravarthy, Sukumar R.; Goldberg, Uriel C.
1988-12-01
A computational fluid dynamics solution of the Navier-Stokes equations has been applied to the internal and external flow of inert solid-fuel ramjet projectiles. Computational modeling reveals internal flowfield details not attainable by flight or wind tunnel measurements, thus contributing to the current investigation into the flight performance of solid-fuel ramjet projectiles. The present code employs numerical algorithms termed total variation diminishing (TVD) schemes. Computational solutions indicate the importance of several special features of the code, including the zonal grid framework, the TVD scheme, and a recently developed backflow turbulence model. The solutions are compared with results of internal surface pressure measurements. As demonstrated by these comparisons, the use of a backflow turbulence model distinguishes between satisfactory and poor flowfield predictions.
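Total variation diminishing schemes suppress the spurious oscillations that unlimited higher-order schemes produce at discontinuities. Below is a generic minmod-limited TVD update for linear advection, a textbook illustration rather than the paper's Navier-Stokes code:

```python
import numpy as np

def minmod(p, q):
    """Slope limiter: the smaller-magnitude difference, or 0 at extrema."""
    return np.where(p * q > 0.0,
                    np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

def tvd_step(u, cfl):
    """One MUSCL step for u_t + a u_x = 0 (a > 0, periodic grid)."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    face = u + 0.5 * (1.0 - cfl) * slope      # limited face value
    return u - cfl * (face - np.roll(face, 1))

u = np.where(np.arange(50) < 25, 1.0, 0.0)    # advect a step profile
for _ in range(10):
    u = tvd_step(u, cfl=0.5)
# The limited update creates no new extrema: u stays within [0, 1],
# which is the defining TVD property.
```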
Sawfishes stealth revealed using computational fluid dynamics.
Bradney, D R; Davidson, A; Evans, S P; Wueringer, B E; Morgan, D L; Clausen, P D
2017-02-27
Detailed computational fluid dynamics simulations for the rostrum of three species of sawfish (Pristidae) revealed that negligible turbulent flow is generated from all rostra during lateral swipe prey manipulation and swimming. These results suggest that sawfishes are effective stealth hunters that may not be detected by their teleost prey's lateral line sensory system during pursuits. Moreover, during lateral swipes, the rostra were found to induce little velocity into the surrounding fluid. Consistent with previous data of sawfish feeding behaviour, these data indicate that the rostrum is therefore unlikely to be used to stir up the bottom to uncover benthic prey. Whilst swimming with the rostrum inclined at a small angle to the horizontal, the coefficient of drag of the rostrum is relatively low and the coefficient of lift is zero.
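The drag and lift coefficients reported for the rostrum follow the standard normalization by dynamic pressure and reference area; the values below are hypothetical and for illustration only:

```python
def force_coefficient(force, rho, speed, area):
    """C = F / (0.5 * rho * V^2 * A): force normalized by dynamic
    pressure times a reference area."""
    return force / (0.5 * rho * speed**2 * area)

# Hypothetical rostrum values: 0.9 N drag in seawater (1025 kg/m^3)
# at 1.5 m/s over a 0.02 m^2 reference area.
cd = force_coefficient(force=0.9, rho=1025.0, speed=1.5, area=0.02)
```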
A perspective of computational fluid dynamics
NASA Technical Reports Server (NTRS)
Kutler, P.
1986-01-01
Computational fluid dynamics (CFD) is maturing, and is at a stage in its technological life cycle in which it is routinely applied to some rather complicated problems; it is starting to have an impact on the design cycle of aerospace flight vehicles and their components. CFD is also being used to better understand the fluid physics of flows heretofore not understood, such as three-dimensional separation. CFD both complements and is complemented by experiments. In this paper, the primary and secondary pacing items that have governed CFD in the past are reviewed and updated. The future prospects of CFD are explored, offering those working in the discipline challenges that should extend the technological life cycle and further increase the capabilities of a proven, demonstrated technology.
Development of multigrid algorithms for problems from fluid dynamics
NASA Astrophysics Data System (ADS)
Becker, K.; Trottenberg, U.
Multigrid algorithms are developed to demonstrate the efficiency of the multigrid technique for complicated fluid dynamics problems with regard to error reduction and discretization accuracy. Subsonic potential 2-D flow around a profile is studied, as well as rotation-symmetric flow in a slot between two rotating spheres and the flow in the combustion chamber of Otto engines. The study of the 2-D subsonic potential flow around a profile with the multigrid algorithm is discussed.
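The error-reduction idea behind the multigrid technique can be illustrated with a minimal two-grid cycle for the 1D Poisson problem -u'' = f; this is a generic sketch, not the authors' algorithm:

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi sweeps for -u'' = f (damps high-frequency error)."""
    for _ in range(sweeps):
        un = u.copy()
        un[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = un
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def coarse_solve(f2, h2):
    """Direct solve of the coarse-grid correction equation."""
    m = len(f2) - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (h2 * h2)
    e = np.zeros(len(f2))
    e[1:-1] = np.linalg.solve(A, f2[1:-1])
    return e

def two_grid(u, f, h):
    u = smooth(u, f, h)                       # pre-smooth
    r = residual(u, f, h)
    e2 = coarse_solve(r[::2].copy(), 2 * h)   # restrict residual, solve coarse
    e = np.interp(np.arange(len(u)), np.arange(len(u))[::2], e2)  # prolongate
    return smooth(u + e, f, h)                # correct and post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))   # limited by discretization error
```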
Computational fluid dynamics in coronary artery disease.
Sun, Zhonghua; Xu, Lei
2014-12-01
Computational fluid dynamics (CFD) is a method widely used in mechanical engineering to solve complex problems by analysing fluid flow, heat transfer, and associated phenomena using computer simulations. In recent years, CFD has been increasingly used in biomedical research on coronary artery disease because of advances in high-performance hardware and software. CFD techniques have been applied to study cardiovascular haemodynamics through simulation tools to predict the behaviour of circulatory blood flow in the human body. CFD simulation based on 3D luminal reconstructions can be used to analyse local flow fields and flow profiling due to changes of coronary artery geometry, thus identifying risk factors for the development and progression of coronary artery disease. This review aims to provide an overview of CFD applications in coronary artery disease, including the biomechanics of atherosclerotic plaques, plaque progression and rupture, and regional haemodynamics relative to plaque location and composition. A critical appraisal is given of a more recently developed application, fractional flow reserve based on CFD computation, with regard to its diagnostic accuracy in the detection of haemodynamically significant coronary artery disease.
Magnetic Storm Simulation With Multiple Ion Fluids: Algorithm
NASA Astrophysics Data System (ADS)
Toth, G.; Glocer, A.; Gombosi, T.
2008-12-01
We describe our progress in extending the capabilities of the BATS-R-US MHD code to model multiple ion fluids. We solve the full multi-ion equations with no assumptions about the relative motion of the ion fluids. We discuss the numerical difficulties and the algorithmic solutions: the use of a total ion fluid in combination with the individual ion fluids, the use of point-implicit source terms with an analytic Jacobian, a simple criterion to separate the single-ion and multi-ion regions in our magnetosphere applications, and an artificial friction term to limit the relative velocities of the ion fluids to reasonable values. This latter term is used to mimic the effect of two-stream instabilities in a crude manner. The new code is fully integrated into the Space Weather Modeling Framework and has been coupled with the ionosphere, inner magnetosphere, and polar wind models to simulate the May 4, 1998 magnetic storm.
Shuttle rocket booster computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chung, T. J.; Park, O. Y.
1988-01-01
Additional results and a revised and improved computer program listing from the shuttle rocket booster computational fluid dynamics formulations are presented. Numerical calculations for the flame zone of solid propellants are carried out using the Galerkin finite elements, with perturbations expanded to the zeroth, first, and second orders. The results indicate that amplification of oscillatory motions does indeed prevail in high frequency regions. For the second order system, the trend is similar to the first order system for low frequencies, but instabilities may appear at frequencies lower than those of the first order system. The most significant effect of the second order system is that the admittance is extremely oscillatory between moderately high frequency ranges.
Problem Solving with Generic Algorithms and Computers.
ERIC Educational Resources Information Center
Larson, Jay
Success in using a computer in education as a problem-solving tool requires a change in the way of thinking or of approaching a problem. An algorithm, i.e., a finite step-by-step solution to a problem, can be designed around the data processing concepts of input, processing, and output to provide a basis for classifying problems. If educators…
A modular system for computational fluid dynamics
NASA Astrophysics Data System (ADS)
McCarthy, D. R.; Foutch, D. W.; Shurtleff, G. E.
This paper describes the Modular System for Computational Fluid Dynamics (MOSYS), a software facility for the construction and execution of arbitrary solution procedures on multizone, structured body-fitted grids. It focuses on the structure and capabilities of MOSYS and the philosophy underlying its design. The system offers different levels of capability depending on the objectives of the user. It enables the applications engineer to quickly apply a variety of methods to geometrically complex problems. The methods developer can implement new algorithms in a simple form and immediately apply them to problems of both theoretical and practical interest. For the code builder, it constitutes a toolkit for the fast construction of CFD codes tailored to various purposes. These capabilities are illustrated through applications to a particularly complex problem encountered in aircraft propulsion systems, namely the analysis of a landing aircraft in reverse thrust.
Improvement in computational fluid dynamics through boundary verification and preconditioning
NASA Astrophysics Data System (ADS)
Folkner, David E.
This thesis provides improvements to computational fluid dynamics accuracy and efficiency through two main methods: a new boundary condition verification procedure and preconditioning techniques. First, a new verification approach that addresses boundary conditions was developed. In order to apply the verification approach to a large range of arbitrary boundary conditions, it was necessary to develop a unifying mathematical formulation. A framework was developed that allows for the application of Dirichlet, Neumann, and extrapolation boundary conditions, or in some cases the equations of motion directly. Verification of the boundary condition techniques was performed using exact solutions from canonical fluid dynamics test cases. Second, to reduce computation time and improve accuracy, preconditioning algorithms were applied via artificial dissipation schemes. A new convective upwind and split pressure (CUSP) scheme was devised and shown to be more effective than traditional preconditioning schemes in certain scenarios. The new scheme was compared with traditional schemes for unsteady flows in which both convective and acoustic effects dominated. Both the boundary conditions and the preconditioning algorithms were implemented in the context of a "strand grid" solver. While not the focus of this thesis, strand grids provide automatic viscous-quality meshing and are suitable for moving-mesh overset problems.
Algorithms Bridging Quantum Computation and Chemistry
NASA Astrophysics Data System (ADS)
McClean, Jarrod Ryan
The design of new materials and chemicals derived entirely from computation has long been a goal of computational chemistry, and the governing equation whose solution would permit this dream is known. Unfortunately, the exact solution to this equation has been far too expensive and clever approximations fail in critical situations. Quantum computers offer a novel solution to this problem. In this work, we develop not only new algorithms to use quantum computers to study hard problems in chemistry, but also explore how such algorithms can help us to better understand and improve our traditional approaches. In particular, we first introduce a new method, the variational quantum eigensolver, which is designed to maximally utilize the quantum resources available in a device to solve chemical problems. We apply this method in a real quantum photonic device in the lab to study the dissociation of the helium hydride (HeH+) molecule. We also enhance this methodology with architecture specific optimizations on ion trap computers and show how linear-scaling techniques from traditional quantum chemistry can be used to improve the outlook of similar algorithms on quantum computers. We then show how studying quantum algorithms such as these can be used to understand and enhance the development of classical algorithms. In particular we use a tool from adiabatic quantum computation, Feynman's Clock, to develop a new discrete time variational principle and further establish a connection between real-time quantum dynamics and ground state eigenvalue problems. We use these tools to develop two novel parallel-in-time quantum algorithms that outperform competitive algorithms as well as offer new insights into the connection between the fermion sign problem of ground states and the dynamical sign problem of quantum dynamics. Finally we use insights gained in the study of quantum circuits to explore a general notion of sparsity in many-body quantum systems. In particular we use
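The variational quantum eigensolver mentioned above rests on the Rayleigh-Ritz principle: the energy of any trial state upper-bounds the ground-state energy, so tuning ansatz parameters to minimize the measured energy approaches the ground state. A purely classical toy, with a 2x2 Hamiltonian standing in for the quantum device:

```python
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                  # toy 2x2 Hamiltonian

def energy(theta):
    """Energy expectation of a one-parameter real trial state."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Classical stand-in for the measure-and-optimize loop: scan the
# parameter and keep the lowest energy found.
thetas = np.linspace(0.0, np.pi, 2001)
e_min = min(energy(t) for t in thetas)
exact = np.linalg.eigvalsh(H)[0]             # exact ground-state energy
```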
NASA Technical Reports Server (NTRS)
Hirsch, Charles (Editor); Periaux, J. (Editor); Kordulla, W. (Editor)
1992-01-01
A conference was held on Computational Fluid Dynamics (CFD) and produced related papers. Topics included CFD algorithms, transition and turbulent flow, hypersonic reacting flow, incompressible flow, two phase flow and combustion, internal flow, compressible flow, grid generation and adaption, boundary layers, environmental and industrial applications, and non-Newtonian flow.
NASA Astrophysics Data System (ADS)
Hirsch, Charles; Periaux, J.; Kordulla, W.
A conference was held on Computational Fluid Dynamics (CFD) and produced related papers. Topics included CFD algorithms, transition and turbulent flow, hypersonic reacting flow, incompressible flow, two phase flow and combustion, internal flow, compressible flow, grid generation and adaption, boundary layers, environmental and industrial applications, and non-Newtonian flow. For individual titles, see A95-95358 through A95-95507.
Computational algorithms for simulations in atmospheric optics.
Konyaev, P A; Lukin, V P
2016-04-20
A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using the Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo frequency of up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 processors at 1.5 GHz.
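A spectral method for generating 2D random fields, of the kind mentioned above, filters white Gaussian noise by the square root of a target power spectrum and applies an inverse FFT. The sketch below uses a Kolmogorov-like exponent common in atmospheric optics; it is illustrative, not the authors' modified spectral-phase method:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
kx = np.fft.fftfreq(N)
k = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))  # radial wavenumber
k[0, 0] = np.inf                       # suppress the undefined k = 0 mode

amp = k ** (-11.0 / 6.0)               # sqrt of a k^(-11/3) power law
noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
screen = np.real(np.fft.ifft2(amp * noise))   # one 2D random-field sample
```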
Computational methods of the Advanced Fluid Dynamics Model
Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.
1987-01-01
To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.
HYDRA, A finite element computational fluid dynamics code: User manual
Christon, M.A.
1995-06-01
HYDRA is a finite element code which has been developed specifically to attack the class of transient, incompressible, viscous, computational fluid dynamics problems which are predominant in the world which surrounds us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method which make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise constituted by DYNA3D and NIKE3D. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to try to achieve a high level of performance, rather than just performing a port.
Algorithms for the Computation of Debris Risks
NASA Technical Reports Server (NTRS)
Matney, Mark
2017-01-01
Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper will present an introduction to these algorithms and the assumptions upon which they are based.
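Orbital collision rates of the kind mentioned above are commonly built on a kinetic-theory "flux times cross-section" estimate combined with Poisson statistics. The sketch below is that generic textbook estimate, not NASA's ORDEM algorithm; the flux value and radii in the usage lines are made up.

```python
import math

def collision_rate(flux, r_satellite, r_debris):
    # rate (collisions/year) = debris flux (objects / m^2 / yr)
    #                          * combined collision cross-section (m^2)
    sigma = math.pi * (r_satellite + r_debris) ** 2
    return flux * sigma

def collision_probability(rate, years):
    # Poisson probability of at least one collision over the mission
    return 1.0 - math.exp(-rate * years)

# Hypothetical numbers: 1e-5 impacts/m^2/yr flux, 2 m satellite, 5 cm debris
rate = collision_rate(1.0e-5, 2.0, 0.05)
p10 = collision_probability(rate, 10.0)
```

The real algorithms must integrate the flux over orbit geometry and debris size; this sketch only shows the final bookkeeping step.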
Parallel Computing Strategies for Irregular Algorithms
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
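Of the techniques listed, load balancing is the easiest to illustrate in isolation. The sketch below is the classic longest-processing-time greedy heuristic, a generic stand-in for the smarter graph-based partitioners the paper actually studies; the task weights are hypothetical.

```python
import heapq

def lpt_schedule(task_weights, n_procs):
    """Longest-processing-time-first: visit tasks heaviest-first, always
    assigning to the currently least-loaded processor (min-heap)."""
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, processor)
    heapq.heapify(heap)
    assignment = {}
    for task, w in sorted(enumerate(task_weights), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)
        assignment[task] = p
        heapq.heappush(heap, (load + w, p))
    loads = [0.0] * n_procs
    for task, p in assignment.items():
        loads[p] += task_weights[task]
    return assignment, loads

# Hypothetical irregular task weights distributed over 2 processors
assignment, loads = lpt_schedule([5.0, 4.0, 3.0, 3.0, 3.0], 2)
```

For irregular applications the weights change at run time, which is why the paper emphasizes dynamic repartitioning rather than a one-shot schedule like this.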
Computational Fluid Dynamics Program at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1989-01-01
The Computational Fluid Dynamics (CFD) Program at NASA Ames Research Center is reviewed and discussed. The technical elements of the CFD Program are listed and briefly discussed. These elements include algorithm research, research and pilot code development, scientific visualization, advanced surface representation, volume grid generation, and numerical optimization. Next, the discipline of CFD is briefly discussed and related to other areas of research at NASA Ames including experimental fluid dynamics, computer science research, computational chemistry, and numerical aerodynamic simulation. These areas combine with CFD to form a larger area of research, which might collectively be called computational technology. The ultimate goal of computational technology research at NASA Ames is to increase the physical understanding of the world in which we live, solve problems of national importance, and increase the technical capabilities of the aerospace community. Next, the major programs at NASA Ames that either use CFD technology or perform research in CFD are listed and discussed. Briefly, this list includes turbulent/transition physics and modeling, high-speed real gas flows, interdisciplinary research, turbomachinery demonstration computations, complete aircraft aerodynamics, rotorcraft applications, powered lift flows, high alpha flows, multiple body aerodynamics, and incompressible flow applications. Some of the individual problems actively being worked in each of these areas are listed to help define the breadth or extent of CFD involvement in each of these major programs. State-of-the-art examples of various CFD applications are presented to highlight most of these areas. The main emphasis of this portion of the presentation is on examples which will not otherwise be treated at this conference by the individual presentations. Finally, a list of principal current limitations and expected future directions is given.
Nonlinear ship waves and computational fluid dynamics
MIYATA, Hideaki; ORIHARA, Hideo; SATO, Yohei
2014-01-01
Research works undertaken in the first author’s laboratory at the University of Tokyo over the past 30 years are highlighted. Finding of the occurrence of nonlinear waves (named Free-Surface Shock Waves) in the vicinity of a ship advancing at constant speed provided the start-line for the progress of innovative technologies in the ship hull-form design. Based on these findings, a multitude of the Computational Fluid Dynamic (CFD) techniques have been developed over this period, and are highlighted in this paper. The TUMMAC code has been developed for wave problems, based on a rectangular grid system, while the WISDAM code treats both wave and viscous flow problems in the framework of a boundary-fitted grid system. These two techniques are able to cope with almost all fluid dynamical problems relating to ships, including the resistance, ship’s motion and ride-comfort issues. Consequently, the two codes have contributed significantly to the progress in the technology of ship design, and now form an integral part of the ship-designing process. PMID:25311139
Computational fluid dynamic modelling of cavitation
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.
1993-01-01
Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids into the analysis. Extensions of the current two-dimensional steady state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
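The "multiple discretizations" idea (second-order in the operator, first-order in the preconditioner) can be sketched on a 1D convection-diffusion model problem: iterate on the residual of the centered-difference operator while solving only with an LU factorization of its upwind-difference counterpart, the classic defect-correction form of the pairing. Grid size, convection speed, and right-hand side below are illustrative; the paper uses the pairing inside a Newton/Krylov iteration rather than this plain linear loop.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def conv_diff(n, c, upwind):
    """-u'' + c u' on (0,1), homogeneous Dirichlet BCs, n interior points."""
    h = 1.0 / (n + 1)
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h ** 2
    if upwind:   # first-order convection (adds stabilizing diffusion)
        A = A + c * sp.diags([-1.0, 1.0], [-1, 0], shape=(n, n)) / h
    else:        # second-order centered convection
        A = A + c * sp.diags([-1.0, 1.0], [-1, 1], shape=(n, n)) / (2 * h)
    return A.tocsc()

n, c = 200, 50.0
A_high = conv_diff(n, c, upwind=False)            # operator: second order
M_low = spla.splu(conv_diff(n, c, upwind=True))   # preconditioner: first order

b = np.ones(n)
x = np.zeros(n)
for k in range(100):                  # defect-correction iteration
    r = b - A_high @ x
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
    x = x + M_low.solve(r)            # only the low-order matrix is factored
```

The low-order matrix is diagonally dominant and cheap to factor, yet close enough to the high-order one that the loop contracts quickly; using it as a Krylov preconditioner exploits the same closeness.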
Lectures series in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Thompson, Kevin W.
1987-01-01
The lecture notes cover the basic principles of computational fluid dynamics (CFD). They are oriented more toward practical applications than theory, and are intended to serve as a unified source for basic material in the CFD field as well as an introduction to more specialized topics in artificial viscosity and boundary conditions. Each chapter in the text is associated with a videotaped lecture. The basic properties of conservation laws, wave equations, and shock waves are described. The duality of the conservation law and wave representations is investigated, and shock waves are examined in some detail. Finite difference techniques are introduced for the solution of wave equations and conservation laws. Stability analysis for finite difference approximations is presented. A consistent description of artificial viscosity methods is provided. Finally, the problem of nonreflecting boundary conditions is treated.
Computational fluid dynamics: Transition to design applications
NASA Technical Reports Server (NTRS)
Bradley, R. G.; Bhateley, I. C.; Howell, G. A.
1987-01-01
The development of aerospace vehicles, over the years, was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and data bases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.
Nonlinear Fluid Computations in a Distributed Environment
NASA Technical Reports Server (NTRS)
Atwood, Christopher A.; Smith, Merritt H.
1995-01-01
The performance of a loosely and tightly-coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
Artificial Intelligence In Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Vogel, Alison Andrews
1991-01-01
Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.
Computational fluid dynamics modelling in cardiovascular medicine
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019
NASA Technical Reports Server (NTRS)
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.
Visualization of Unsteady Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1997-01-01
The current compute environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster) provide the required computation bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snapshot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state-variables. (2) Disk speed vs. computational speeds. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements in the last decade or two, it is easy to see that relying on disk performance (vs. CPU improvement) may not be the best route to enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data, but are running the solver, and the solver outputs the solution. These traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work upon a single snapshot of the data but some visualization tools for transient
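The disk-space point (1) is easy to quantify: one floating-point word per state variable per mesh node, per saved time frame. The sketch below assumes a hypothetical 10-million-node mesh and 8-byte words; both numbers are illustrative, not quoted from the abstract.

```python
def snapshot_gb(n_nodes, n_vars, bytes_per_word=8):
    """One time-frame snapshot: one floating-point word per state variable
    per mesh node, expressed in gigabytes."""
    return n_nodes * n_vars * bytes_per_word / 1.0e9

# Hypothetical 10-million-node mesh:
euler = snapshot_gb(10_000_000, 5)   # 5 primitive variables (Euler) -> 0.4 GB
ns    = snapshot_gb(10_000_000, 7)   # 7 state variables (RANS)     -> 0.56 GB
```

Multiplying by hundreds or thousands of saved frames makes the disk-speed concerns in points (2) and (3) concrete.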
Computational plasticity algorithm for particle dynamics simulations
NASA Astrophysics Data System (ADS)
Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.
2017-03-01
The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
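The explicit/implicit dichotomy the abstract describes can be caricatured on a single particle above a rigid floor: a DEM-style explicit step applies a stiff penalty force, while a contact-dynamics-style implicit step projects the state back onto the admissible set, the analogue of a plastic return mapping. All constants below are illustrative, and this 1D toy is of course far simpler than the paper's multi-particle schemes.

```python
def dem_step(x, v, dt, k=1.0e4, g=-9.81):
    """Explicit, DEM-style step: penetration resisted by a stiff penalty
    spring; stability restricts dt relative to sqrt(k)."""
    f = g + (-k * x if x < 0.0 else 0.0)   # penalty force below the floor
    v = v + dt * f
    x = x + dt * v                          # semi-implicit (symplectic) Euler
    return x, v

def cd_step(x, v, dt, g=-9.81):
    """Implicit, contact-dynamics-style step: take the free step, then
    project onto the admissible set {x >= 0}, zeroing the approach velocity,
    much like an implicit return mapping in plasticity."""
    v = v + dt * g
    x = x + dt * v
    if x < 0.0:
        x, v = 0.0, max(v, 0.0)
    return x, v
```

The penalty version tolerates small penetrations and needs a small dt; the projection version enforces the constraint exactly at the end of every step, which is the behavior usually labelled contact dynamics.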
Computational Controls Workstation: Algorithms and hardware
NASA Technical Reports Server (NTRS)
Venugopal, R.; Kumar, M.
1993-01-01
The Computational Controls Workstation provides an integrated environment for the modeling, simulation, and analysis of Space Station dynamics and control. Using highly efficient computational algorithms combined with a fast parallel processing architecture, the workstation makes real-time simulation of flexible body models of the Space Station possible. A consistent, user-friendly interface and state-of-the-art post-processing options are combined with powerful analysis tools and model databases to provide users with a complete environment for Space Station dynamics and control analysis. The software tools available include a solid modeler, graphical data entry tool, O(n) algorithm-based multi-flexible body simulation, and 2D/3D post-processors. This paper describes the architecture of the workstation while a companion paper describes performance and user perspectives.
Fast computation algorithms for speckle pattern simulation
Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru
2013-11-13
We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly for speckle pattern simulation. We use mainly the scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than direct computation, and we have circumvented the restrictions regarding the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be off-axis shifted.
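The convolution-theorem evaluation of the Fresnel approximation reduces to one forward and one inverse FFT with a quadratic-phase transfer function. The sketch below is the standard transfer-function method only; it omits the authors' size-restriction and tilted-plane extensions, and the beam parameters in the test are made up.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Propagate a sampled complex field u0 a distance z via the Fresnel
    transfer function; the constant exp(ikz) phase factor is dropped."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    f2 = np.add.outer(fx ** 2, fx ** 2)            # fx^2 + fy^2 on the grid
    H = np.exp(-1j * np.pi * wavelength * z * f2)  # |H| = 1 everywhere
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

Because |H| = 1, total power is conserved exactly, which makes a convenient numerical sanity check; a speckle simulation would feed a random-phase aperture into the same routine.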
Computational Fluid Dynamics of rising droplets
Wagner, Matthew; Francois, Marianne M.
2012-09-05
The main goal of this study is to perform simulations of droplet dynamics using Truchas, a LANL-developed computational fluid dynamics (CFD) software, and compare them to a computational study of Hysing et al. [IJNMF, 2009, 60:1259]. Understanding droplet dynamics is of fundamental importance in liquid-liquid extraction, a process used in the nuclear fuel cycle to separate various components. Simulations of a single droplet rising by buoyancy are conducted in two dimensions. Multiple parametric studies are carried out to ensure the problem set-up is optimized. An Interface Smoothing Length (ISL) study and a mesh resolution study are performed to verify convergence of the calculations. ISL is a parameter for the interface curvature calculation. Further, wall effects are investigated and checked against existing correlations. The ISL study found that the optimal ISL value is 2.5Δx, with Δx being the mesh cell spacing. The mesh resolution study found that the optimal mesh resolution is d/h = 40, where d is the drop diameter and h = Δx. In order for wall effects on terminal velocity to be insignificant, a conservative wall width of 9d or a nonconservative wall width of 7d can be used. The percentage differences between Hysing et al. [IJNMF, 2009, 60:1259] and Truchas for the velocity profiles vary from 7.9% to 9.9%. The computed droplet velocity and interface profiles are found in agreement with the study. The CFD calculations are performed on multiple cores, using LANL's Institutional High Performance Computing.
Computational Fluid Dynamics Modeling of Bacillus anthracis ...
Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predicted
Adaptive kinetic-fluid solvers for heterogeneous computing architectures
NASA Astrophysics Data System (ADS)
Zabelok, Sergey; Arslanbekov, Robert; Kolobov, Vladimir
2015-12-01
We show the feasibility and benefits of porting an adaptive multi-scale kinetic-fluid code to CPU-GPU systems. Challenges are due to the irregular data access for the adaptive Cartesian mesh, the vast difference in computational cost between kinetic and fluid cells, and the desire to evenly load all CPUs and GPUs during grid adaptation and algorithm refinement. Our Unified Flow Solver (UFS) combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. Using GPUs enables hybrid simulations of mixed rarefied-continuum flows with a million Boltzmann cells, each having a 24 × 24 × 24 velocity mesh. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using the discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) solver, and a mesoscopic solver based on the Lattice Boltzmann Method (LBM), all using the adaptive Cartesian mesh. Double-digit speedups on a single GPU and good scaling for multi-GPUs have been demonstrated.
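The cell-by-cell solver selection rests on a continuum-breakdown criterion. One common variant, sketched below in 1D, is a gradient-length-local Knudsen number; the threshold of 0.05 is a typical literature value and the shock-like density profile is hypothetical, so this is an illustration rather than UFS's exact formula.

```python
import numpy as np

def select_solvers(rho, mean_free_path, dx, threshold=0.05):
    """Label each cell 'kinetic' or 'fluid' by a gradient-length-local
    Knudsen number Kn = lambda * |d rho / dx| / rho; cells above the
    continuum-breakdown threshold get the kinetic (Boltzmann) solver."""
    kn = mean_free_path * np.abs(np.gradient(rho, dx)) / rho
    return np.where(kn > threshold, "kinetic", "fluid")

# Hypothetical 1D density field with a shock-like jump at x = 0.5
x = np.linspace(0.0, 1.0, 201)
rho = 1.0 + 0.5 * np.tanh((x - 0.5) / 0.01)
labels = select_solvers(rho, mean_free_path=0.005, dx=x[1] - x[0])
```

Only the steep-gradient cells near the jump are flagged kinetic, which is exactly the cost imbalance (expensive velocity-mesh cells next to cheap fluid cells) that makes the CPU-GPU load balancing hard.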
A new fluid-solid interface algorithm for simulating fluid structure problems in FGM plates
NASA Astrophysics Data System (ADS)
Eghtesad, A.; Shafiei, A. R.; Mahzoon, M.
2012-04-01
The capability to track material interfaces, especially in fluid structure problems, is among the advantages of meshless methods. In the present paper, the Smoothed Particle Hydrodynamics (SPH) method is used to investigate elastic-plastic deformation of Al and ceramic-metal FGM (Functionally Graded Materials) plates under the impact of water in a fluid-solid interface. Instead of using an ad hoc repulsive force, which is not stable at higher pressures, a new scheme is proposed to improve the interface contact behavior between the fluid and the solid structure. This treatment not only prevents the interpenetration of fluid and solid particles significantly, but also maintains the gap distance between fluid and solid boundary particles in a reasonable range. A scheme called the corrected smoothed particle method (CSPM) is applied to both fluid and solid particles to improve the free surface behavior. In order to have a more realistic free surface behavior in the fluid, a technique is used to detect the free surface boundary particles during the solution process. The results indicate that using the proposed interface algorithm together with the CSPM correction, one can predict the dynamic behavior of FGM plates under fluid impact very well.
Research activities in applied mathematics, fluid mechanics, and computer science
NASA Technical Reports Server (NTRS)
1995-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period April 1, 1995 through September 30, 1995.
Research in Applied Mathematics, Fluid Mechanics and Computer Science
NASA Technical Reports Server (NTRS)
1999-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1998 through March 31, 1999.
Computational fluid dynamics applications to improve crop production systems
Technology Transfer Automated Retrieval System (TEKTRAN)
Computational fluid dynamics (CFD), a set of numerical analysis and simulation tools for fluid flow processes, has emerged from the development stage and has nowadays become a robust design tool. It is widely used to study various transport phenomena which involve fluid flow, heat and mass transfer, providing det...
Computational thermal, chemical, fluid, and solid mechanics for geosystems management.
Davison, Scott; Alger, Nicholas; Turner, Daniel Zack; Subia, Samuel Ramirez; Carnes, Brian; Martinez, Mario J.; Notz, Patrick K.; Klise, Katherine A.; Stone, Charles Michael; Field, Richard V., Jr.; Newell, Pania; Jove-Colon, Carlos F.; Red-Horse, John Robert; Bishop, Joseph E.; Dewers, Thomas A.; Hopkins, Polly L.; Mesh, Mikhail; Bean, James E.; Moffat, Harry K.; Yoon, Hongkyu
2011-09-01
This document summarizes research performed under the SNL LDRD entitled "Computational Mechanics for Geosystems Management to Support the Energy and Natural Resources Mission". The main accomplishment was development of a foundational SNL capability for computational thermal, chemical, fluid, and solid mechanics analysis of geosystems. The code was developed within the SNL Sierra software system. This report summarizes the capabilities of the simulation code and the supporting research and development conducted under this LDRD. The main goal of this project was the development of a foundational capability for coupled thermal, hydrological, mechanical, chemical (THMC) simulation of heterogeneous geosystems utilizing massively parallel processing. To solve these complex issues, this project integrated research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled solution control and framework architecture. This report summarizes and demonstrates the capabilities that were developed together with the supporting research underlying the models. Key accomplishments are: (1) General capability for modeling nonisothermal, multiphase, multicomponent flow in heterogeneous porous geologic materials; (2) General capability to model multiphase reactive transport of species in heterogeneous porous media; (3) Constitutive models for describing real, general geomaterials under multiphase conditions utilizing laboratory data; (4) General capability to couple nonisothermal reactive flow with geomechanics (THMC); (5) Phase behavior thermodynamics for the CO2-H2O-NaCl system. A general implementation enables modeling of other fluid mixtures, and adaptive look-up tables extend the thermodynamic capability to other simulators; (6) Capability for statistical modeling of heterogeneity in geologic materials; and (7) Simulator utilizes unstructured grids on parallel processing computers.
Computational algorithms to predict Gene Ontology annotations
2015-01-01
Background: Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues: they are incomplete, since biological knowledge is far from definitive and rapidly evolves, and some erroneous annotations may be present. Since the curation of novel annotations is a costly procedure, in terms of both money and time, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods: We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose improving these algorithms by weighting the annotations in the input set. Results: We tested our methods and their weighted variants on the Gene Ontology annotation sets of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability to predict novel gene annotations, and the weighting procedures were shown to yield a valuable improvement, although the obtained results vary with the size of the input annotation set and the considered algorithm. Conclusions: Of the three considered methods, the Semantic IMproved Latent Semantic Analysis provides the best results. In particular, when coupled with a proper
SALE2D. General Transient Fluid Flow Algorithm
Amsden, A.A.; Ruppel, H.M.; Hirt, C.W.
1981-06-01
SALE2D calculates two-dimensional fluid flows at all speeds, from the incompressible limit to highly supersonic. An implicit treatment of the pressure calculation similar to that in the Implicit Continuous-fluid Eulerian (ICE) technique provides this flow speed flexibility. In addition, the computing mesh may move with the fluid in a typical Lagrangian fashion, be held fixed in an Eulerian manner, or move in some arbitrarily specified way to provide a continuous rezoning capability. This latitude results from use of an Arbitrary Lagrangian-Eulerian (ALE) treatment of the mesh. The partial differential equations solved are the Navier-Stokes equations and the mass and internal energy equations. The fluid pressure is determined from an equation of state and supplemented with an artificial viscous pressure for the computation of shock waves. The computing mesh consists of a two-dimensional network of quadrilateral cells for either cylindrical or Cartesian coordinates, and a variety of user-selectable boundary conditions are provided in the program.
Algorithms for Computing the Lag Function.
1981-03-27
and S. J. Giner. Subject: Algorithms for Computing the Lag Function. References: See p. 27. Abstract: This memorandum provides a scheme for the numerical ... highly oscillatory, and with singularities at the end points.
Computing Properties Of Pure And Mixed Fluids
NASA Technical Reports Server (NTRS)
Fowler, J. R.; Hendricks, Robert C.
1993-01-01
GASPLUS was created as a two-part code: the first part is designed for use with pure fluids and the second for use with mixtures of fluids and phases. It offers routines for mathematical modeling of the conditions of fluids in pumps, turbines, compressors, and other machines. Other routines for calculating the performance of a para/ortho-hydrogen reactor and the heat of the para/normal-hydrogen reaction, as well as a unique convergence routine, demonstrate the engineering flavor of GASPLUS. Written in FORTRAN 77.
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
The Geometric Cluster Algorithm: Rejection-Free Monte Carlo Simulation of Complex Fluids
NASA Astrophysics Data System (ADS)
Luijten, Erik
2005-03-01
The study of complex fluids is an area of intense research activity, in which exciting and counter-intuitive behaviors continue to be uncovered. Ironically, one of the very factors responsible for such interesting properties, namely the presence of multiple relevant time and length scales, often greatly complicates accurate theoretical calculations and computer simulations that could explain the observations. We have recently developed a new Monte Carlo simulation method [J. Liu and E. Luijten, Phys. Rev. Lett. 92, 035504 (2004); see also Physics Today, March 2004, pp. 25-27] that overcomes this problem for several classes of complex fluids. Our approach can accelerate simulations by orders of magnitude by introducing nonlocal, collective moves of the constituents. Strikingly, these cluster Monte Carlo moves are proposed in such a manner that the algorithm is rejection-free. The identification of the clusters is based upon geometric symmetries and can be considered as the off-lattice generalization of the widely-used Swendsen-Wang and Wolff algorithms for lattice spin models. While phrased originally for complex fluids that are governed by the Boltzmann distribution, the geometric cluster algorithm can be used to efficiently sample configurations from an arbitrary underlying distribution function and may thus be applied in a variety of other areas. In addition, I will briefly discuss various extensions of the original algorithm, including methods to influence the size of the clusters that are generated and ways to introduce density fluctuations.
Physical aspects of computing the flow of a viscous fluid
NASA Technical Reports Server (NTRS)
Mehta, U. B.
1984-01-01
One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.
Parallel Domain Decomposition Preconditioning for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Kutler, Paul (Technical Monitor)
1998-01-01
This viewgraph presentation gives an overview of the parallel domain decomposition preconditioning for computational fluid dynamics. Details are given on some difficult fluid flow problems, stabilized spatial discretizations, and Newton's method for solving the discretized flow equations. Schur complement domain decomposition is described through basic formulation, simplifying strategies (including iterative subdomain and Schur complement solves, matrix element dropping, localized Schur complement computation, and supersparse computations), and performance evaluation.
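The basic Schur complement formulation mentioned in the abstract is easy to state concretely: eliminating the interior unknowns u from the block system [[A, B], [C, D]](u; y) = (f1; f2) leaves an interface system S y = f2 - C A^-1 f1 with S = D - C A^-1 B. A minimal sketch with small hypothetical 2x2 blocks (all names and values are ours, not from the presentation):

```python
def inv2(M):
    # Analytic inverse of a 2x2 matrix.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(2)) for i in range(2)]

def sub(a, b):
    return [ai - bi for ai, bi in zip(a, b)]

def msub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

# Hypothetical block system [[A, B], [C, D]](u; y) = (f1; f2):
# u = interior unknowns, y = interface unknowns.
A = [[4.0, 1.0], [1.0, 3.0]]
B = [[0.5, 0.0], [0.0, 0.5]]
C = [[0.5, 0.0], [0.0, 0.5]]
D = [[3.0, 1.0], [1.0, 2.0]]
f1, f2 = [1.0, 2.0], [3.0, 4.0]

Ainv = inv2(A)
S = msub(D, matmul(C, matmul(Ainv, B)))   # Schur complement S = D - C A^-1 B
g = sub(f2, matvec(C, matvec(Ainv, f1)))  # reduced interface right-hand side
y = matvec(inv2(S), g)                    # interface solve
u = matvec(Ainv, sub(f1, matvec(B, y)))   # back-substitute for the interior
```

Solving the reduced interface system first and then back-substituting for the interior unknowns reproduces the solution of the full block system, which is the basis for the iterative subdomain and Schur complement solves listed above.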
Parallel algorithm for computing points on a computation front hyperplane
NASA Astrophysics Data System (ADS)
Krasnov, M. M.
2015-01-01
A parallel algorithm for computing points on a computation front hyperplane is described. This task arises in the computation of a quantity defined on a multidimensional rectangular domain. Three-dimensional domains are usually discussed, but the material is given in the general form when the number of dimensions is at least two. When the values of a quantity at different points are internally independent (which is frequently the case), the corresponding computations are independent as well and can be performed in parallel. However, if there are internal dependences (as, for example, in the Gauss-Seidel method for systems of linear equations), then the order of scanning points of the domain is an important issue. A conventional approach in this case is to form a computation front hyperplane (a usual plane in the three-dimensional case and a line in the two-dimensional case) that moves linearly across the domain at a certain angle. At every step in the course of motion of this hyperplane, its intersection points with the domain can be treated independently and, hence, in parallel, but the steps themselves are executed sequentially. At different steps, the intersection of the hyperplane with the entire domain can have a rather complex geometry, and the search for all points of the domain lying on the hyperplane at a given step is a nontrivial problem. This problem (i.e., the computation of the coordinates of points lying in the intersection of the domain with the hyperplane at a given step in the course of hyperplane motion) is addressed below. The computations over the points of the hyperplane can be executed in parallel.
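The sweep described in this abstract can be illustrated in a few lines: points whose coordinate sum equals the current step lie on the moving front and are mutually independent, while the steps themselves run sequentially. A minimal sketch (our own illustration, not the paper's algorithm):

```python
def wavefront_points(shape, step):
    """Yield all points of a rectangular domain `shape` whose coordinate
    sum equals `step`; these points lie on the computation front hyperplane
    and can be processed in parallel within the step."""
    def rec(prefix, dims, remaining):
        if not dims:
            if remaining == 0:
                yield tuple(prefix)
            return
        # Clip the coordinate range so the remaining sum stays feasible.
        slack = sum(d - 1 for d in dims[1:])
        lo = max(0, remaining - slack)
        hi = min(dims[0] - 1, remaining)
        for c in range(lo, hi + 1):
            yield from rec(prefix + [c], dims[1:], remaining - c)
    yield from rec([], list(shape), step)

shape = (4, 3, 5)
# Sweeping step = 0 .. sum(n_d - 1) visits every point of the domain once.
steps = [list(wavefront_points(shape, s))
         for s in range(sum(d - 1 for d in shape) + 1)]
```

Within each `steps[s]` the points may be updated concurrently (e.g., a Gauss-Seidel sweep whose stencil only reads lower-indexed neighbors), while the outer loop over `s` stays sequential.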
Experiment for validation of fluid-structure interaction models and algorithms.
Hessenthaler, A; Gaddum, N R; Holub, O; Sinkus, R; Röhrle, O; Nordsletten, D
2016-11-04
In this paper a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-setup FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks. Steady-state and periodic steady-state test cases with constant and periodic inflow were established. Focus of the experiment is on biomedical engineering applications with flow being in the laminar regime with Reynolds numbers 1283 and 651. Flow and solid domains were defined using computer-aided design (CAD) tools. The experimental design aimed at providing a straightforward boundary condition definition. Material parameters and mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by using magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion.
Thermodynamic cost of computation, algorithmic complexity and the information metric
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1989-01-01
Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
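Kolmogorov (algorithmic) complexity itself is uncomputable, but its flavor is easy to demonstrate with a general-purpose compressor as a crude upper bound, including the compression-based notion of distance between binary strings; this is a standard illustration, not Zurek's construction:

```python
import random
import zlib

def c(s: bytes) -> int:
    # Compressed length: a computable (loose) upper bound on the
    # algorithmic complexity of the string s.
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: strings that share structure
    # compress well together, giving a small distance.
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

random.seed(0)
regular = b"01" * 500                                      # highly patterned
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # near-incompressible
```

The patterned string compresses to far fewer bytes than the random one, mirroring the idea that randomness is what resists short description.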
Type II Quantum Computing Algorithm For Computational Fluid Dynamics
2006-03-01
is the Moore-Penrose pseudoinverse [30]. Yepez's generalized inverse for Ĵ is ... The second method is to multiply both sides of (4.27) by a "generalized inverse" Ĵ⁻¹_gen, which Yepez has invented. This matrix is similar to the Moore ... his generalized inverse. The generalized inverse is analogous to the inverse of a nonsingular square matrix, M⁻¹ = SΛ⁻¹S⁻¹. Yepez uses an
Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion
NASA Technical Reports Server (NTRS)
Williams, R. W. (Compiler)
1993-01-01
Conference publication includes 79 abstracts and presentations and 3 invited presentations given at the Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion held at George C. Marshall Space Flight Center, April 20-22, 1993. The purpose of the workshop is to discuss experimental and computational fluid dynamic activities in rocket propulsion. The workshop is an open meeting for government, industry, and academia. A broad number of topics are discussed including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.
Computational thermo-fluid analysis of a disk brake
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Kuraishi, Takashi; Tabata, Shinichiro; Takagi, Hirokazu
2016-06-01
We present computational thermo-fluid analysis of a disk brake, including thermo-fluid analysis of the flow around the brake and heat conduction analysis of the disk. The computational challenges include proper representation of the small-scale thermo-fluid behavior, high-resolution representation of the thermo-fluid boundary layers near the spinning solid surfaces, and bringing the heat transfer coefficient (HTC) calculated in the thermo-fluid analysis of the flow to the heat conduction analysis of the spinning disk. The disk brake model used in the analysis closely represents the actual configuration, and this adds to the computational challenges. The components of the method we have developed for computational analysis of the class of problems with these types of challenges include the Space-Time Variational Multiscale method for coupled incompressible flow and thermal transport, ST Slip Interface method for high-resolution representation of the thermo-fluid boundary layers near spinning solid surfaces, and a set of projection methods for different parts of the disk to bring the HTC calculated in the thermo-fluid analysis to the heat conduction analysis. With the HTC coming from the thermo-fluid analysis of the flow around the brake, we do the heat conduction analysis of the disk, from the start of the braking until the disk spinning stops, demonstrating how the method developed works in computational analysis of this complex and challenging problem.
Detecting Neonatal Seizures With Computer Algorithms.
Temko, Andriy; Lightbody, Gordon
2016-10-01
It is now generally accepted that EEG is the only reliable way to accurately detect newborn seizures and, as such, prolonged EEG monitoring is increasingly being adopted in neonatal intensive care units. Long EEG recordings may last from several hours to a few days. With neurophysiologists not always available to review the EEG during unsociable hours, there is a pressing need to develop a reliable and robust automatic seizure detection method-a computer algorithm that can take the EEG signal, process it, and output information that supports clinical decision making. In this study, we review existing algorithms based on how the relevant seizure information is exploited. We start with commonly used methods to extract signatures from seizure signals that range from those that mimic the clinical neurophysiologist to those that exploit mathematical models of neonatal EEG generation. Commonly used classification methods are reviewed that are based on a set of rules and thresholds that are either heuristically tuned or automatically derived from the data. These are followed by techniques to use information about spatiotemporal seizure context. The usual errors in system design and validation are discussed. Current clinical decision support tools that have met regulatory requirements and are available to detect neonatal seizures are reviewed with progress and the outstanding challenges are outlined. This review discusses the current state of the art regarding automatic detection of neonatal seizures.
Fast algorithm for computing complex number-theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix FFT algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
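The property such transforms exploit is common to all number-theoretic transforms: pointwise multiplication in the transform domain equals circular convolution, exactly, with no floating-point roundoff. A toy direct transform over a small prime field (mod 17 with an order-4 root of unity; a generic sketch, not the paper's high-radix construction over an extension field with a Mersenne prime):

```python
P, N, W = 17, 4, 4      # modulus, length, primitive N-th root of unity (4**4 = 1 mod 17)
W_INV, N_INV = 13, 13   # inverse root and inverse of N modulo P (4 * 13 = 1 mod 17)

def ntt(a, root):
    # Direct O(N^2) number-theoretic transform modulo P.
    return [sum(a[j] * pow(root, j * k, P) for j in range(N)) % P for k in range(N)]

def circ_conv_ntt(a, b):
    # Pointwise product in the transform domain, then inverse transform,
    # yields the exact circular convolution modulo P.
    A, B = ntt(a, W), ntt(b, W)
    C = [(x * y) % P for x, y in zip(A, B)]
    return [(N_INV * c) % P for c in ntt(C, W_INV)]

def circ_conv_direct(a, b):
    return [sum(a[j] * b[(k - j) % N] for j in range(N)) % P for k in range(N)]
```

A fast (FFT-style) factorization of the same transform is what reduces the multiplication count; the toy version above only shows the algebraic identity.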
Earth Tide Algorithms for the OMNIS Computer Program System.
1986-04-01
This report presents five computer algorithms that jointly specify the gravitational action by which the tidal redistributions of the Earth’s masses...routine is a simplified version of the fourth and is provided for use during computer program verification. All computer algorithms express the tidal
Two algorithms to compute projected correlation functions in molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Carof, Antoine; Vuilleumier, Rodolphe; Rotenberg, Benjamin
2014-03-01
An explicit derivation of the Mori-Zwanzig orthogonal dynamics of observables is presented and leads to two practical algorithms to compute exactly projected observables (e.g., random noise) and projected correlation function (e.g., memory kernel) from a molecular dynamics trajectory. The algorithms are then applied to study the diffusive dynamics of a tagged particle in a Lennard-Jones fluid, the properties of the associated random noise, and a decomposition of the corresponding memory kernel.
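For orientation, the ordinary (unprojected) time-correlation estimator from a discrete trajectory, averaged over time origins, is the baseline that the projected quantities refine; a generic estimator sketch (not the paper's orthogonal-dynamics algorithms):

```python
import math

def time_correlation(x, y, max_lag):
    """Estimate C(t) = <x(0) y(t)> from equally spaced trajectory samples,
    averaging over all available time origins."""
    n = len(x)
    return [sum(x[i] * y[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag + 1)]

# Synthetic 'velocity' trace: a damped cosine, whose autocorrelation
# is again oscillatory and decaying.
v = [math.exp(-0.01 * t) * math.cos(0.3 * t) for t in range(2000)]
acf = time_correlation(v, v, 50)
```

The memory kernel of the generalized Langevin equation is built from correlation functions of this general shape, but with the projected (orthogonal) dynamics that the paper's two algorithms compute.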
Two and Three-Dimensional Nonlocal DFT for Inhomogeneous Fluids I: Algorithms and Parallelization
Frink, Laura J. Douglas; Salinger, Andrew
1999-08-09
Fluids adsorbed near surfaces, macromolecules, and in porous materials are inhomogeneous, exhibiting spatially varying density distributions. This inhomogeneity in the fluid plays an important role in controlling a wide variety of complex physical phenomena including wetting, self-assembly, corrosion, and molecular recognition. One of the key methods for studying the properties of inhomogeneous fluids in simple geometries has been density functional theory (DFT). However, there has been a conspicuous lack of calculations in complex 2D and 3D geometries. The computational difficulty arises from the need to perform nested integrals that are due to nonlocal terms in the free energy functional. These integral equations are expensive both in evaluation time and in memory requirements; however, the expense can be mitigated by intelligent algorithms and the use of parallel computers. This paper details our efforts to develop efficient numerical algorithms so that nonlocal DFT calculations in complex geometries requiring two or three dimensions can be performed. The success of this implementation will enable the study of solvation effects at heterogeneous surfaces, in zeolites, in solvated (bio)polymers, and in colloidal suspensions.
ADDRESSING ENVIRONMENTAL ENGINEERING CHALLENGES WITH COMPUTATIONAL FLUID DYNAMICS
This paper discusses the status and application of Computational Fluid Dynamics (CFD) models to address environmental engineering challenges for more detailed understanding of air pollutant source emissions, atmospheric dispersion and resulting human exposure. CFD simulations ...
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
Dong, S.
2015-02-15
We present a family of physical formulations, and a numerical algorithm, based on a class of general order parameters for simulating the motion of a mixture of N (N⩾2) immiscible incompressible fluids with given densities, dynamic viscosities, and pairwise surface tensions. The N-phase formulations stem from a phase field model we developed in a recent work based on the conservations of mass/momentum, and the second law of thermodynamics. The introduction of general order parameters leads to an extremely strongly-coupled system of (N−1) phase field equations. On the other hand, the general form enables one to compute the N-phase mixing energy density coefficients in an explicit fashion in terms of the pairwise surface tensions. We show that the increased complexity in the form of the phase field equations associated with general order parameters in actuality does not cause essential computational difficulties. Our numerical algorithm reformulates the (N−1) strongly-coupled phase field equations for general order parameters into 2(N−1) Helmholtz-type equations that are completely de-coupled from one another. This leads to a computational complexity comparable to that for the simplified phase field equations associated with certain special choice of the order parameters. We demonstrate the capabilities of the method developed herein using several test problems involving multiple fluid phases and large contrasts in densities and viscosities among the multitude of fluids. In particular, by comparing simulation results with the Langmuir–de Gennes theory of floating liquid lenses we show that the method using general order parameters produces physically accurate results for multiple fluid phases.
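The practical benefit of the reformulation into decoupled Helmholtz-type equations is that each one can be solved independently with standard fast solvers. As a one-dimensional stand-in (a generic sketch, unrelated to the paper's actual spatial discretization), a constant-coefficient Helmholtz problem u'' - lam*u = f with homogeneous Dirichlet data reduces to a single tridiagonal solve:

```python
import math

def thomas(lower, diag, upper, rhs):
    # Thomas algorithm for a tridiagonal linear system.
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * cp[i - 1]
        cp[i] = upper[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - lower[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_helmholtz_1d(lam, f, n):
    # Second-order finite differences for u'' - lam*u = f on (0, 1), u(0) = u(1) = 0.
    h = 1.0 / (n + 1)
    diag = [-2.0 / h**2 - lam] * n
    off = [1.0 / h**2] * n
    rhs = [f((i + 1) * h) for i in range(n)]
    return thomas(off, diag, off, rhs)

lam = 1.0
exact = lambda x: math.sin(math.pi * x)
f = lambda x: -(math.pi**2 + lam) * math.sin(math.pi * x)  # so that u'' - lam*u = f
u = solve_helmholtz_1d(lam, f, 199)
```

The computed solution converges to sin(pi*x) at second order as the grid is refined, which is why decoupling the phase field system into such equations keeps the per-step cost low.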
Sorting on STAR. [CDC computer algorithm timing comparison
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
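What makes Batcher's method attractive on vector machines is that it is a sorting network: a fixed, data-independent schedule of compare-exchange operations. A compact iterative form of odd-even mergesort for power-of-two lengths (a textbook formulation, not the STAR code):

```python
def oddeven_merge_sort(seq):
    # Batcher's odd-even mergesort: O(N (log N)^2) compare-exchange
    # operations arranged in a fixed, data-independent pattern.
    a = list(seq)
    n = len(a)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    p = 1
    while p < n:
        k = p
        while k >= 1:
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    # Only compare within the same 2p-sized block.
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        if a[i + j] > a[i + j + k]:
                            a[i + j], a[i + j + k] = a[i + j + k], a[i + j]
            k //= 2
        p *= 2
    return a
```

Within one (p, k) pass the element pairs are disjoint, so on a vector or parallel machine all compare-exchanges of a pass can issue at once, which is the property the abstract's timing results reflect.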
Numerical simulation of landfill aeration using computational fluid dynamics.
Fytanidis, Dimitrios K; Voudrias, Evangelos A
2014-04-01
The present study is an application of Computational Fluid Dynamics (CFD) to the numerical simulation of landfill aeration systems. Specifically, the CFD algorithms provided by the commercial solver ANSYS Fluent 14.0, combined with an in-house source code developed to modify the main solver, were used. The unsaturated multiphase flow of air and liquid phases and the biochemical processes for aerobic biodegradation of the organic fraction of municipal solid waste were simulated taking into consideration their temporal and spatial evolution, as well as complex effects, such as oxygen mass transfer across phases, unsaturated flow effects (capillary suction and unsaturated hydraulic conductivity), temperature variations due to biochemical processes and environmental correction factors for the applied kinetics (Monod and 1st order kinetics). The developed model results were compared with literature experimental data. Also, pilot scale simulations and sensitivity analysis were implemented. Moreover, simulation results of a hypothetical single aeration well were shown, while its zone of influence was estimated using both the pressure and oxygen distribution. Finally, a case study was simulated for a hypothetical landfill aeration system. Both a static (steadily positive or negative relative pressure with time) and a hybrid (following a square wave pattern of positive and negative values of relative pressure with time) scenarios for the aeration wells were examined. The results showed that the present model is capable of simulating landfill aeration and the obtained results were in good agreement with corresponding previous experimental and numerical investigations.
Potential applications of computational fluid dynamics to biofluid analysis
NASA Technical Reports Server (NTRS)
Kwak, D.; Chang, J. L. C.; Rogers, S. E.; Rosenfeld, M.; Kwak, D.
1988-01-01
Computational fluid dynamics was developed to the stage where it has become an indispensable part of aerospace research and design. In view of advances made in aerospace applications, the computational approach can be used for biofluid mechanics research. Several flow simulation methods developed for aerospace problems are briefly discussed for potential applications to biofluids, especially to blood flow analysis.
New developments in adaptive methods for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Oden, J. T.; Bass, Jon M.
1990-01-01
New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.
Computational Fluid Dynamics-Based Design Optimization Method for Archimedes Screw Blood Pumps.
Yu, Hai; Janiga, Gábor; Thévenin, Dominique
2016-04-01
An optimization method suitable for improving the performance of Archimedes screw axial rotary blood pumps is described in the present article. In order to achieve a more robust design and to save computational resources, this method combines the advantages of the established pump design theory with modern computer-aided, computational fluid dynamics (CFD)-based design optimization (CFD-O) relying on evolutionary algorithms and computational fluid dynamics. The main purposes of this project are to: (i) integrate pump design theory within the already existing CFD-based optimization; (ii) demonstrate that the resulting procedure is suitable for optimizing an Archimedes screw blood pump in terms of efficiency. Results obtained in this study demonstrate that the developed tool is able to meet both objectives. Finally, the resulting level of hemolysis can be numerically assessed for the optimal design, as hemolysis is an issue of overwhelming importance for blood pumps.
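The evolutionary-algorithm half of such a CFD-O loop can be caricatured in a few lines: candidates are mutated, scored by an objective (here a cheap analytic stand-in for an expensive CFD evaluation of, say, pump efficiency), and the better one survives. A hypothetical (1+1) evolution strategy sketch, not the study's actual optimizer:

```python
import random

def one_plus_one_es(objective, x0, sigma=0.1, iters=500, seed=1):
    # Minimal (1+1) evolution strategy: Gaussian mutation, greedy survival.
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = objective(cand)
        if fc <= fx:  # keep the better of parent and offspring
            x, fx = cand, fc
    return x, fx

# Toy objective standing in for a CFD evaluation: distance from an optimum.
best_x, best_f = one_plus_one_es(lambda v: sum(t * t for t in v), [1.0, 1.0])
```

Real CFD-O replaces the analytic objective with a full flow simulation per candidate, which is why pump design theory is used here to narrow the search space before the evolutionary loop runs.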
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
NASA Astrophysics Data System (ADS)
Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi
The COMPASS code is designed based on the moving particle semi-implicit method to simulate various complex mesoscale phenomena relevant to core disruptive accidents of sodium-cooled fast reactors. In this study, a computational framework for fluid-solid mixture flow simulations was developed for the COMPASS code. The passively moving solid model was used to simulate hydrodynamic interactions between fluid and solids. Mechanical interactions between solids were modeled by the distinct element method. A multi-time-step algorithm was introduced to couple these two calculations. The proposed computational framework for fluid-solid mixture flow simulations was verified by comparison between experimental and numerical studies of a water dam break with multiple solid rods.
Parallel algorithms and architecture for computation of manipulator forward dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel computation of manipulator forward dynamics is investigated. Parallelism in the problem is analyzed for three classes of solution algorithms: the O(n), the O(n exp 2), and the O(n exp 3) algorithms. It is shown that the problem belongs to the class NC and that the time and processor bounds are of O((log2 n) exp 2) and O(n exp 4), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n exp 3) serial algorithms. Parallel computation of the O(n exp 3) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.
Woodward, P. R.
2003-03-26
This report summarizes the results of the project entitled "Piecewise-Parabolic Methods for Parallel Computation with Applications to Unstable Fluid Flow in 2 and 3 Dimensions." The project covers a span of many years, beginning in early 1987. Over that considerable period it has provided the core funding for my research activities in scientific computation at the University of Minnesota. It has supported numerical algorithm development, application of those algorithms to fundamental fluid dynamics problems in order to demonstrate their effectiveness, and the development of scientific visualization software and systems to extract scientific understanding from those applications.
Some rotorcraft applications of computational fluid dynamics
NASA Technical Reports Server (NTRS)
Mccroskey, W. J.
1988-01-01
The growing application of computational aerodynamics to nonlinear rotorcraft problems is outlined, with particular emphasis on the development of new methods based on the Euler and thin-layer Navier-Stokes equations. Rotor airfoil characteristics can now be calculated accurately over a wide range of transonic flow conditions. However, unsteady 3-D viscous codes remain in the research stage, and a numerical simulation of the complete flow field about a helicopter in forward flight is not now feasible. Nevertheless, impressive progress is being made in preparation for future supercomputers that will enable meaningful calculations to be made for arbitrary rotorcraft configurations.
Computational fluid dynamics combustion analysis evaluation
NASA Technical Reports Server (NTRS)
Kim, Y. M.; Shang, H. M.; Chen, C. P.; Ziebarth, J. P.
1992-01-01
This study involves the development of numerical modelling in spray combustion. These modelling efforts are mainly motivated by the need to improve computational efficiency in the stochastic particle tracking method as well as to incorporate the physical submodels of turbulence, combustion, vaporization, and dense spray effects. The present mathematical formulation and numerical methodologies can be cast into any time-marching pressure correction methodology (PCM), such as the FDNS code and the MAST code. A sequence of validation cases involving steady burning sprays and transient evaporating sprays is included.
New SIMD Algorithms for Cluster Labeling on Parallel Computers
NASA Astrophysics Data System (ADS)
Apostolakis, John; Coddington, Paul; Marinari, Enzo
Cluster algorithms are non-local Monte Carlo update schemes which can greatly increase the efficiency of computer simulations of spin models of magnets. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms, and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two-dimensional Ising spin model. These algorithms could also be applied to other problems which use connected component labeling, such as percolation and image analysis.
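The connected component labeling at the heart of these cluster algorithms can be stated compactly with a serial union-find pass over the lattice; the paper's SIMD algorithms are organized quite differently, so this is only a reference implementation of the task itself:

```python
def label_clusters(grid):
    """Label 4-connected clusters of occupied sites (value 1) on a 2D
    lattice using union-find; returns a dict (i, j) -> cluster root."""
    parent = {}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    rows, cols = len(grid), len(grid[0])
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                parent[(i, j)] = (i, j)
                # Merge with already-visited occupied neighbors.
                if i and grid[i - 1][j]:
                    union((i - 1, j), (i, j))
                if j and grid[i][j - 1]:
                    union((i, j - 1), (i, j))
    return {site: find(site) for site in parent}
```

In a cluster-update Monte Carlo sweep, each resulting cluster would then be flipped as a unit with the appropriate probability.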
NASA Astrophysics Data System (ADS)
Hari, Sridhar
2003-07-01
In this study, the commercially available Computational Fluid Dynamics (CFD) software CFX-4.4 has been used for simulations of aerosol transport through various aerosol-sampling devices. Aerosol transport was modeled as a classical dilute, dispersed two-phase flow problem. An Eulerian-Lagrangian framework was adopted wherein the fluid was treated as the continuous phase and the aerosol as the dispersed phase, with one-way coupling between the phases. Initially, the performance of the particle transport algorithm implemented in the code was validated against available experimental and numerical data in the literature. Code predictions were found to be in good agreement with experimental data and previous numerical predictions. As a next step, the code was used as a tool to optimize the performance of a virtual impactor prototype. Suggestions on critical geometrical details available in the literature for a virtual impactor were numerically investigated on the prototype, and the optimum set of parameters was determined. Performance curves were generated for the optimized design at various operating conditions. A computational model of the Linear Slot Virtual Impactor (LSVI), fabricated based on the optimization study, was constructed using the worst-case values of the measured geometrical parameters, with offsets in the horizontal and vertical planes. Simulations were performed on this model for the LSVI operating conditions. The behavior of various sized particles inside the impactor was illustrated with the corresponding particle tracks. Fair agreement was obtained between code predictions and experimental results. Important information on virtual impactor performance obtained from this study, not previously known or reported in the literature, is presented. In the final part of this study, simulations of aerosol deposition in turbulent pipe flow were performed. Code predictions were found to be completely uncorrelated with experimental data. The
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
Computationally efficient algorithms for real-time attitude estimation
NASA Technical Reports Server (NTRS)
Pringle, Steven R.
1993-01-01
For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden which may be avoided by suboptimal methods. A suboptimal estimator is presented which was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. This design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.
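One common suboptimal alternative, shown here purely as an illustration and not as the Delta Star flight algorithm, replaces the Kalman gain recursion with a precomputed constant gain, eliminating the covariance propagation that dominates the filter's cost:

```python
def fixed_gain_estimate(measurements, gain=0.2, x0=0.0):
    """Constant-gain recursive estimator: x <- x + K * (z - x).
    A precomputed constant K replaces the Kalman gain update,
    trading optimality for a fixed, small computational cost."""
    x = x0
    estimates = []
    for z in measurements:
        x = x + gain * (z - x)
        estimates.append(x)
    return estimates
```

With a well-chosen constant gain the estimate still converges to the measured quantity, just not at the statistically optimal rate.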
A novel bit-quad-based Euler number computing algorithm.
Yao, Bin; He, Lifeng; Kang, Shiying; Chao, Yuyan; Zhao, Xiao
2015-01-01
The Euler number of a binary image is an important topological property in computer vision and pattern recognition. This paper proposes a novel bit-quad-based Euler number computing algorithm. Based on graph theory and analysis of bit-quad patterns, our algorithm only needs to count two bit-quad patterns. Moreover, by use of the information obtained while processing the previous bit-quad, the average number of pixels to be checked for processing a bit-quad is only 1.75. Experimental results demonstrate that our method significantly outperforms conventional Euler number computing algorithms.
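The classical bit-quad approach that the paper improves upon is Gray's counting formula; a straightforward, unoptimized version, counting all relevant quad patterns rather than only the two the paper needs, can be sketched as:

```python
def euler_number(img):
    """Euler number (objects minus holes) of a binary image by bit-quad
    counting: E = (Q1 - Q3 + 2*QD) / 4 for 4-connectivity (Gray's method)."""
    rows, cols = len(img), len(img[0])
    # Pad with a border of zeros so every pixel appears in four 2x2 quads.
    padded = [[0] * (cols + 2)]
    for row in img:
        padded.append([0] + list(row) + [0])
    padded.append([0] * (cols + 2))
    q1 = q3 = qd = 0
    for i in range(rows + 1):
        for j in range(cols + 1):
            quad = (padded[i][j], padded[i][j + 1],
                    padded[i + 1][j], padded[i + 1][j + 1])
            s = sum(quad)
            if s == 1:
                q1 += 1          # exactly one foreground pixel
            elif s == 3:
                q3 += 1          # exactly three foreground pixels
            elif s == 2 and quad[0] == quad[3]:
                qd += 1          # two foreground pixels on a diagonal
    return (q1 - q3 + 2 * qd) // 4  # sum is always divisible by 4
```

A solid square yields 1 (one object, no holes); a square ring yields 0 (one object, one hole).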
Some Contributions to Computational Fluid Dynamics.
NASA Astrophysics Data System (ADS)
Miller, Harvey Philip
A three-dimensional, time-dependent free surface model has been developed for predicting the velocity field and surface height variations in a tidal bay. An explicit finite difference numerical solution is obtained by transforming the vertical coordinate in the governing model equations. The ocean-bay interface open boundary condition is incorporated without approximation into the hydrodynamic model by employing a staggered grid Richardson lattice. The momentum equations ignore horizontal diffusion, which is justifiably small for the South Biscayne Bay. Another three-dimensional, time-dependent free surface model for the South Biscayne Bay is used for application to suspended particle transport. A unique mass-conserving numerical model is used for solving the concentration equation by an explicit finite difference scheme. The effects of constant particle settling velocity and bottom bed deposition rate are compared and discussed. For convection-dominated coastal flows, the flux-corrected transport (FCT) method is compared with other low-dispersive, explicit finite difference schemes for the two-dimensional linear advection of 2-D Gaussian initial temperature distributions of various half-widths. The flow field is specified a priori as consisting of a slowly varying, oscillating, uniform x-component of velocity and a constant y-component of velocity. This type of flow field is typically encountered in near-coastal waters. The artificial numerical effects of diffusion (dissipation), dispersion, and anisotropy are discussed. Finally, two-dimensional linear advection solutions of transported fluid temperature are explored by implementing high resolution, high order explicit finite difference schemes. A comparison of the flux-corrected transport (FCT) methods is made with other total variation diminishing (TVD) schemes for the 2-D Gaussian initial temperature distributions of various half-widths. Further clipping of the sharply peaked Gaussian distribution in 2-D
A Textbook for a First Course in Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zingg, D. W.; Pulliam, T. H.; Nixon, David (Technical Monitor)
1999-01-01
This paper describes and discusses the textbook, Fundamentals of Computational Fluid Dynamics by Lomax, Pulliam, and Zingg, which is intended for a graduate level first course in computational fluid dynamics. This textbook emphasizes fundamental concepts in developing, analyzing, and understanding numerical methods for the partial differential equations governing the physics of fluid flow. Its underlying philosophy is that the theory of linear algebra and the attendant eigenanalysis of linear systems provides a mathematical framework to describe and unify most numerical methods in common use in the field of fluid dynamics. Two linear model equations, the linear convection and diffusion equations, are used to illustrate concepts throughout. Emphasis is on the semi-discrete approach, in which the governing partial differential equations (PDE's) are reduced to systems of ordinary differential equations (ODE's) through a discretization of the spatial derivatives. The ordinary differential equations are then reduced to ordinary difference equations (O(Delta)E's) using a time-marching method. This methodology, using the progression from PDE through ODE's to O(Delta)E's, together with the use of the eigensystems of tridiagonal matrices and the theory of O(Delta)E's, gives the book its distinctiveness and provides a sound basis for a deep understanding of fundamental concepts in computational fluid dynamics.
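The semi-discrete progression from PDE through ODEs to difference equations can be made concrete on the linear convection equation u_t + a u_x = 0; this sketch uses first-order upwind spatial differences and explicit Euler time marching, a simpler combination than the book's central-difference eigenanalysis:

```python
def convect(u0, a, dx, dt, steps):
    """Method-of-lines (semi-discrete) sketch for u_t + a*u_x = 0 on a
    periodic grid: discretizing u_x with first-order upwind differences
    turns the PDE into the ODE system du/dt = f(u), which is then
    time-marched with explicit Euler."""
    n = len(u0)
    u = list(u0)
    for _ in range(steps):
        # Semi-discrete right-hand side: du_j/dt = -a*(u_j - u_{j-1})/dx.
        # u[j - 1] with j = 0 wraps around: periodic boundary condition.
        dudt = [-a * (u[j] - u[j - 1]) / dx for j in range(n)]
        u = [u[j] + dt * dudt[j] for j in range(n)]
    return u
```

At a Courant number a*dt/dx of exactly 1, this scheme propagates the profile one cell per step without error; at smaller values it is stable but diffusive.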
Use of computational fluid dynamics in respiratory medicine.
Fernández Tena, Ana; Casan Clarà, Pere
2015-06-01
Computational Fluid Dynamics (CFD) is a computer-based tool for simulating fluid movement. The main advantages of CFD over other fluid mechanics studies include: substantial savings in time and cost, the analysis of systems or conditions that are very difficult to simulate experimentally (as is the case of the airways), and a practically unlimited level of detail. We used the Ansys-Fluent CFD program to develop a conducting airway model to simulate different inspiratory flow rates and the deposition of inhaled particles of varying diameters, obtaining results consistent with those reported in the literature using other procedures. We hope this approach will enable clinicians to further individualize the treatment of different respiratory diseases.
A perspective on high-order methods in computational fluid dynamics
NASA Astrophysics Data System (ADS)
Wang, ZhiJian
2016-01-01
There has been an intensive international effort to develop high-order Computational Fluid Dynamics (CFD) methods into design tools in aerospace engineering during the last one and a half decades. These methods offer the potential to significantly improve solution accuracy and efficiency for vortex-dominated turbulent flows. Enough progress has been made in algorithm development, mesh generation, and parallel computing that these methods are on the verge of being applied in a production design environment. Since many review papers have been written on the subject, I have decided to offer a personal perspective on the state of the art in high-order CFD methods and the challenges that must be overcome.
Current capabilities and future directions in computational fluid dynamics
NASA Technical Reports Server (NTRS)
1986-01-01
A summary of significant findings is given, followed by specific recommendations for future directions of emphasis for computational fluid dynamics development. The discussion is organized into three application areas: external aerodynamics, hypersonics, and propulsion. It is followed by a turbulence modeling synopsis.
Parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Amin-Javaheri, Masoud; Orin, David E.
1989-01-01
The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method is developed that avoids redundant computation of position and orientation transforms for the manipulator. The O(log2N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.
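Recursive doubling for a first-order linear recurrence x_i = a_i * x_{i-1} + b_i treats each term as an affine map and composes maps over doubling spans. This serial sketch emulates the parallel levels, of which there are O(log2 N); on a parallel machine the inner loop would run as one simultaneous step:

```python
def recursive_doubling(a, b, x0):
    """Solve x[i] = a[i]*x[i-1] + b[i], with x[-1] = x0, by recursive
    doubling: each element holds an affine map x -> A*x + B, and maps
    over spans of length 1, 2, 4, ... are composed in log2(N) passes."""
    n = len(a)
    A, B = list(a), list(b)
    step = 1
    while step < n:
        A2, B2 = A[:], B[:]
        for i in range(step, n):
            # Compose the map at i with the prefix map ending at i - step.
            A2[i] = A[i] * A[i - step]
            B2[i] = A[i] * B[i - step] + B[i]
        A, B = A2, B2
        step *= 2
    # Each position now holds the full prefix map; apply it to x0.
    return [A[i] * x0 + B[i] for i in range(n)]
```

For example, a = [2, 2, 2], b = [1, 1, 1], x0 = 0 reproduces the sequential values 1, 3, 7.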
Woodruff, S.B.
1992-01-01
The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses algorithms, such as a neural net representation, that do not exhibit load-balancing problems.
Morphing-Based Shape Optimization in Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Rousseau, Yannick; Men'Shov, Igor; Nakamura, Yoshiaki
In this paper, a Morphing-based Shape Optimization (MbSO) technique is presented for solving Optimum-Shape Design (OSD) problems in Computational Fluid Dynamics (CFD). The proposed method couples Free-Form Deformation (FFD) and Evolutionary Computation, and, as its name suggests, relies on the morphing of shape and computational domain, rather than direct shape parameterization. Advantages of the FFD approach compared to traditional parameterization are first discussed. Then, examples of shape and grid deformations by FFD are presented. Finally, the MbSO approach is illustrated and applied through an example: the design of an airfoil for a future Mars exploration airplane.
A heterogeneous computing environment for simulating astrophysical fluid flows
NASA Technical Reports Server (NTRS)
Cazes, J.
1994-01-01
In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.
On the Use of Computers for Teaching Fluid Mechanics
NASA Technical Reports Server (NTRS)
Benson, Thomas J.
1994-01-01
Several approaches for improving the teaching of basic fluid mechanics using computers are presented. There are two objectives to these approaches: to increase the involvement of the student in the learning process and to present information to the student in a variety of forms. Items discussed include: the preparation of educational videos using the results of computational fluid dynamics (CFD) calculations, the analysis of CFD flow solutions using workstation based post-processing graphics packages, and the development of workstation or personal computer based simulators which behave like desk top wind tunnels. Examples of these approaches are presented along with observations from working with undergraduate co-ops. Possible problems in the implementation of these approaches as well as solutions to these problems are also discussed.
Algorithm implementation on the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Zang, Thomas A.
1987-01-01
The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.
Multiscale Computational Modeling of Bio-fluids in Real Anatomies and Microdevices
NASA Astrophysics Data System (ADS)
Trebotich, David; Miller, Greg
2004-11-01
We present new simulation results of bio-fluids in microfluidic devices and real anatomies using recently developed state-of-the-art computational fluid dynamics algorithms. These results include flows of both Newtonian and non-Newtonian (viscoelastic) continua as well as discrete particle chains embedded in the continuum. The flow domains considered for continuum flow are a stenotic carotid artery and a trachea which has undergone tracheostomy, where both geometries have been obtained from MRI images. These anatomical flows are highly resolved in both 2D and 3D. We also model DNA molecules in solution flowing through an extraction device used for amplification. We use a particle method where molecular chains are tightly coupled to the continuum via a hydrodynamic drag law such that the bulk fluid feels the effect of the particles.
Using artificial intelligence to control fluid flow computations
NASA Technical Reports Server (NTRS)
Gelsey, Andrew
1992-01-01
Computational simulation is an essential tool for the prediction of fluid flow. Many powerful simulation programs exist today. However, using these programs to reliably analyze fluid flow and other physical situations requires considerable human effort and expertise to set up a simulation, determine whether the output makes sense, and repeatedly run the simulation with different inputs until a satisfactory result is achieved. Automating this process is not only of considerable practical importance but will also significantly advance basic artificial intelligence (AI) research in reasoning about the physical world.
Application of computational fluid mechanics to atmospheric pollution problems
NASA Technical Reports Server (NTRS)
Hung, R. J.; Liaw, G. S.; Smith, R. E.
1986-01-01
One of the most noticeable effects of air pollution on the properties of the atmosphere is the reduction in visibility. This paper reports the results of investigations of the fluid dynamical and microphysical processes involved in the formation of advection fog on aerosols from combustion-related pollutants acting as condensation nuclei. The effects of a polydisperse aerosol distribution on the condensation/nucleation processes which cause the reduction in visibility are studied. This study demonstrates how computational fluid mechanics and heat transfer modeling can be applied to simulate the life cycle of atmospheric pollution problems.
Computer program for computing the properties of seventeen fluids. [cryogenic liquids
NASA Technical Reports Server (NTRS)
Brennan, J. A.; Friend, D. G.; Arp, V. D.; Mccarty, R. D.
1992-01-01
The present study describes modifications and additions to the MIPROPS computer program for calculating the thermophysical properties of 17 fluids. These changes include adding new fluids, new properties, and a new interface to the program. The new program allows the user to select the input and output parameters and the units to be displayed for each parameter. Fluids added to the MIPROPS program are carbon dioxide, carbon monoxide, deuterium, helium, normal hydrogen, and xenon. The most recent modifications to the MIPROPS program are the addition of viscosity and thermal conductivity correlations for parahydrogen and the addition of the fluids normal hydrogen and xenon. The recently added interface considerably increases the program's utility.
Genetic algorithms in a distributed computing environment using PVM
Cronje, G.A.; Steeb, W.H.
1997-04-01
The Parallel Virtual Machine (PVM) is a software system that enables a collection of heterogeneous computer systems to be used as a coherent and flexible concurrent computation resource. We show that genetic algorithms can be implemented using a Parallel Virtual Machine and C++. Problems with constraints are also discussed.
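A minimal serial genetic algorithm of the kind that would be distributed over PVM tasks can be sketched as follows. This is in Python rather than the paper's C++, and every detail (tournament selection, one-point crossover, the bit-flip mutation rate) is an illustrative choice, not taken from the paper; in a PVM setting each worker task would evaluate part of the population:

```python
import random

def genetic_maximize(fitness, n_bits=16, pop_size=30, generations=60,
                     p_mut=0.02, seed=1):
    """Minimal generational GA over bit strings: tournament selection,
    one-point crossover, and per-bit flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# "One-max" test problem: maximize the number of set bits.
best = genetic_maximize(lambda bits: sum(bits))
```

Constraints, as discussed in the paper, are typically handled by penalizing infeasible individuals in the fitness function.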
Limited-data computed tomography algorithms for the physical sciences.
Verhoeven, D
1993-07-10
Five limited-data computed tomography algorithms are compared. The algorithms used are adapted versions of the algebraic reconstruction technique, the multiplicative algebraic reconstruction technique, the Gerchberg-Papoulis algorithm, a spectral extrapolation algorithm descended from that of Harris [J. Opt. Soc. Am. 54, 931-936 (1964)], and an algorithm based on the singular value decomposition technique. These algorithms were used to reconstruct phantom data with realistic levels of noise from a number of different imaging geometries. The phantoms, the imaging geometries, and the noise were chosen to simulate the conditions encountered in typical computed tomography applications in the physical sciences, and the implementations of the algorithms were optimized for these applications. The multiplicative algebraic reconstruction technique algorithm gave the best results overall; the algebraic reconstruction technique gave the best results for very smooth objects or very noisy (20-dB signal-to-noise ratio) data. My implementations of both of these algorithms incorporate a priori knowledge of the sign of the object, its extent, and its smoothness. The smoothness of the reconstruction is enforced through the use of an appropriate object model (by use of cubic B-spline basis functions and a number of object coefficients appropriate to the object being reconstructed). The average reconstruction error was 1.7% of the maximum phantom value with the multiplicative algebraic reconstruction technique for a phantom with moderate-to-steep gradients, by use of data from five viewing angles with a 30-dB signal-to-noise ratio.
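The algebraic reconstruction technique at the heart of this comparison is the Kaczmarz row-action iteration: each equation a_i . x = b_i of the linearized projection system is visited in turn and the current estimate is projected onto its hyperplane. A bare-bones version, without the paper's spline basis or a priori constraints, is:

```python
def art_reconstruct(rows, b, n_iters=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz iteration): cycle
    through equations a_i . x = b_i, each time moving the estimate
    onto the hyperplane of one equation (relax=1 is full projection)."""
    n = len(rows[0])
    x = [0.0] * n
    for _ in range(n_iters):
        for a_i, b_i in zip(rows, b):
            dot = sum(a * v for a, v in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            c = relax * (b_i - dot) / norm2
            x = [v + c * a for a, v in zip(a_i, x)]
    return x
```

For a consistent system the iterates converge to a solution; the relaxation parameter is often reduced below 1 to damp noise in real projection data.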
Modeling fluid dynamics on type II quantum computers
NASA Astrophysics Data System (ADS)
Scoville, James; Weeks, David; Yepez, Jeffrey
2006-03-01
A quantum algorithm is presented for modeling the time evolution of density and flow fields governed by classical equations, such as the diffusion equation, the nonlinear Burgers equation, and the damped wave equation. The algorithm is intended to run on a type-II quantum computer, a parallel quantum computer consisting of a lattice of small type-I quantum computers undergoing unitary evolution and interacting via information interchanges represented by orthogonal matrices. Information is effectively transferred between adjacent quantum computers over classical communications channels because of controlled state demolition following local quantum mechanical qubit-qubit interactions within each quantum computer. The type-II quantum algorithm presented in this paper describes a methodology for generating quantum logic operations as a generalization of classical operations associated with finite-point group symmetries. The quantum mechanical evolution of multiple qubits within each node is described. Presented is a proof that the parallel quantum system obeys a finite-difference quantum Boltzmann equation at the mesoscopic scale, leading in turn to various classical linear and nonlinear effective field theories at the macroscopic scale depending on the details of the local qubit-qubit interactions.
Iterative restoration algorithms for nonlinear constraint computing
NASA Astrophysics Data System (ADS)
Szu, Harold
A general iterative-restoration principle is introduced to facilitate the implementation of nonlinear optical processors. The von Neumann convergence theorem is generalized to include nonorthogonal subspaces which can be reduced to a special orthogonal projection operator by applying an orthogonality condition. This principle is shown to permit derivation of the Jacobi algorithm, the recursive principle, the van Cittert (1931) deconvolution method, the iteration schemes of Gerchberg (1974) and Papoulis (1975), and iteration schemes using two Fourier conjugate domains (e.g., Fienup, 1981). Applications to restoring the image of a double star and division by hard and soft zeros are discussed, and sample results are presented graphically.
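The von Neumann alternating-projection principle behind these schemes can be demonstrated on the simplest case of two subspaces, here two lines through the origin in the plane, whose intersection is the origin:

```python
def project_onto_line(point, direction):
    """Orthogonal projection of a 2D point onto the line through the
    origin spanned by the given direction vector."""
    px, py = point
    dx, dy = direction
    t = (px * dx + py * dy) / (dx * dx + dy * dy)
    return (t * dx, t * dy)

def alternating_projections(point, dir_a, dir_b, n_iters=100):
    """von Neumann's method: alternately project onto two subspaces;
    the iterates converge to the projection of the starting point
    onto the intersection of the subspaces."""
    p = point
    for _ in range(n_iters):
        p = project_onto_line(p, dir_a)
        p = project_onto_line(p, dir_b)
    return p
```

Starting from (3, 4) and alternating between the x-axis and the line y = x, each round trip shrinks the iterate by the squared cosine of the angle between the lines, so the sequence converges to the origin.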
Decomposition algorithms for stochastic programming on a computational grid.
Linderoth, J.; Wright, S.; Mathematics and Computer Science; Axioma Inc.
2003-01-01
We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.
Some Computer Algorithms to Implement a Reliability Shorthand.
1982-10-01
Some Computer Algorithms to Implement a Reliability Shorthand. Thesis, Naval Postgraduate School, Monterey, California; author: Sadan Gursel; thesis advisor: J. D. Esary; October 1982.
Limited-data computed tomography algorithms for the physical sciences
NASA Astrophysics Data System (ADS)
Verhoeven, Dean
1993-07-01
Results are presented from a comparison of implementations of five computed tomography algorithms which were either designed expressly to work with, or have been shown to work with, limited data and which may be applied to a wide variety of objects. These include adapted versions of the algebraic reconstruction technique, the multiplicative algebraic reconstruction technique (MART), the Gerchberg-Papoulis algorithm, a spectral extrapolation algorithm derived from that of Harris (1964), and an algorithm based on the singular value decomposition technique. The algorithms were used to reconstruct phantom data with realistic levels of noise from a number of different imaging geometries. It was found that the MART algorithm has a combination of advantages that makes it superior to the other algorithms tested.
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
A computational model for doctoring fluid films in gravure printing
NASA Astrophysics Data System (ADS)
Hariprasad, Daniel S.; Grau, Gerd; Schunk, P. Randall; Tjiptowidjojo, Kristianto
2016-04-01
The wiping, or doctoring, process in gravure printing presents a fundamental barrier to resolving the micron-sized features desired in printed electronics applications. This barrier starts with the residual fluid film left behind after wiping, and its importance grows as feature sizes are reduced, especially as the feature size approaches the thickness of the residual fluid film. In this work, various mechanical complexities are considered in a computational model developed to predict the residual fluid film thickness. Lubrication models alone are inadequate, and deformation of the doctor blade body together with elastohydrodynamic lubrication must be considered to make the model predictive of experimental trends. Moreover, model results demonstrate that the particular form of the wetted region of the blade has a significant impact on the model's ability to reproduce experimental measurements.
Remote Visualization and Remote Collaboration On Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).
Progress and future directions in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Kutler, Paul; Gross, Anthony R.
1988-01-01
Computational fluid dynamics (CFD) has made great strides in the detailed simulation of complex fluid flows, including the fluid physics of flows heretofore not understood. It is now being routinely applied to some rather complicated problems, and starting to impact the design cycle of aerospace vehicles and their components. In addition, it is being used to complement and is being complemented by experimental studies. In this paper some major elements of contemporary CFD research, such as code validation, turbulence physics, and hypersonic flows are discussed, along with a review of the principal pacing items that currently govern CFD. Several examples are presented to illustrate the current state of the art. Finally, prospects for the future of the development and application of CFD are suggested.
Computational fluid dynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Kutler, Paul
1989-01-01
Computational fluid dynamics (CFD) has made great strides in the detailed simulation of complex fluid flows, including the fluid physics of flows heretofore not understood. It is now being routinely applied to some rather complicated problems, and starting to impact the design cycle of aerospace flight vehicles and their components. In addition, it is being used to complement, and is being complemented by, experimental studies. In the present paper, some major elements of contemporary CFD research, such as code validation, turbulence physics, and hypersonic flows are discussed, along with a review of the principal pacing items that currently govern CFD. Several examples of pioneering CFD research are presented to illustrate the current state of the art. Finally, prospects for the future development and application of CFD are suggested.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1991-01-01
Spurious stable as well as unstable steady-state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DEs) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical scheme are the determining factors. In addition, the occurrence of spurious steady states is not restricted to time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in the computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before applying these methods to practical computations. It is also important to change the traditional way of thinking and practice when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they were all assumed to lie beyond the linearized stability limit of the time-step parameter Δt. As can be seen from this study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability are not only highly scheme-dependent and problem-dependent, but also initial-data- and boundary-condition-dependent, and not limited to time steps beyond the linearized stability limit.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1990-01-01
Spurious stable as well as unstable steady-state numerical solutions, spurious asymptotic numerical solutions of higher period, and even stable chaotic behavior can occur when finite difference methods are used to solve nonlinear differential equations (DEs) numerically. The occurrence of spurious asymptotes is independent of whether the DE possesses a unique steady state or has additional periodic solutions and/or exhibits chaotic phenomena. The form of the nonlinear DEs and the type of numerical scheme are the determining factors. In addition, the occurrence of spurious steady states is not restricted to time steps that are beyond the linearized stability limit of the scheme. In many instances, it can occur below the linearized stability limit. Therefore, it is essential for practitioners in the computational sciences to be knowledgeable about the dynamical behavior of finite difference methods for nonlinear scalar DEs before applying these methods to practical computations. It is also important to change the traditional way of thinking and practice when dealing with genuinely nonlinear problems. In the past, spurious asymptotes were observed in numerical computations but tended to be ignored because they were all assumed to lie beyond the linearized stability limit of the time-step parameter Δt. As can be seen from this study, bifurcations to and from spurious asymptotic solutions and transitions to computational instability are not only highly scheme-dependent and problem-dependent, but also initial-data- and boundary-condition-dependent, and not limited to time steps beyond the linearized stability limit.
Computational Aspects of Realization & Design Algorithms in Linear Systems Theory.
NASA Astrophysics Data System (ADS)
Tsui, Chia-Chi
Realization and design problems are two major problems in linear time-invariant control theory; both have been solved theoretically, but little is understood about their numerical properties. Because of the large scale of these problems and the finite precision of computer computation, it is very important, and is the purpose of this study, to investigate the computational reliability and efficiency of the algorithms for these two problems. In this dissertation, a reliable algorithm to achieve canonical-form realization via the Hankel matrix is developed. A comparative study of three general realization algorithms, for both numerical reliability and efficiency, shows that the proposed algorithm (via the Hankel matrix) is the preferable one among the three. The design problems, such as state feedback design for pole placement, state observer design, and low-order single- and multi-functional observer design, have been solved using canonical-form system matrices. In this dissertation, a set of algorithms for solving these three design problems is developed and analysed. These algorithms are based on Hessenberg-form system matrices, which are numerically more reliable to compute than the canonical-form system matrices.
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
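The averaging step described above can be sketched as a simple group-by-mean. The point format and the `assign` mapping from surface coordinates to sub-area identifiers are hypothetical stand-ins for the patent's geometry handling:

```python
from collections import defaultdict

def subarea_averages(points, assign):
    """points: iterable of (x, y, value) CFD data points.
    assign(x, y) -> sub-area id, or None if the point lies outside
    every sub-area.  Returns {sub_area_id: mean of its points' values}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, v in points:
        sa = assign(x, y)
        if sa is not None:
            sums[sa] += v
            counts[sa] += 1
    return {sa: sums[sa] / counts[sa] for sa in sums}
```

Each sub-area then carries a single representative parameter value for the downstream aerodynamic heating analysis.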
Computational Fluid Dynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Kutler, Paul
1994-01-01
Computational fluid dynamics (CFD) is beginning to play a major role in the aircraft industry of the United States because of the realization that CFD can be a new and effective design tool and thus could provide a company with a competitive advantage. It is also playing a significant role in research institutions, both governmental and academic, as a tool for researching new fluid physics, as well as supplementing and complementing experimental testing. In this presentation, some of the progress made to date in CFD at NASA Ames will be reviewed. The presentation addresses the status of CFD in terms of methods, examples of CFD solutions, and computer technology. In addition, the role CFD will play in supporting the revolutionary goals set forth by the Aeronautical Policy Review Committee established by the Office of Science and Technology Policy is noted. The need for validated CFD tools is also briefly discussed.
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, Jr., David (Inventor)
2016-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
Unified computational method for design of fluid loop systems
NASA Astrophysics Data System (ADS)
Furukawa, Masao
1991-12-01
Various kinds of empirical formulas for Nusselt numbers, Fanning friction factors, and pressure-loss coefficients were collected and reviewed with the object of constructing a common basis for design calculations of pumped fluid loop systems. The practical expressions obtained after numerical modifications are listed in tables with identification numbers corresponding to the configurations of the flow passages. Design procedures for a cold plate and for a space radiator are clearly shown in a series of mathematical relations, coupled with a number of detailed expressions which are put in the tables in order of numerical computation. Weight-estimate models and several pump characteristics are given in the tables as a result of data regression. A unified computational method based upon the above procedure is presented for preliminary design analyses of a fluid loop system consisting of cold plates, plane radiators, mechanical pumps, valves, and so on.
Computational Algorithms for Device-Circuit Coupling
KEITER, ERIC R.; HUTCHINSON, SCOTT A.; HOEKSTRA, ROBERT J.; RANKIN, ERIC LAMONT; RUSSO, THOMAS V.; WATERS, LON J.
2003-01-01
Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equations (PDE) device, while optimizing the numerics for both.
Validation of Computational Fluid Dynamics Simulations for Realistic Flows (Preprint)
2007-12-01
For these calculations, the reference length is the vortex core radius, and the reference flow conditions are the free-stream conditions with the Mach number M...
Multitasking the code ARC3D. [for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
Computational fluid dynamics applications at McDonnell Douglas
NASA Technical Reports Server (NTRS)
Hakkinen, R. J.
1987-01-01
Representative examples are presented of applications and development of advanced Computational Fluid Dynamics (CFD) codes for aerodynamic design at the McDonnell Douglas Corporation (MDC). Transonic potential and Euler codes, interactively coupled with boundary layer computation, and solutions of slender-layer Navier-Stokes approximation are applied to aircraft wing/body calculations. An optimization procedure using evolution theory is described in the context of transonic wing design. Euler methods are presented for analysis of hypersonic configurations, and helicopter rotors in hover and forward flight. Several of these projects were accepted for access to the Numerical Aerodynamic Simulation (NAS) facility at the NASA-Ames Research Center.
Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Cheatwood, F. McNeil
1997-01-01
The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.
Data bank homology search algorithm with linear computation complexity.
Strelets, V B; Ptitsyn, A A; Milanesi, L; Lim, H A
1994-06-01
A new algorithm for data bank homology search is proposed. The principal advantages of the new algorithm are: (i) linear computation complexity; (ii) low memory requirements; and (iii) high sensitivity to the presence of local region homology. The algorithm first calculates indicative matrices of k-tuple 'realization' in the query sequence and then searches for an appropriate number of matching k-tuples within a narrow range in database sequences. It does not require k-tuple coordinates tabulation and in-memory placement for database sequences. The algorithm is implemented in a program for execution on PC-compatible computers and tested on PIR and GenBank databases with good results. A few modifications designed to improve the selectivity are also discussed. As an application example, the search for homology of the mouse homeotic protein HOX 3.1 is given.
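The core idea, an indicative set of the query's k-tuples scanned once over each database sequence, can be sketched as below. This is a simplification for illustration (no window ranges or selectivity refinements), and the function name and parameters are assumptions:

```python
def ktuple_hits(query, target, k=4):
    """Count positions in target whose k-tuple also occurs in query.
    One pass over target, so the cost is linear in its length."""
    qset = {query[i:i + k] for i in range(len(query) - k + 1)}
    return sum(1 for i in range(len(target) - k + 1)
               if target[i:i + k] in qset)
```

Because only the query's k-tuples are tabulated, memory use is independent of the database size, matching the low-memory claim of the abstract.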
A computer algorithm for automatic beam steering
Drennan, E.
1992-06-01
Beam steering is done by modifying the current in a trim or bending magnet. If the current change is the right amount, the beam can be made to bend so that it hits a SWIC or BPM downstream of the magnet at a predetermined set point. Although both bending magnets and trim magnets can be used to modify the beam angle, beam steering is usually done with trim magnets, because during steering the beam angle is modified only by a small amount, which is easily achieved with a trim magnet. Thus in this note, all steering magnets are assumed to be trim magnets. There are two ways of monitoring beam position: with a BPM or with a SWIC. For simplicity, beam position monitoring in this note will be described as being done with a SWIC. Beam steering can be done manually by changing the current through a trim magnet and monitoring the position of the beam downstream of the magnet with a SWIC. Alternatively, the beam can be positioned automatically by a computer which periodically updates the current through a specific number of trim magnets. The purpose of this note is to describe the steps involved in writing such a computer program. There are two main aspects to automatic beam steering. First, a relationship between the beam position and the bending magnet current is needed. Second, a beamline setup of SWICs and trim magnets has to be chosen that will position the beam according to the desired specifications. A simple example shows that once a mathematical relationship between the needed change of the beam position on a SWIC and the change in trim currents is established, a computer can be programmed to calculate and update the trim currents.
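The final calculation step can be sketched as a least-squares solve of a linear response model, Δx ≈ R ΔI, where Δx is the desired position change at each SWIC and ΔI the vector of trim-current changes. The response matrix `R` is a hypothetical input that would in practice come from beamline optics or measurement:

```python
import numpy as np

def trim_corrections(R, dx):
    """Solve R @ dI ≈ dx in the least-squares sense for the trim-current
    changes dI.  R[i, j] is the position response at SWIC i per unit
    current change in trim magnet j."""
    dI, *_ = np.linalg.lstsq(R, dx, rcond=None)
    return dI
```

With as many SWICs as trim magnets and a well-conditioned R, this reduces to an exact solve; with more SWICs than magnets it gives the best-fit correction.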
Gradient Learning Algorithms for Ontology Computing
Gao, Wei; Zhu, Linli
2014-01-01
The gradient learning model has been attracting great attention in view of its promising prospects for applications in statistics, data dimensionality reduction, and other specific fields. In this paper, we propose a new gradient learning model for ontology similarity measuring and ontology mapping in the multidividing setting. The sample error in this setting is given by virtue of the hypothesis space and the trick of the ontology dividing operator. Finally, two experiments on the plant and humanoid robotics fields verify the efficiency of the new computational model for ontology similarity measurement and ontology mapping applications in the multidividing setting. PMID:25530752
LAWS simulation: Sampling strategies and wind computation algorithms
NASA Technical Reports Server (NTRS)
Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.
1989-01-01
In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.
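Pairing two line-of-sight measurements to recover a horizontal wind vector, the basic task of a multi-pair algorithm, reduces to a small linear solve. The sketch below assumes the simplest geometry, v_los = u·sin(az) + v·cos(az) with negligible vertical contribution, which is an illustrative simplification of the LAWS shot-management problem:

```python
import numpy as np

def horizontal_wind(az1, az2, vlos1, vlos2):
    """Recover (u, v) from two line-of-sight speeds measured at
    azimuths az1, az2 (radians), assuming
    v_los = u * sin(az) + v * cos(az)."""
    A = np.array([[np.sin(az1), np.cos(az1)],
                  [np.sin(az2), np.cos(az2)]])
    return np.linalg.solve(A, np.array([vlos1, vlos2]))
```

The pair geometry matters: as az1 approaches az2 the matrix becomes singular, which is why shot-management algorithms try to pair near-perpendicular lines of sight.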
Parallel grid generation algorithm for distributed memory computers
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Moitra, Anutosh
1994-01-01
A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near-orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
Multidomain solution algorithm for potential flow computations around complex configurations
NASA Astrophysics Data System (ADS)
Jacquotte, Olivier-Pierre; Godard, Jean-Luc
1994-04-01
A method is presented for the computation of irrotational transonic flows of perfect gas around a wide class of geometries. It is based on the construction of a multidomain structured grid and then on the solution of the full potential equation discretized with finite elements. The novelty of the paper is the combination of three embedded algorithms: a mixed fixed-point/Newton algorithm to treat the non-linearity, a multidomain conjugate gradient algorithm to handle the grid topology and another conjugate gradient algorithm in each of the structured domains. This method has made possible the calculations of flows around geometries that cannot be treated in a structured approach without the multidomain algorithm; an application of this method to the study of the wing-pylon-nacelle interactions is presented.
An Agent Inspired Reconfigurable Computing Implementation of a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Weir, John M.; Wells, B. Earl
2003-01-01
Many software systems have been successfully implemented using an agent paradigm, which employs a number of independent entities that communicate with one another to achieve a common goal. The distributed nature of such a paradigm makes it an excellent candidate for use in high-speed reconfigurable computing hardware environments such as those present in modern FPGAs. In this paper, a distributed genetic algorithm that can be applied to the agent-based reconfigurable hardware model is introduced. The effectiveness of this new algorithm is evaluated by comparing the quality of the solutions found by the new algorithm with those found by traditional genetic algorithms. The performance of a reconfigurable hardware implementation of the new algorithm on an FPGA is compared to traditional single-processor implementations.
NASA Technical Reports Server (NTRS)
Mccroskey, W. J.
1986-01-01
The Fluid Dynamics Panel of AGARD arranged a Symposium on Applications of Computational Fluid Dynamics in Aeronautics, on 7 to 10 April 1986 in Aix-en-Provence, France. The purpose of the Symposium was to provide an assessment of the status of CFD in aerodynamic design and analysis, with an emphasis on emerging applications of advanced computational techniques to complex configurations. Sessions were devoted specifically to grid generation, methods for inviscid flows, calculations of viscous-inviscid interactions, and methods for solving the Navier-Stokes equations. The 31 papers presented at the meeting are published in AGARD Conference Proceedings CP-412 and are listed in the Appendix of this report. A brief synopsis of each paper and some general conclusions and recommendations are given.
A computational study of routing algorithms for realistic transportation networks
Jacob, R.; Marathe, M.V.; Nagel, K.
1998-12-01
The authors carry out an experimental analysis of a number of shortest-path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and their associated data structures affect the computational performance of software developed especially for realistic transportation networks. For this purpose the authors used the Dallas-Fort Worth road network at a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-to-one shortest-path algorithms, including classical exact algorithms studied in the literature as well as heuristic solutions designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest-path algorithm, primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include (i) time-dependent networks, (ii) multi-modal networks, and (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of the various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS-related technologies.
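A minimal version of the one-to-one Dijkstra variant the study favors, with early termination at the destination, might look like this; the adjacency-dict graph format is an assumption, and the TRANSIMS extensions (time dependence, multi-modality, schedules) are omitted:

```python
import heapq

def shortest_path_cost(graph, src, dst):
    """One-to-one Dijkstra with early exit at dst.
    graph: {node: [(neighbor, cost), ...]}, costs non-negative."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d                     # dst settled: stop early
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

The early exit is what makes the one-to-one variant attractive for route planning: the search stops as soon as the destination is settled rather than labeling the whole network.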
A fast algorithm for sparse matrix computations related to inversion
Li, S.; Wu, W.; Darve, E.
2013-06-01
We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices, up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst-case error bounds. The worst-case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region, which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over
Parallel matrix transpose algorithms on distributed memory concurrent computers
Choi, J.; Walker, D.W.; Dongarra, J.J. |
1993-10-01
This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. It is assumed that the matrix is distributed over a P x Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The communication schemes of the algorithms are determined by the greatest common divisor (GCD) of P and Q. If P and Q are relatively prime, the matrix transpose algorithm involves complete exchange communication. If P and Q are not relatively prime, processors are divided into GCD groups and the communication operations are overlapped for different groups of processors. Processors transpose GCD wrapped diagonal blocks simultaneously, and the matrix can be transposed with LCM/GCD steps, where LCM is the least common multiple of P and Q. The algorithms make use of non-blocking, point-to-point communication between processors. The use of non-blocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A·B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A^T·B^T, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
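The communication structure described above depends only on P and Q, so it can be sketched as a small schedule computation; the dictionary layout is an illustrative assumption, not the PUMMA interface:

```python
from math import gcd

def transpose_schedule(P, Q):
    """For a block-scattered matrix on a P x Q template: number of GCD
    processor groups and LCM/GCD exchange steps; a complete exchange is
    needed exactly when P and Q are relatively prime."""
    g = gcd(P, Q)
    lcm = P * Q // g
    return {"gcd_groups": g,
            "steps": lcm // g,
            "complete_exchange": g == 1}
```

For example, on a 4 x 6 template the processors fall into 2 groups and the transpose completes in LCM/GCD = 12/2 = 6 overlapped steps.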
State-of-the-art review of computational fluid dynamics modeling for fluid-solids systems
Lyczkowski, R.W.; Bouillard, J.X.; Ding, J.; Chang, S.L.; Burge, S.W.
1994-05-12
As the result of 15 years of research (50 staff-years of effort), Argonne National Laboratory (ANL), through its involvement in fluidized-bed combustion, magnetohydrodynamics, and a variety of environmental programs, has produced extensive computational fluid dynamics (CFD) software and models to predict the multiphase hydrodynamic and reactive behavior of fluid-solids motions and interactions in complex fluidized-bed reactors (FBRs) and slurry systems. This has resulted in the FLUFIX, IRF, and SLUFIX computer programs. These programs are based on fluid-solids hydrodynamic models and can predict information important to the designer of atmospheric or pressurized bubbling and circulating FBR, fluid catalytic cracking (FCC), and slurry units to guarantee optimum efficiency with minimum release of pollutants into the environment. This latter issue will become of paramount importance with the enactment of the Clean Air Act Amendment (CAAA) of 1995. Solids motion is also the key to understanding erosion processes. Erosion rates in FBRs and pneumatic and slurry components are computed by ANL's EROSION code to predict the potential metal wastage of FBR walls, internals, feed distributors, and cyclones. Only the FLUFIX and IRF codes will be reviewed in the paper, together with highlights of the validations, because of length limitations. It is envisioned that one day these codes, with user-friendly pre- and post-processor software and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale, and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.
Moon, Ji Young; Suh, Dae Chul; Lee, Yong Sang; Kim, Young Woo; Lee, Joon Sang
2014-02-01
Despite recent developments in computational fluid dynamics (CFD) research, analysis of the computational fluid dynamics of cerebral vessels has several limitations. Although blood is a non-Newtonian fluid, velocity and pressure fields are typically computed under the assumptions of incompressible, laminar, steady-state flow and Newtonian fluid dynamics. The pulsatile nature of blood flow is not properly applied at inlet and outlet boundaries. Therefore, we present these technical limitations and discuss possible solutions by comparing theoretical and computational studies.
Applying uncertainty quantification to multiphase flow computational fluid dynamics
Gel, A; Garg, R; Tong, C; Shahnam, M; Guenther, C
2013-07-01
Multiphase computational fluid dynamics plays a major role in design and optimization of fossil fuel based reactors. There is a growing interest in accounting for the influence of uncertainties associated with physical systems to increase the reliability of computational simulation based engineering analysis. The U.S. Department of Energy's National Energy Technology Laboratory (NETL) has recently undertaken an initiative to characterize uncertainties associated with computer simulation of reacting multiphase flows encountered in energy producing systems such as a coal gasifier. The current work presents the preliminary results in applying non-intrusive parametric uncertainty quantification and propagation techniques with NETL's open-source multiphase computational fluid dynamics software MFIX. For this purpose an open-source uncertainty quantification toolkit, PSUADE developed at the Lawrence Livermore National Laboratory (LLNL) has been interfaced with MFIX software. In this study, the sources of uncertainty associated with numerical approximation and model form have been neglected, and only the model input parametric uncertainty with forward propagation has been investigated by constructing a surrogate model based on data-fitted response surface for a multiphase flow demonstration problem. Monte Carlo simulation was employed for forward propagation of the aleatory type input uncertainties. Several insights gained based on the outcome of these simulations are presented such as how inadequate characterization of uncertainties can affect the reliability of the prediction results. Also a global sensitivity study using Sobol' indices was performed to better understand the contribution of input parameters to the variability observed in response variable.
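The forward-propagation step described above can be illustrated with a toy surrogate; `surrogate_response` below is a made-up stand-in for a data-fitted response surface, not the MFIX model or the PSUADE interface:

```python
import random
import statistics

def surrogate_response(x1, x2):
    # Hypothetical data-fitted response surface; in the study this role is
    # played by a surface fitted to expensive multiphase-flow simulations.
    return 2.0 * x1 + 0.5 * x2 * x2

# Monte Carlo forward propagation of aleatory input uncertainty:
# sample the uncertain inputs, push each sample through the cheap
# surrogate, and summarize the variability of the response.
rng = random.Random(0)
samples = []
for _ in range(20000):
    x1 = rng.gauss(1.0, 0.1)      # aleatory uncertainty in input 1
    x2 = rng.uniform(0.8, 1.2)    # aleatory uncertainty in input 2
    samples.append(surrogate_response(x1, x2))

mean = statistics.fmean(samples)
spread = statistics.pstdev(samples)
```

The same loop run against a real surrogate is what turns a single deterministic prediction into a distribution whose spread can be reported alongside the result.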
Computational fluid dynamics research and applications at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr.
1989-01-01
Information on computational fluid dynamics (CFD) research and applications carried out at the NASA Langley Research Center is given in viewgraph form. The Langley CFD strategy, the five-year plan in CFD and flow physics, 3-block grid topology, the effect of a patching algorithm, F-18 surface flow, entropy and vorticity effects that improve accuracy of unsteady transonic small disturbance theory, and the effects of reduced frequency on first harmonic components of unsteady pressures due to airfoil pitching are among the topics covered.
Issues in computational fluid dynamics code verification and validation
Oberkampf, W.L.; Blottner, F.G.
1997-09-01
A broad range of mathematical modeling errors of fluid flow physics and numerical approximation errors are addressed in computational fluid dynamics (CFD). It is strongly believed that if CFD is to have a major impact on the design of engineering hardware and flight systems, the level of confidence in complex simulations must substantially improve. To better understand the present limitations of CFD simulations, a wide variety of physical modeling, discretization, and solution errors are identified and discussed. Here, discretization and solution errors refer to all errors caused by conversion of the original partial differential, or integral, conservation equations representing the physical process, to algebraic equations and their solution on a computer. The impact of boundary conditions on the solution of the partial differential equations and their discrete representation will also be discussed. Throughout the article, clear distinctions are made between the analytical mathematical models of fluid dynamics and the numerical models. Lax's Equivalence Theorem and its frailties in practical CFD solutions are pointed out. Distinctions are also made between the existence and uniqueness of solutions to the partial differential equations as opposed to the discrete equations. Two techniques are briefly discussed for the detection and quantification of certain types of discretization and grid resolution errors.
Computation of Coupled Thermal-Fluid Problems in Distributed Memory Environment
NASA Technical Reports Server (NTRS)
Wei, H.; Shang, H. M.; Chen, Y. S.
2001-01-01
Thermal-fluid coupling problems are very important in aerospace and engineering applications. Instead of analyzing heat transfer and fluid flow separately, this study merged two well-accepted engineering solution methods, SINDA for thermal analysis and FDNS for fluid flow simulation, into a unified multi-disciplinary thermal-fluid prediction method. A fully conservative patched grid interface algorithm for arbitrary two-dimensional and three-dimensional geometry has been developed. The state-of-the-art parallel computing concept was used to couple SINDA and FDNS for the communication of boundary conditions through PVM (Parallel Virtual Machine) libraries. Therefore, the thermal analysis performed by SINDA and the fluid flow calculated by FDNS are fully coupled to obtain steady-state or transient solutions. The natural convection between two thick-walled eccentric tubes was calculated, and the predicted results match the experimental data well. A 3-D rocket engine model and a real 3-D SSME geometry were used to test the current model, and reasonable temperature fields were obtained.
Computational fluid dynamics capability for the solid-fuel ramjet projectile
NASA Astrophysics Data System (ADS)
Nusca, Michael J.; Chakravarthy, Sukumar R.; Goldberg, Uriel C.
1990-06-01
A computational fluid dynamics solution of the Navier-Stokes equations has been applied to the internal and external flow of inert solid-fuel ramjet projectiles. Computational modeling reveals internal flowfield details not attainable by flight or wind tunnel measurements, thus contributing to the current investigation into the flight performance of solid-fuel ramjet projectiles. The present code employs numerical algorithms termed total variation diminishing (TVD). Computational solutions indicate the importance of several special features of the code, including the zonal grid framework, the TVD scheme, and a recently developed backflow turbulence model. The solutions are compared with results of internal surface pressure measurements. As demonstrated by these comparisons, the use of a backflow turbulence model distinguishes between satisfactory and poor flowfield predictions.
NASA Astrophysics Data System (ADS)
Emelyanov, V. N.; Karpenko, A. G.; Volkov, K. N.
2015-06-01
Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of the use of GPUs for the simulation of internal fluid flows are discussed. The finite volume method is applied to solve three-dimensional (3D) unsteady compressible Euler and Navier-Stokes equations on unstructured meshes. Compute Unified Device Architecture (CUDA) technology is used for the programming implementation of parallel computational algorithms. Solutions of some fluid dynamics problems on GPUs are presented, and approaches to optimization of the CFD code related to the use of different types of memory are discussed. Speedup of the solution on GPUs with respect to the solution on the central processing unit (CPU) is compared for different meshes and different methods of distributing input data into blocks. Performance measurements show that the numerical schemes developed achieve a 20- to 50-fold speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
Using advanced computer vision algorithms on small mobile robots
NASA Astrophysics Data System (ADS)
Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.
2006-05-01
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
Plagiarism Detection Algorithm for Source Code in Computer Science Education
ERIC Educational Resources Information Center
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is becoming increasingly important in program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…
Splign: algorithms for computing spliced alignments with identification of paralogs
Kapustin, Yuri; Souvorov, Alexander; Tatusova, Tatiana; Lipman, David
2008-01-01
Background The computation of accurate alignments of cDNA sequences against a genome is at the foundation of modern genome annotation pipelines. Several factors such as presence of paralogs, small exons, non-consensus splice signals, sequencing errors and polymorphic sites pose recognized difficulties to existing spliced alignment algorithms. Results We describe a set of algorithms behind a tool called Splign for computing cDNA-to-genome alignments. The algorithms include a high-performance preliminary alignment, a compartment identification based on a formally defined model of adjacent duplicated regions, and a refined sequence alignment. In a series of tests, Splign has produced more accurate results than other tools commonly used to compute spliced alignments, in a reasonable amount of time. Conclusion Splign's ability to deal with various issues complicating the spliced alignment problem makes it a helpful tool in eukaryotic genome annotation processes and alternative splicing studies. Its performance is sufficient to align the largest currently available pools of cDNA data, such as the human EST set, on a moderate-sized computing cluster in a matter of hours. The duplication identification (compartmentization) algorithm can be used independently in other areas, such as the study of pseudogenes. Reviewers This article was reviewed by: Steven Salzberg, Arcady Mushegian and Andrey Mironov (nominated by Mikhail Gelfand). PMID:18495041
Computations and algorithms in physical and biological problems
NASA Astrophysics Data System (ADS)
Qin, Yu
This dissertation presents the applications of state-of-the-art computation techniques and data analysis algorithms in three physical and biological problems: assembling DNA pieces, optimizing self-assembly yield, and identifying correlations from large multivariate datasets. In the first topic, in-depth analysis of using Sequencing by Hybridization (SBH) to reconstruct target DNA sequences shows that a modified reconstruction algorithm can overcome the theoretical boundary without the need for different types of biochemical assays and is robust to error. In the second topic, consistent with theoretical predictions, simulations using Graphics Processing Units (GPUs) demonstrate how controlling the short-ranged interactions between particles and controlling the concentrations optimize the self-assembly yield of a desired structure, and nonequilibrium behavior when optimizing concentrations is also unveiled by leveraging the computation capacity of GPUs. In the last topic, a methodology to incorporate existing categorization information into the search process to efficiently reconstruct the optimal true correlation matrix for multivariate datasets is introduced. Simulations on both synthetic and real financial datasets show that the algorithm is able to detect signals below the Random Matrix Theory (RMT) threshold. These three problems are representative of using massive computation techniques and data analysis algorithms to tackle optimization problems, and of outperforming theoretical bounds by incorporating prior information into the computation.
Parallelization of Nullspace Algorithm for the computation of metabolic pathways.
Jevremović, Dimitrije; Trinh, Cong T; Srienc, Friedrich; Sosa, Carlos P; Boley, Daniel
2011-06-01
Elementary mode analysis is a useful metabolic pathway analysis tool in understanding and analyzing cellular metabolism, since elementary modes can represent metabolic pathways with unique and minimal sets of enzyme-catalyzed reactions of a metabolic network under steady state conditions. However, computation of the elementary modes of a genome-scale metabolic network with 100-1000 reactions is very expensive and sometimes not feasible with the commonly used serial Nullspace Algorithm. In this work, we develop a distributed memory parallelization of the Nullspace Algorithm to handle efficiently the computation of the elementary modes of a large metabolic network. We give an implementation in the C++ language with the support of MPI library functions for the parallel communication. Our proposed algorithm is accompanied by an analysis of the complexity and identification of major bottlenecks during computation of all possible pathways of a large metabolic network. The algorithm includes methods to achieve load balancing among the compute nodes and specific communication patterns to reduce the communication overhead and improve efficiency.
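The linear-algebra phase that gives the Nullspace Algorithm its name can be sketched with exact rational arithmetic; the toy stoichiometric matrix below (a linear pathway A → B → C) is illustrative, and the expensive combinatorial enumeration of elementary modes that the paper parallelizes is not shown:

```python
from fractions import Fraction

def nullspace(S):
    """Rational basis for {v : S v = 0} via Gauss-Jordan elimination.
    For a stoichiometric matrix S (metabolites x reactions), this basis
    is the starting point from which elementary modes are enumerated."""
    rows = [[Fraction(x) for x in row] for row in S]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if rows[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = rows[r][c]
        rows[r] = [x / inv for x in rows[r]]
        for i in range(m):                # clear the column elsewhere
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for fcol in free:                     # one basis vector per free column
        v = [Fraction(0)] * n
        v[fcol] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -rows[i][fcol]
        basis.append(v)
    return basis

# Toy network: reactions r1 (-> A... consumes nothing, produces A is folded
# into the balances), r2 (A -> B), r3 (B ->); metabolite balances for A, B.
S = [[1, -1, 0],
     [0, 1, -1]]
basis = nullspace(S)   # single mode: all three reactions carry equal flux
```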
Optimization of computer-generated binary holograms using genetic algorithms
NASA Astrophysics Data System (ADS)
Cojoc, Dan; Alexandrescu, Adrian
1999-11-01
The aim of this paper is to compare genetic algorithms against direct point-oriented coding in the design of computer-generated binary phase Fourier holograms. These are used as fan-out elements for free-space optical interconnection. Genetic algorithms are optimization methods which model the natural process of genetic evolution. The configuration of the hologram is encoded to form a chromosome. To start the optimization, a population of different, randomly generated chromosomes is considered. The chromosomes compete, mate, and mutate until the best chromosome is obtained according to a cost function. After explaining the operators used by genetic algorithms, this paper presents two examples with 32 x 32 genes in a chromosome. The crossover type and the number of mutations are shown to be important factors which influence the convergence of the algorithm. The genetic algorithm is demonstrated to be a useful tool for designing binary phase holograms of complicated structures.
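The operators named above (selection, crossover, mutation) can be sketched with a toy cost function; matching a target bit pattern stands in here for scoring a hologram's simulated far-field diffraction pattern, and all parameter values are illustrative:

```python
import random

def ga_optimize(cost, n_genes=64, pop_size=30, generations=200,
                n_mutations=1, seed=1):
    """Toy genetic algorithm over binary chromosomes with the operators
    the paper discusses: selection (keep the fittest half), one-point
    crossover, and a fixed number of gene-flip mutations per child."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                     # lower cost is fitter
        parents = pop[:pop_size // 2]          # selection with elitism
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)    # one-point crossover
            child = a[:cut] + b[cut:]
            for _ in range(n_mutations):       # flip a few genes
                i = rng.randrange(n_genes)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

# Stand-in cost: Hamming distance to a target bit pattern. A real design
# would instead score the binary phase mask's simulated fan-out pattern.
target = [i % 2 for i in range(64)]
best = ga_optimize(lambda c: sum(x != t for x, t in zip(c, target)))
```

Because the fittest half survives unchanged each generation, the best cost is monotonically non-increasing, which is why the crossover type and mutation count control only how fast the population converges.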
Survivable algorithms and redundancy management in NASA's distributed computing systems
NASA Technical Reports Server (NTRS)
Malek, Miroslaw
1992-01-01
The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.
Modeling fires in adjacent ship compartments with computational fluid dynamics
Wix, S.D.; Cole, J.K.; Koski, J.A.
1998-05-10
This paper presents an analysis of the thermal effects on radioactive material (RAM) transportation packages with a fire in an adjacent compartment. An assumption for this analysis is that the adjacent hold fire is some sort of engine room fire. Computational fluid dynamics (CFD) analysis tools were used to perform the analysis in order to include convective heat transfer effects. The analysis results were compared to experimental data gathered in a series of tests on the US Coast Guard ship Mayo Lykes located at Mobile, Alabama.
Continuing Validation of Computational Fluid Dynamics for Supersonic Retropropulsion
NASA Technical Reports Server (NTRS)
Schauerhamer, Daniel Guy; Trumble, Kerry A.; Kleb, Bil; Carlson, Jan-Renee; Edquist, Karl T.
2011-01-01
A large step in the validation of Computational Fluid Dynamics (CFD) for Supersonic Retropropulsion (SRP) is shown through the comparison of three Navier-Stokes solvers (DPLR, FUN3D, and OVERFLOW) and wind tunnel test results. The test was designed specifically for CFD validation and was conducted in the Langley supersonic 4 x 4 Unitary Plan Wind Tunnel and includes variations in the number of nozzles, Mach and Reynolds numbers, thrust coefficient, and angles of orientation. Code-to-code and code-to-test comparisons are encouraging and possible error sources are discussed.
New Challenges in Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The development of visualization systems for analyzing computational fluid dynamics data has been driven by the increasing size and complexity of the data. New extensions of the system domain into analysis of data from multiple sources, parameter space studies, and multidisciplinary studies in support of integrated aeronautical design systems provide new challenges for the visualization system developer. Recent work at NASA Ames Research Center in visualization systems, automatic flow feature detection, unsteady flow visualization techniques, and a new area, data exploitation, will be discussed in the context of NASA information technology initiatives.
NASA Technical Reports Server (NTRS)
Thorp, Scott A.
1992-01-01
This presentation will discuss the development of a NASA Geometry Exchange Specification for transferring aerodynamic surface geometry between LeRC systems and grid generation software used for computational fluid dynamics research. The proposed specification is based on a subset of the Initial Graphics Exchange Specification (IGES). The presentation will include discussion of how the NASA-IGES standard will accommodate improved computer aided design inspection methods and reverse engineering techniques currently being developed. The presentation is in viewgraph format.
Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion
NASA Technical Reports Server (NTRS)
Camarena, Ernesto; Vu, Bruce T.
2011-01-01
The Design Analysis Branch (NE-M1) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver that has been developed by NASA to have the capability to analyze and simulate dynamic motions with up to six Degrees of Freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6-DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation that was prepared and computed was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during the development of the simulation, only half of the physical domain with respect to the symmetry plane was simulated. Then a full solution was prepared and computed. The second simulation was a model of the SLS as it departs from a launch pad under a 20-knot crosswind. This simulation was reduced to Two Dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation. The simulation was then constrained to allow only translations.
Incorporating geometrically complex vegetation in a computational fluid dynamic framework
NASA Astrophysics Data System (ADS)
Boothroyd, Richard; Hardy, Richard; Warburton, Jeff; Rosser, Nick
2015-04-01
Vegetation is known to have a significant influence on the hydraulic, geomorphological, and ecological functioning of river systems. Vegetation acts as a blockage to flow, thereby causing additional flow resistance and influencing flow dynamics, in particular flow conveyance. These processes need to be incorporated into flood models to improve predictions used in river management. However, the current practice in representing vegetation in hydraulic models is either through roughness parameterisation or process understanding derived experimentally from flow through highly simplified configurations of fixed, rigid cylinders. It is suggested that such simplifications inadequately describe the geometric complexity that characterises vegetation, and therefore the modelled flow dynamics may be oversimplified. This paper addresses this issue by using an approach combining field and numerical modelling techniques. Terrestrial Laser Scanning (TLS) with waveform processing has been applied to collect a sub-mm, 3-dimensional representation of Prunus laurocerasus, an invasive species to the UK that has been increasingly recorded in riparian zones. Multiple scan perspectives produce a highly detailed point cloud (>5,000,000 individual data points) which is reduced in post processing using an octree-based voxelisation technique. The method retains the geometric complexity of the vegetation by subdividing the point cloud into 0.01 m3 cubic voxels. The voxelised representation is subsequently read into a computational fluid dynamic (CFD) model using a Mass Flux Scaling Algorithm, allowing the vegetation to be directly represented in the modelling framework. Results demonstrate the development of a complex flow field around the vegetation. The downstream velocity profile is characterised by two distinct inflection points. A high velocity zone in the near-bed (plant-stem) region is apparent due to the lack of significant near-bed foliage. Above this, a zone of reduced velocity is
Computational fluid dynamics (CFD) and its potential for nuclear applications
Weber, D.P.; Wei, T.Y.C.; Rock, D.T.; Rizwan-Uddin; Brewster, R.A.; Jonnavithula, S.
1999-11-01
The purpose of this paper is to examine the use of these advanced models, methods and computing environments for nuclear applications to determine if the industry can expect to derive the same benefit as other industries, such as the automotive and the aerospace industries. As an example, the authors will examine the use of modern computational fluid dynamics (CFD) capability for subchannel analysis, which is an important part of the analysis technology used by utilities to ensure safe and economical design and operation of reactors. In the current deregulated environment, it is possible that by use of these enhanced techniques, the thermal and electrical output of current reactors may be increased without any increase in cost and at no compromise in safety.
Parallelization of implicit finite difference schemes in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel
1990-01-01
Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second- and higher-order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult to construct and less intuitive. The parallelization of the implicit schemes that are used for solving the Euler and thin-layer Navier-Stokes equations, and that require inversions of large linear systems in the form of block tridiagonal and/or block pentadiagonal matrices, is discussed. Three-dimensional cases are emphasized, and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and computation aspects of these methods is presented. The effect of boundary conditions on the parallel schemes is also discussed.
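The kind of sequential recurrence that makes these inversions hard to parallelize is visible in the scalar Thomas algorithm below: each forward-elimination step depends on the previous one, which is exactly the data dependency the partitioning schemes must work around (a sketch for the scalar tridiagonal case, not the block variants the paper treats):

```python
def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    The forward sweep is a first-order recurrence, so each step
    needs the result of the previous step: inherently sequential."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3x3 system [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8], exact solution [1,2,3]
x = thomas_solve([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0],
                 [4.0, 8.0, 8.0])
```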
Analysis of nuclear thermal propulsion systems using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Stubbs, Robert M.; Kim, Suk C.; Papp, John L.
1993-01-01
Computational fluid dynamics (CFD) analyses of nuclear rockets with relatively low chamber pressures were carried out to assess the merits of using such low pressures to take advantage of hydrogen dissociation and recombination. The computations, using a Navier-Stokes code with chemical kinetics, describe the flow field in detail, including gas dynamics, thermodynamic and chemical properties, and provide global performance quantities such as specific impulse and thrust. Parametric studies were performed varying chamber temperature, chamber pressure and nozzle size. Chamber temperature was varied between 2700 K and 3600 K, and chamber pressure between 0.1 atm. and 10 atm. Performance advantages associated with lower chamber pressures are shown to occur at the higher chamber temperatures. Viscous losses are greater at lower chamber pressures and can be decreased in larger nozzles where the boundary layer is a smaller fraction of the flow field.
Computational Fluid Dynamics Analysis of Canadian Supercritical Water Reactor (SCWR)
NASA Astrophysics Data System (ADS)
Movassat, Mohammad; Bailey, Joanne; Yetisir, Metin
2015-11-01
A Computational Fluid Dynamics (CFD) simulation was performed on the proposed design for the Canadian SuperCritical Water Reactor (SCWR). The proposed Canadian SCWR is a 1200 MW(e) supercritical light-water cooled nuclear reactor with pressurized fuel channels. The reactor concept uses an inlet plenum to which all fuel channels are attached and an outlet header nested inside the inlet plenum. The coolant enters the inlet plenum at 350 °C and exits the outlet header at 625 °C. The operating pressure is approximately 26 MPa. The high-pressure and high-temperature outlet conditions result in a higher electric conversion efficiency as compared to existing light water reactors. In this work, CFD simulations were performed to model fluid flow and heat transfer in the inlet plenum, outlet header, and various parts of the fuel assembly. The ANSYS Fluent solver was used for simulations. Results showed that the mass flow rate distribution in fuel channels varies radially and the inner channels achieve higher outlet temperatures. At the outlet header, zones with rotational flow were formed as the fluid from 336 fuel channels merged. Results also suggested that insulation of the outlet header should be considered to reduce the thermal stresses caused by the large temperature gradients.
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods typically make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we proposed a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problem caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method to separate naturally isolated clusters but also can identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133
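The centroid-distance idea can be illustrated with a minimal sketch. This is an interpretation for illustration only, not the authors' exact criterion: within a candidate cluster, sort each point's distance to the cluster centroid and flag unusually large jumps between consecutive sorted distances as candidate isolation boundaries. The data, threshold rule, and `gap_factor` parameter are all invented for the example.

```python
import numpy as np

def centroid_distance_gaps(points, gap_factor=2.0):
    """Illustrative sketch (not the paper's exact criterion): sort each
    point's distance to the cluster centroid and flag unusually large
    jumps between consecutive distances as isolation boundaries."""
    centroid = points.mean(axis=0)
    d = np.sort(np.linalg.norm(points - centroid, axis=1))
    jumps = np.diff(d)
    return np.where(jumps > gap_factor * jumps.mean())[0]

# Synthetic example: a tight core of 50 points plus a distant group of 10.
rng = np.random.default_rng(0)
core = rng.normal(0.0, 0.1, size=(50, 2))
far = rng.normal(0.0, 0.1, size=(10, 2)) + 5.0
data = np.vstack([core, far])
print(centroid_distance_gaps(data))
```

The large jump in sorted centroid distances between the 50th and 51st points marks the boundary between the two groups, which a purely density-based criterion might miss in higher dimensions.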
The algorithmic level is the bridge between computation and brain.
Love, Bradley C
2015-04-01
Every scientist chooses a preferred level of analysis, and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's (1982) three levels of analysis (implementation, algorithmic, and computational) and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top-down in that it attempts to build a bridge from the computational to the algorithmic level. Limitations of this approach include insufficient theoretical constraint at the computational level to provide a foundation for integration, and the fact that people are suboptimal for reasons other than capacity limitations. Instead, an inside-out approach is put forward in which all three levels of analysis are integrated via the algorithmic level. This approach maximally leverages mutual data constraints at all levels. For example, algorithmic models can be used to interpret brain imaging data, and brain imaging data can be used to select among competing models. Examples of this approach to integration are provided. This merging of levels raises questions about the relevance of Marr's tripartite view.
The role of computational fluid dynamics (CFD) in hair science.
Spicka, Peter; Grald, Eric
2004-01-01
The use of computational fluid dynamics (CFD) as a virtual prototyping tool is widespread in the consumer packaged goods industry. CFD refers to the calculation on a computer of the velocity, pressure, temperature, and chemical species concentrations within a flowing liquid or gas. Because the performance of manufacturing equipment and product designs can be simulated on the computer, the benefit of using CFD is significant time and cost savings when compared to traditional physical testing methods. CFD has been used to design, scale up, and troubleshoot mixing tanks, spray dryers, heat exchangers, and other process equipment. Recently, computer models of the capillary wicking process inside fibrous structures have been added to CFD software. These models have been used to gain a better understanding of the absorbent performance of diapers and feminine protection products. The same models can also be used to represent the movement of shampoo, conditioner, colorants, and other products through the hair and scalp. In this paper, we provide an introduction to CFD and show some examples of its application to the manufacture of consumer products. We also provide some examples to show the potential of CFD for understanding the performance of products applied to the hair and scalp.
Computational Discovery of Materials Using the Firefly Algorithm
NASA Astrophysics Data System (ADS)
Avendaño-Franco, Guillermo; Romero, Aldo
Our current ability to model physical phenomena accurately, increased computational power, and better algorithms are the driving forces behind the computational discovery and design of novel materials, allowing for virtual characterization before their realization in the laboratory. We present the implementation of a novel firefly algorithm, a population-based algorithm for global optimization, for searching the structure/composition space. This computation-intensive approach naturally takes advantage of concurrency and targeted exploration while still maintaining enough diversity. We apply the new method to both periodic and non-periodic structures, and we present the implementation challenges and solutions to improve efficiency. The implementation makes use of computational materials databases and network analysis to optimize the search and gain insights into the geometric structure of local minima on the energy landscape. The method has been implemented in our software PyChemia, an open-source package for materials discovery. We acknowledge the support of DMREF-NSF 1434897 and the Donors of the American Chemical Society Petroleum Research Fund for partial support of this research under Contract 54075-ND10.
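The textbook firefly algorithm that this work builds on (Yang's formulation) can be sketched as below. This is not the PyChemia structure-search implementation; the objective function, bounds, and parameter values are illustrative only, minimizing a simple sphere function rather than an energy landscape.

```python
import numpy as np

def firefly_minimize(f, dim=2, n=20, iters=200,
                     alpha=0.2, beta0=1.0, gamma=0.01, seed=1):
    """Textbook firefly algorithm sketch: each firefly drifts toward
    brighter (lower-objective) neighbors, with attractiveness decaying
    with squared distance, plus an annealed random step."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n, dim))
    light = np.array([f(p) for p in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # j is brighter than i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += beta0 * np.exp(-gamma * r2) * (x[j] - x[i]) \
                            + alpha * (rng.random(dim) - 0.5)
                    light[i] = f(x[i])
        alpha *= 0.98  # anneal the random step to sharpen convergence
    best = int(np.argmin(light))
    return x[best], float(light[best])

sphere = lambda p: float(np.sum(p ** 2))
best_x, best_f = firefly_minimize(sphere)
print(best_x, best_f)
```

The population-based updates are naturally concurrent, which is the property the abstract highlights: each firefly's move depends only on the current positions and brightnesses of the others.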
Shape optimization of the diffuser blade of an axial blood pump by computational fluid dynamics.
Zhu, Lailai; Zhang, Xiwen; Yao, Zhaohui
2010-03-01
Computational fluid dynamics (CFD) has been a viable and effective way to predict hydraulic performance, flow field, and shear stress distribution within a blood pump. We developed an axial blood pump with CFD and carried out a CFD-based shape optimization of the diffuser blade to enhance pressure output and diminish backflow in the impeller-diffuser connecting region at a fixed design point. Our optimization combined a computer-aided design package, a mesh generator, and a CFD solver in an automation environment with process integration and optimization software. A genetic optimization algorithm was employed to find the Pareto-optimal designs from which we could make trade-off decisions. Finally, a set of representative designs was analyzed and compared on the basis of the energy equation. The role of the inlet angle of the diffuser blade was analyzed, accompanied by its relationship with pressure output and backflow in the impeller-diffuser connecting region.
Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus
NASA Technical Reports Server (NTRS)
Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle
1999-01-01
This paper describes an experiment in which a large-scale scientific application developed for tightly coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.
Computational fluid dynamics of developing avian outflow tract heart valves.
Bharadwaj, Koonal N; Spitz, Cassie; Shekhar, Akshay; Yalcin, Huseyin C; Butcher, Jonathan T
2012-10-01
Hemodynamic forces play an important role in sculpting the embryonic heart and its valves. Alteration of blood flow patterns through the hearts of embryonic animal models leads to malformations that resemble some clinical congenital heart defects, but the precise mechanisms are poorly understood. Quantitative understanding of the local fluid forces acting in the heart has been elusive because of the extremely small and rapidly changing anatomy. In this study, we combine multiple imaging modalities with computational simulation to rigorously quantify the hemodynamic environment within the developing outflow tract (OFT) and its eventual aortic and pulmonary valves. In vivo Doppler ultrasound-generated velocity profiles were applied to micro-computed tomography-generated 3D OFT lumen geometries from Hamburger-Hamilton (HH) stage 16-30 chick embryos. Computational fluid dynamics simulation initial conditions were iterated until local flow profiles converged with in vivo Doppler flow measurements. Results suggested that flow in the early tubular OFT (HH16 and HH23) was best approximated by Poiseuille flow, while later embryonic OFT septation (HH27, HH30) was mimicked by plug flow conditions. Peak wall shear stress (WSS) values increased from 18.16 dynes/cm² at HH16 to 671.24 dynes/cm² at HH30. Spatiotemporally averaged WSS values also showed a monotonic increase from 3.03 dynes/cm² at HH16 to 136.50 dynes/cm² at HH30. Simulated velocity streamlines in the early heart suggest a lack of mixing, which differed from classical ink injections. Changes in local flow patterns preceded and correlated with key morphogenetic events such as OFT septation and valve formation. This novel method to quantify local dynamic hemodynamic parameters affords insight into the sculpting role of blood flow in the embryonic heart and provides a quantitative baseline dataset for future research.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
An efficient algorithm for computing the crossovers in satellite altimetry
NASA Technical Reports Server (NTRS)
Tai, Chang-Kou
1988-01-01
An efficient algorithm has been devised to compute the crossovers in satellite altimetry. The significance of the crossovers is twofold. First, they are needed to perform the crossover adjustment to remove the orbit error. Secondly, they yield important insight into oceanic variability. Nevertheless, there is no published algorithm to make this very time consuming task easier, which is the goal of this report. The success of the algorithm is predicated on the ability to predict (by analytical means) the crossover coordinates to within 6 km and 1 sec of the true values. Hence, only one interpolation/extrapolation step on the data is needed to derive the crossover coordinates in contrast to the many interpolation/extrapolation operations usually needed to arrive at the same accuracy level if deprived of this information.
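The analytical prediction step described above can be illustrated in the locally linear case: near a crossover, each ground track is approximately a straight line, so a predicted crossover follows from a 2x2 linear solve in the two along-track times. The track points and velocities below are hypothetical, purely to show the geometry; the report's actual predictor works on real orbital ground tracks.

```python
import numpy as np

def linear_crossover(p1, v1, p2, v2):
    """Intersect two locally linear ground tracks: solve
    p1 + t1*v1 = p2 + t2*v2, a 2x2 linear system in (t1, t2)."""
    A = np.column_stack([v1, -v2])
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * v1, t

# Hypothetical ascending and descending passes crossing near the origin.
xy, (t1, t2) = linear_crossover(np.array([-1.0, -1.0]), np.array([1.0, 1.0]),
                                np.array([-1.0,  1.0]), np.array([1.0, -1.0]))
print(xy, t1, t2)
```

Once such a prediction is within a few kilometers and a second of the truth, a single interpolation of the altimeter data suffices to pin down the crossover, which is the efficiency gain the report describes.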
State-Estimation Algorithm Based on Computer Vision
NASA Technical Reports Server (NTRS)
Bayard, David; Brugarolas, Paul
2007-01-01
An algorithm and software to implement the algorithm are being developed as means to estimate the state (that is, the position and velocity) of an autonomous vehicle, relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.
Computer algorithms in the search for unrelated stem cell donors.
Steiner, David
2012-01-01
Hematopoietic stem cell transplantation (HSCT) is a medical procedure in the field of hematology and oncology, most often performed for patients with certain cancers of the blood or bone marrow. Many patients have no suitable HLA-matched donor within their family, so physicians must activate a "donor search process" by interacting with national and international donor registries, which will search their databases for adult unrelated donors or cord blood units (CBU). Information and communication technologies play a key role in the donor search process in donor registries both nationally and internationally. One of the major challenges for donor registry computer systems is the development of a reliable search algorithm. This work discusses the top-down design of such algorithms and current practice. Based on our experience with systems used by several stem cell donor registries, we highlight typical pitfalls in the implementation of an algorithm and the underlying data structure.
Wu, Binxin
2010-12-01
In this paper, 12 turbulence models for single-phase non-Newtonian fluid flow in a pipe are evaluated by comparing the frictional pressure drops obtained from computational fluid dynamics (CFD) with those from three friction factor correlations. The turbulence models studied are (1) three high-Reynolds-number k-ε models, (2) six low-Reynolds-number k-ε models, (3) two k-ω models, and (4) the Reynolds stress model. The simulation results indicate that the Chang-Hsieh-Chen version of the low-Reynolds-number k-ε model performs better than the other models in predicting the frictional pressure drops, while the standard k-ω model has acceptable accuracy and a low computing cost. In the model applications, CFD simulation of mixing in a full-scale anaerobic digester with pumped circulation is performed to propose an improvement in the effective mixing standards recommended by the U.S. EPA, based on the effect of rheology on the flow fields. Characterization of the velocity gradient is conducted to quantify the growth or breakage of an assumed floc size. Placement of two discharge nozzles in the digester is analyzed to show that spacing two nozzles 180° apart, with each one discharging at an angle of 45° off the wall, is the most efficient. Moreover, the similarity rules of geometry and mixing energy are checked for scaling up the digester.
An improved spectral graph partitioning algorithm for mapping parallel computations
Hendrickson, B.; Leland, R.
1992-09-01
Efficient use of a distributed memory parallel computer requires that the computational load be balanced across processors in a way that minimizes interprocessor communication. We present a new domain mapping algorithm that extends recent work in which ideas from spectral graph theory have been applied to this problem. Our generalization of spectral graph bisection involves a novel use of multiple eigenvectors to allow for division of a computation into four or eight parts at each stage of a recursive decomposition. The resulting method is suitable for scientific computations like irregular finite elements or differences performed on hypercube or mesh architecture machines. Experimental results confirm that the new method provides better decompositions arrived at more economically and robustly than with previous spectral methods. We have also improved upon the known spectral lower bound for graph bisection.
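The baseline that this work generalizes, bisection with a single Fiedler vector, can be sketched as follows. This shows only the classic single-eigenvector split; the paper's contribution is using multiple eigenvectors to divide into four or eight parts per stage. The barbell graph below is an invented test case.

```python
import numpy as np

def spectral_bisection(adj):
    """Classic spectral bisection: split vertices by the median of the
    second-smallest eigenvector (Fiedler vector) of the graph Laplacian."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                       # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)      # eigh returns ascending eigenvalues
    fiedler = vecs[:, 1]                  # eigenvector of 2nd-smallest value
    return fiedler > np.median(fiedler)   # two halves of the vertex set

# Two triangles joined by a single edge: should split at the bridge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
part = spectral_bisection(A)
print(part)
```

Splitting at the median balances the two halves exactly, which is what load balancing requires; minimizing the number of edges cut (here, the single bridge edge) keeps interprocessor communication low.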
Computational fluid dynamics modeling for emergency preparedness & response
Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.
1995-07-01
Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly coupled, nonlinear equations whose solution requires a significant level of computational power. Until recently, such computer power could be found only in CRAY-class supercomputers. Recent advances in computer hardware and software have enabled modestly priced, high-performance workstations to exhibit the equivalent computational power of some mainframes. Thus desktop-class machines that were limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release and Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust, and require minimal time for execution. Such models have been the cornerstones of the ARAC operational system for the past ten years. Kamada (1992) provides a review of diagnostic models and their applications to dispersion problems. However, because these models typically contain little physics beyond mass conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, they can be applied with confidence only to modestly complex flows.
A Moving Target Environment for Computer Configurations Using Genetic Algorithms
Crouse, Michael; Fulp, Errin W.
2011-10-31
Moving Target (MT) environments for computer systems provide security through diversity by changing various system properties that are explicitly defined in the computer configuration. Temporal diversity can be achieved by making periodic configuration changes; however, in an infrastructure of multiple similarly purposed computers, diversity must also be spatial, ensuring that multiple computers do not simultaneously share the same configuration and potential vulnerabilities. Given the number of possible changes and their potential interdependencies, discovering computer configurations that are secure, functional, and diverse is challenging. This paper describes how a Genetic Algorithm (GA) can be employed to find temporally and spatially diverse secure computer configurations. In the proposed approach, a computer configuration is modeled as a chromosome, where an individual configuration setting is a trait or allele. The GA operates by combining multiple chromosomes (configurations), which are tested for feasibility and ranked based on performance, measured as resistance to attack. The results of successive iterations of the GA are secure configurations that are diverse due to the crossover and mutation processes. Simulation results will demonstrate that this approach can provide an MT environment for a large infrastructure of similarly purposed computers by discovering temporally and spatially diverse secure configurations.
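A toy version of the configuration-as-chromosome idea might look like the sketch below. The trait space and scoring function are invented stand-ins: a real fitness would be a measured resistance to attack plus a feasibility check, as the abstract describes, and real traits would be actual system settings.

```python
import random

# Hypothetical configuration space: each trait (allele) is one setting.
TRAITS = {
    "ssh_port":   [22, 2022, 2222],
    "os_version": ["v1", "v2", "v3"],
    "service":    ["apache", "nginx"],
}

def random_config(rng):
    return {k: rng.choice(v) for k, v in TRAITS.items()}

def fitness(cfg):
    """Stand-in score; a real system would measure resistance to attack."""
    return sum(TRAITS[k].index(v) for k, v in cfg.items())

def evolve(pop_size=20, gens=30, mut_rate=0.1, seed=3):
    rng = random.Random(seed)
    pop = [random_config(rng) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice([a[k], b[k]]) for k in TRAITS}  # crossover
            if rng.random() < mut_rate:                            # mutation
                k = rng.choice(list(TRAITS))
                child[k] = rng.choice(TRAITS[k])
            children.append(child)
        pop = parents + children
    return pop

population = evolve()
print(max(fitness(c) for c in population))
```

Crossover recombines settings from two feasible parents, while mutation injects fresh alleles; together they are what keeps successive configurations diverse across both time and machines.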
Using Advanced Computer Vision Algorithms on Small Mobile Robots
2006-04-20
This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. Test results are shown for a variety of environments. When detecting the make and model of automobiles, SIFT can be used to achieve very high detection rates at the expense of a hefty performance cost. KEYWORDS: robotics, computer vision, car/license plate detection, SIFT.
Validation of Magnetic Resonance Thermometry by Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Rydquist, Grant; Owkes, Mark; Verhulst, Claire M.; Benson, Michael J.; Vanpoppel, Bret P.; Burton, Sascha; Eaton, John K.; Elkins, Christopher P.
2016-11-01
Magnetic Resonance Thermometry (MRT) is a new experimental technique that can create fully three-dimensional temperature fields in a noninvasive manner. However, validation is still required to determine the accuracy of measured results. One method of examination is to compare data gathered experimentally to data computed with computational fluid dynamics (CFD). In this study, large-eddy simulations have been performed with the NGA computational platform to generate data for a comparison with previously run MRT experiments. The experimental setup consisted of a heated jet inclined at 30° injected into a larger channel. In the simulations, viscosity and density were scaled according to the local temperature to account for differences in buoyant and viscous forces. A mesh-independence study was performed with 5-, 15-, and 45-million-cell meshes. The program Star-CCM+ was used to simulate the complete experimental geometry, and its results were compared to data generated from NGA. Overall, both programs show good agreement with the experimental data gathered with MRT. With these data, the validity of MRT as a diagnostic tool has been shown, and the tool can be used to further our understanding of a range of flows with non-trivial temperature distributions.
Computational Implementation of a Coupled Plasma-Neutral Fluid Model
NASA Astrophysics Data System (ADS)
Vold, E. L.; Najmabadi, F.; Conn, R. W.
1992-12-01
This paper describes the computational transport of coupled plasma-neutral fluids in the edge region of a toroidally symmetric magnetic confinement device, with applications to the tokamak. The model couples neutral density in a diffusion approximation with a set of transport equations for the plasma including density, classical plasma parallel velocity, anomalous cross-field velocity, and ion and electron temperature equations. The plasma potential, gradient electric fields, drift velocity, and net poloidal velocity are computed as dependent quantities under the assumption of ambipolarity. The implementation is flexible to permit extension in the future to a fully coupled set of non-ambipolar momentum equations. The computational method incorporates sonic flow and particle recycling of ions and neutrals at the vessel boundary. A numerically generated orthogonal grid conforms to the poloidal magnetic flux surfaces. Power law differencing based on the SIMPLE relaxation method is modified to accommodate the compressible reactive plasma flow with a "semi-implicit" diffusion method. Residual corrections are applied to obtain a valid convergence to the steady state solution. Results are presented for a representative divertor tokamak in a high recycling regime, showing strongly peaked neutral and plasma densities near the divertor target. Solutions show large poloidal and radial gradients in the plasma density, potential, and temperatures. These findings may help to understand the strong turbulence experimentally observed in the plasma edge region of the tokamak.
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Schallhorn, Paul
1998-01-01
This paper describes a finite volume computational thermo-fluid dynamics method to solve the Navier-Stokes equations in conjunction with the energy equation and the thermodynamic equation of state in an unstructured coordinate system. The system of equations has been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreement has been obtained in each case, and the method has been found to be significantly faster than conventional Computational Fluid Dynamics (CFD) methods; it therefore has the potential for implementation in multi-disciplinary analysis and design optimization in fluid and thermal systems. The paper also describes a design optimization algorithm based on the Newton-Raphson method which has recently been tested in a turbomachinery application.
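The simultaneous Newton-Raphson strategy can be illustrated on a toy 2x2 nonlinear system; the paper applies the same iteration to the much larger coupled finite-volume flow and energy equations. The example system below is invented purely to show the mechanics.

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    """Simultaneous Newton-Raphson for a nonlinear system F(x) = 0:
    at each step solve the linearized system J(x) dx = -F(x) and update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy system: x^2 + y^2 = 4 and x*y = 1, solved simultaneously.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]],        # analytic Jacobian
                        [v[1],   v[0]]])
root = newton_raphson(F, J, [2.0, 0.0])
print(root)
```

Solving all equations at once in the Newton update, rather than iterating between segregated equations, is the source of the speed advantage the abstract claims over conventional CFD solution procedures.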
Computational fluid dynamic modeling of fluidized-bed polymerization reactors
Rokkam, Ram
2012-01-01
Polyethylene is one of the most widely used plastics, and over 60 million tons are produced worldwide every year. Polyethylene is obtained by the catalytic polymerization of ethylene in gas and liquid phase reactors. The gas phase processes are more advantageous and use fluidized-bed reactors for production of polyethylene. Since they operate so close to the melting point of the polymer, agglomeration is an operational concern in all slurry and gas polymerization processes. Electrostatics and hot spot formation are the main factors that contribute to agglomeration in gas-phase processes. Electrostatic charges in gas phase polymerization fluidized-bed reactors are known to influence the bed hydrodynamics, particle elutriation, bubble size, bubble shape, etc. Accumulation of electrostatic charges in the fluidized bed can lead to operational issues. In this work a first-principles electrostatic model is developed and coupled with a multi-fluid computational fluid dynamic (CFD) model to understand the effect of electrostatics on the dynamics of a fluidized bed. The multi-fluid CFD model for gas-particle flow is based on closures from the kinetic theory of granular flows. The electrostatic model is developed based on a fixed, size-dependent charge for each type of particle (catalyst, polymer, polymer fines) phase. The combined CFD model is first verified using simple test cases, validated with experiments, and applied to a pilot-scale polymerization fluidized-bed reactor. The CFD model reproduced qualitative trends in particle segregation and entrainment due to electrostatic charges observed in experiments. For the scale-up of the fluidized-bed reactor, filtered models are developed and implemented on the pilot-scale reactor.
Sort-Mid tasks scheduling algorithm in grid computing.
Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M
2015-11-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Several researchers have aimed to develop variant scheduling algorithms for achieving optimality, and these have shown good performance in task scheduling with regard to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort the list of completion times of each task and obtain its average value. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
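The loop described in the abstract can be sketched as follows. This is a hedged reading of the stated steps: the exact tie-breaking and the precise role of the sorted list are assumptions, and the 4-task by 2-machine execution-time matrix is hypothetical.

```python
def sort_mid_schedule(exec_times):
    """Sketch of the Sort-Mid loop as described in the abstract:
    repeatedly pick the unscheduled task with the largest average
    completion time and place it on the machine that gives it the
    minimum completion time."""
    n_tasks, n_machines = len(exec_times), len(exec_times[0])
    ready = [0.0] * n_machines          # machine availability times
    assignment = {}
    unscheduled = set(range(n_tasks))
    while unscheduled:
        # completion time of task t on machine m if scheduled now
        ct = {t: [ready[m] + exec_times[t][m] for m in range(n_machines)]
              for t in unscheduled}
        # sorting mirrors the stated steps (the mean is order-independent)
        avg = {t: sum(sorted(c)) / n_machines for t, c in ct.items()}
        t = max(avg, key=avg.get)       # task with the maximum average
        m = min(range(n_machines), key=lambda m: ct[t][m])
        ready[m] = ct[t][m]
        assignment[t] = m
        unscheduled.remove(t)
    return assignment, max(ready)       # plan and makespan

# Hypothetical 4 tasks x 2 machines execution-time matrix.
times = [[3, 5], [2, 4], [6, 1], [4, 5]]
plan, makespan = sort_mid_schedule(times)
print(plan, makespan)
```

Picking the hardest remaining task first (largest average completion time) and then its fastest machine is what balances load across heterogeneous resources and keeps the makespan down.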
Review of computational fluid dynamics applications in biotechnology processes.
Sharma, C; Malhotra, D; Rathore, A S
2011-01-01
Computational fluid dynamics (CFD) is well established as a tool of choice for solving problems that involve one or more of the following phenomena: flow of fluids, heat transfer, mass transfer, and chemical reaction. Unit operations that are commonly utilized in biotechnology processes are often complex and as such would greatly benefit from application of CFD. The thirst for deeper process and product understanding that has arisen out of initiatives such as quality by design provides further impetus toward usefulness of CFD for problems that may otherwise require extensive experimentation. Not surprisingly, there has been increasing interest in applying CFD toward a variety of applications in biotechnology processing in the last decade. In this article, we will review applications in the major unit operations involved with processing of biotechnology products. These include fermentation, centrifugation, chromatography, ultrafiltration, microfiltration, and freeze drying. We feel that the future applications of CFD in biotechnology processing will focus on establishing CFD as a tool of choice for providing process understanding that can then be used to guide more efficient and effective experimentation. This article puts special emphasis on the work done in the last 10 years.
Computational fluid dynamics (CFD) studies of a miniaturized dissolution system.
Frenning, G; Ahnfelt, E; Sjögren, E; Lennernäs, H
2017-02-08
Dissolution testing is an important tool that has applications ranging from fundamental studies of drug-release mechanisms to quality control of the final product. The rate of release of the drug from the delivery system is known to be affected by hydrodynamics. In this study we used computational fluid dynamics to simulate and investigate the hydrodynamics in a novel miniaturized dissolution method for parenteral formulations. The dissolution method is based on a rotating disc system and uses a rotating sample reservoir which is separated from the remaining dissolution medium by a nylon screen. Sample reservoirs of two sizes were investigated (SR6 and SR8) and the hydrodynamic studies were performed at rotation rates of 100, 200, and 400 rpm. The overall fluid flow was similar for all investigated cases, with a lateral upward spiraling motion and central downward motion in the form of a vortex to and through the screen. The simulations indicated that the exchange of dissolution medium between the sample reservoir and the remaining release medium was rapid for typical screens, for which almost complete mixing would be expected to occur within less than one minute at 400 rpm. The local hydrodynamic conditions in the sample reservoirs depended on their size; SR8 appeared to be relatively more affected than SR6 by the resistance to liquid flow resulting from the screen.
Computational Fluid Dynamics of Acoustically Driven Bubble Systems
NASA Astrophysics Data System (ADS)
Glosser, Connor; Lie, Jie; Dault, Daniel; Balasubramaniam, Shanker; Piermarocchi, Carlo
2014-03-01
The development of modalities for precise, targeted drug delivery has become increasingly important in medical care in recent years. Assemblages of microbubbles steered by acoustic pressure fields present one potential vehicle for such delivery. Modeling the collective response of multi-bubble systems to an intense, externally applied ultrasound field requires accurately capturing acoustic interactions between bubbles and the externally applied field, and their effect on the evolution of bubble kinetics. In this work, we present a methodology for multiphysics simulation based on an efficient transient boundary integral equation (TBIE) coupled with molecular dynamics (MD) to compute trajectories of multiple acoustically interacting bubbles in an ideal fluid under pulsed acoustic excitation. For arbitrary configurations of spherical bubbles, the TBIE solver self-consistently models transient surface pressure distributions at bubble-fluid interfaces due to acoustic interactions and relative potential flows induced by bubble motion. Forces derived from the resulting pressure distributions act as driving terms in the MD update at each timestep. The resulting method efficiently and accurately captures individual bubble dynamics for clouds containing up to hundreds of bubbles.
The aerospace plane design challenge: Credible computational fluid dynamics results
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B.
1990-01-01
Computational fluid dynamics (CFD) is necessary in the design processes of all current aerospace plane programs. Single-stage-to-orbit (SSTO) aerospace planes with air-breathing supersonic combustion are going to be largely designed by means of CFD. The challenge of the aerospace plane design is to provide credible CFD results to work from, to assess the risk associated with the use of those results, and to certify CFD codes that produce credible results. To establish the credibility of CFD results used in design, the following topics are discussed: CFD validation vis-a-vis measurable fluid dynamics (MFD) validation; responsibility for credibility; credibility requirements; and a guide for establishing credibility. Quantification of CFD uncertainties helps to assess success and safety risks, and the development of CFD as a design tool requires code certification. This challenge is managed by designing the designers to use CFD effectively, by ensuring quality control, and by balancing the design process. For designing the designers, the following topics are discussed: how CFD design technology is developed; the reasons Japanese companies, by and large, produce goods of higher quality than their U.S. counterparts; teamwork as a new way of doing business; and how ideas, quality, and teaming can be brought together. Quality control for reducing the loss imparted to society begins with the quality of the CFD results used in the design process, and balancing the design process means using a judicious balance of CFD and MFD.
Simulation of Tailrace Hydrodynamics Using Computational Fluid Dynamics Models
Cook, Christopher B.; Richmond, Marshall C.
2001-05-01
This report investigates the feasibility of using computational fluid dynamics (CFD) tools to investigate hydrodynamic flow fields surrounding the tailrace zone below large hydraulic structures. Previous and ongoing studies using CFD tools to simulate gradually varied flow with multiple constituents and forebay/intake hydrodynamics have shown that CFD tools can provide valuable information for hydraulic and biological evaluation of fish passage near hydraulic structures. These studies, however, are incapable of simulating the rapidly varying flow fields that involve breakup of the free surface, such as those through and below high-flow outfalls and spillways. Although the use of CFD tools for these types of flow is still an active area of research, initial applications discussed in this report show that these tools are capable of simulating the primary features of these highly transient flow fields.
Knowledge-based zonal grid generation for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Andrews, Alison E.
1988-01-01
Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.
Personal Computer (PC) based image processing applied to fluid mechanics
NASA Astrophysics Data System (ADS)
Cho, Y.-C.; McLachlan, B. G.
1987-10-01
A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
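The interpolation step this abstract describes — scattered streak velocities convolved onto uniform grid points with a Gaussian window — can be sketched as a weighted average. This is a simplified, fixed-width version; the paper's adaptive scheme varies the window width with local seeding density, and all names here are illustrative:

```python
import numpy as np

def gaussian_window_interp(points, values, grid, h):
    """Gaussian-weighted average of scattered velocity samples at each
    uniform grid point (non-adaptive window of width h, for illustration)."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty(len(grid))
    for i, g in enumerate(np.asarray(grid, dtype=float)):
        # Weight each scattered sample by its distance to the grid point.
        w = np.exp(-np.sum((points - g) ** 2, axis=1) / h**2)
        out[i] = np.sum(w * values) / np.sum(w)
    return out
```

A basic sanity check for any convolution-type interpolator is that a constant velocity field is reproduced exactly, since the weights normalize to one.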
Modern wing flutter analysis by computational fluid dynamics methods
NASA Technical Reports Server (NTRS)
Cunningham, Herbert J.; Batina, John T.; Bennett, Robert M.
1988-01-01
The application and assessment of the recently developed CAP-TSD transonic small-disturbance code for flutter prediction is described. The CAP-TSD code has been developed for aeroelastic analysis of complete aircraft configurations and was previously applied to the calculation of steady and unsteady pressures with favorable results. Generalized aerodynamic forces and flutter characteristics are calculated and compared with linear theory results and with experimental data for a 45 deg sweptback wing. These results are in good agreement with the experimental flutter data, which is the first step toward validating CAP-TSD for general transonic aeroelastic applications. The paper presents these results and comparisons along with general remarks regarding modern wing flutter analysis by computational fluid dynamics methods.
Computational Fluid Dynamics Simulation of Fluidized Bed Polymerization Reactors
Fan, Rong
2006-01-01
Fluidized bed (FB) reactors are widely used in the polymerization industry due to their superior heat- and mass-transfer characteristics. Nevertheless, problems associated with local overheating of polymer particles and excessive agglomeration leading to defluidization of FB reactors still persist and limit the range of operating temperatures that can be safely achieved in plant-scale reactors. Many researchers have worked on the modeling of FB polymerization reactors, and quite a few models are available in the open literature, such as the well-mixed model developed by McAuley, Talbot, and Harris (1994), the constant bubble size model (Choi and Ray, 1985) and the heterogeneous three-phase model (Fernandes and Lona, 2002). Most of these works focus on kinetic aspects, but from an industrial viewpoint, the behavior of FB reactors should be modeled by considering the particle and fluid dynamics in the reactor. Computational fluid dynamics (CFD) is a powerful tool for understanding the effect of fluid dynamics on chemical reactor performance. For single-phase flows, CFD models for turbulent reacting flows are now well understood and routinely applied to investigate complex flows with detailed chemistry. For multiphase flows, the state of the art in CFD models is changing rapidly and it is now possible to predict reasonably well the flow characteristics of gas-solid FB reactors with mono-dispersed, non-cohesive solids. This thesis is organized into seven chapters. In Chapter 2, an overview of fluidized bed polymerization reactors is given, and a simplified two-site kinetic mechanism is discussed. Some basic theories used in this work are given in detail in Chapter 3. First, the governing equations and other constitutive equations for the multi-fluid model are summarized, and the kinetic theory for describing the solid stress tensor is discussed. The detailed derivation of DQMOM for the population balance equation is given in the second section.
NASA Astrophysics Data System (ADS)
Sijoy, C. D.; Chaturvedi, Shashank
2010-05-01
Volume-of-fluid (VOF) interface reconstruction methods are used to define material interfaces that separate different materials in a mixed cell. These material interfaces are then used to evaluate the transport flux at each cell edge in multi-material hydrodynamic calculations. Most VOF interface reconstruction methods and volume transport schemes rely on an accurate material order unique to each computational cell. Similarly, to achieve overshoot-free volume fractions, a non-intersecting interface reconstruction procedure has to be performed with the help of a 'material-order list' determined prior to interface reconstruction. This is, however, one of the least explored areas of the VOF technique, especially for the 'onion-skin' or 'layered' model. Important technical details on how to prevent intersection among different material interfaces are also missing from much of the literature. Here, we present an efficient VOF interface tracking algorithm along with modified 'material order' methods and different interface reconstruction methods. The relative accuracy of the different methods is evaluated for sample problems. Finally, a convergence study with respect to mesh size is performed.
Banks, J.W. Henshaw, W.D. Kapila, A.K. Schwendeman, D.W.
2016-01-15
We describe an added-mass partitioned (AMP) algorithm for solving fluid–structure interaction (FSI) problems involving inviscid compressible fluids interacting with nonlinear solids that undergo large rotations and displacements. The computational approach is a mixed Eulerian–Lagrangian scheme that makes use of deforming composite grids (DCG) to treat large changes in the geometry in an accurate, flexible, and robust manner. The current work extends the AMP algorithm developed in Banks et al. [1] for linear elasticity to the case of nonlinear solids. To ensure stability for the case of light solids, the new AMP algorithm embeds an approximate solution of a nonlinear fluid–solid Riemann (FSR) problem into the interface treatment. The solution to the FSR problem is derived and shown to be of a similar form to that derived for linear solids: the state on the interface being fundamentally an impedance-weighted average of the fluid and solid states. Numerical simulations demonstrate that the AMP algorithm is stable even for light solids when added-mass effects are large. The accuracy and stability of the AMP scheme are verified by comparison to an exact solution using the method of analytical solutions and to a semi-analytical solution that is obtained for a rotating solid disk immersed in a fluid. The scheme is applied to the simulation of a planar shock impacting a light elliptical-shaped solid, and comparisons are made between solutions of the FSI problem for a neo-Hookean solid, a linearly elastic solid, and a rigid solid. The ability of the approach to handle large deformations is demonstrated for a problem of a high-speed flow past a light, thin, and flexible solid beam.
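In the simplest 1-D linear-acoustics setting, the impedance-weighted interface average mentioned in the abstract can be derived by matching the outgoing characteristics s ∓ z·v from each side. The sketch below is our own reconstruction of that linear form (the paper's nonlinear FSR solution is only stated to be "of a similar form"); z denotes the acoustic impedance ρc on each side, v velocity, and s stress:

```python
def amp_interface_state(vf, sf, zf, vs, ss, zs):
    """Impedance-weighted interface state for a 1-D linear fluid-solid
    Riemann problem. Obtained by equating the fluid-side characteristic
    s + zf*v = sf + zf*vf with the solid-side s - zs*v = ss - zs*vs."""
    v = (zf * vf + zs * vs + sf - ss) / (zf + zs)
    s = (zs * sf + zf * ss + zf * zs * (vf - vs)) / (zf + zs)
    return v, s
```

Note the limiting behavior: as the solid impedance grows (a heavy solid), the interface velocity tends to the solid velocity, which is the regime where traditional partitioned schemes are already stable; the AMP construction matters precisely in the opposite, light-solid limit.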
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments, a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the long-time behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai
1998-01-01
This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
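For a small dense system, the exact elimination underlying the Schur complement can be written out directly, as below. This is illustrative only: the paper's contribution is precisely to *approximate* S cheaply as a coarse-space operator rather than form it exactly, and the function and variable names here are ours:

```python
import numpy as np

def schur_solve(A, b, interior, interface):
    """Solve A x = b via the 2 x 2 block reordering into interior
    (subdomain) and interface unknowns described in the abstract.
    S = A_BB - A_BI A_II^{-1} A_IB couples subdomains through the
    interface; here it is formed exactly (dense, for illustration)."""
    A_II = A[np.ix_(interior, interior)]
    A_IB = A[np.ix_(interior, interface)]
    A_BI = A[np.ix_(interface, interior)]
    A_BB = A[np.ix_(interface, interface)]
    b_I, b_B = b[interior], b[interface]

    # Schur complement on the interface unknowns.
    S = A_BB - A_BI @ np.linalg.solve(A_II, A_IB)
    # Solve for interface values first, then back-substitute interiors.
    x_B = np.linalg.solve(S, b_B - A_BI @ np.linalg.solve(A_II, b_I))
    x_I = np.linalg.solve(A_II, b_I - A_IB @ x_B)

    x = np.empty_like(b)
    x[interior], x[interface] = x_I, x_B
    return x
```

In a domain-decomposition preconditioner the interior solves A_II are independent per subdomain, which is what makes the approach parallel; only the (much smaller) interface system couples them.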
NASA Astrophysics Data System (ADS)
Peladeau-Pigeon, M.; Coolens, C.
2013-09-01
Dynamic contrast-enhanced computed tomography (DCE-CT) is an imaging tool that aids in evaluating functional characteristics of tissue at different stages of disease management: diagnostic, radiation treatment planning, treatment effectiveness, and monitoring. Clinical validation of DCE-derived perfusion parameters remains an outstanding problem to address prior to perfusion imaging becoming a widespread standard as a non-invasive quantitative measurement tool. One approach to this validation process has been the development of quality assurance phantoms in order to facilitate controlled perfusion ex vivo. However, most of these systems fail to establish and accurately replicate physiologically relevant capillary permeability and exchange performance. The current work presents the first step in the development of a prospective suite of physics-based perfusion simulations based on coupled fluid flow and particle transport phenomena, with the goal of enhancing the understanding of clinical contrast agent kinetics. Existing knowledge about a controllable, two-compartmental fluid exchange phantom was used to validate the computational fluid dynamics (CFD) simulation model presented herein. The sensitivity of CFD-derived contrast uptake curves to contrast injection parameters, including injection duration and flow rate, was quantified and found to be within 10% accuracy. The CFD model was employed to evaluate two commonly used clinical kinetic algorithms used to derive perfusion parameters: Fick's principle and the modified Tofts model. Neither kinetic model was able to capture the true transport phenomena it aimed to represent, but if the overall contrast concentration after injection remained identical, then successive DCE-CT evaluations could be compared and could indeed reflect differences in regional tissue flow. This study sets the groundwork for future explorations in phantom development and pharmacokinetic modelling, as well as the development of novel contrast
Direct Fourier Inversion Reconstruction Algorithm for Computed Laminography.
Voropaev, Alexey; Myagotin, Anton; Helfen, Lukas; Baumbach, Tilo
2016-05-01
Synchrotron radiation computed laminography (CL) was developed to complement conventional computed tomography as a non-destructive 3D imaging method for the inspection of flat, thin objects. Recent progress in hardware at synchrotron sources allows one to record the internal evolution of specimens at the micrometer scale and on sub-second time scales, but also requires increased reconstruction speed to follow structural changes online. A 3D image of the sample interior is usually reconstructed by the well-established filtered backprojection (FBP) approach. Despite great success in reducing reconstruction time via parallel computation, the FBP algorithm remains a time-consuming procedure. A promising way to significantly shorten computation time is to perform backprojection directly in the frequency domain (a direct Fourier inversion approach). The corresponding algorithms are rarely considered in the literature because of poor performance or inferior reconstruction quality resulting from inaccurate interpolation in the Fourier domain. In this paper, we derive a Fourier-based reconstruction equation designed for the CL scanning geometry. Furthermore, we outline the translation of the continuous solution to a discrete version, which utilizes 3D sinc interpolation. A projection resampling technique allowing for the reduction of the expensive interpolation to its 1D version is proposed. A series of numerical experiments confirms that the resulting image quality compares well with the FBP approach while reconstruction time is drastically reduced.
An efficient parallel algorithm for accelerating computational protein design
Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang
2014-01-01
Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we take steps to address the problem of excessive memory usage in A* search. As a result, our enhancements can achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm could be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
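The A*-over-rotamers search that the paper accelerates can be illustrated in miniature. The sketch below uses the standard admissible lower bound of DEE/A* design (each unassigned residue contributes its best-case self plus interaction energy); it is a toy serial version with hypothetical energy tables, not the OSPREY implementation or the paper's GPU variant:

```python
import heapq

def astar_gmec(self_e, pair_e):
    """A* search for the global minimum energy conformation (GMEC).
    self_e[i][r]: self energy of rotamer r at residue i.
    pair_e[i][j][ri][rj] (j < i): pairwise energy between rotamers."""
    n = len(self_e)

    def g(assign):  # exact energy of the assigned prefix
        e = sum(self_e[i][assign[i]] for i in range(len(assign)))
        e += sum(pair_e[i][j][assign[i]][assign[j]]
                 for i in range(len(assign)) for j in range(i))
        return e

    def h(assign):  # admissible bound: best case for each unassigned residue
        k, total = len(assign), 0.0
        for i in range(k, n):
            total += min(
                self_e[i][r]
                + sum(pair_e[i][j][r][assign[j]] for j in range(k))
                + sum(min(min(row) for row in pair_e[i][j])
                      for j in range(k, i))
                for r in range(len(self_e[i])))
        return total

    heap = [(h(()), ())]
    while heap:
        f, assign = heapq.heappop(heap)
        if len(assign) == n:
            return f, assign  # first complete node popped is optimal
        i = len(assign)
        for r in range(len(self_e[i])):
            a = assign + (r,)
            heapq.heappush(heap, (g(a) + h(a), a))
```

Because the heuristic never overestimates the remaining energy, the first fully assigned conformation popped from the heap is provably the GMEC; the paper's speedups come from computing these h values in parallel on a GPU.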
A Simple Physical Optics Algorithm Perfect for Parallel Computing
NASA Technical Reports Server (NTRS)
Imbriale, W. A.; Cwik, T.
1993-01-01
One of the simplest reflector antenna computer programs is based upon a discrete approximation of the radiation integral. This calculation replaces the actual reflector surface with a triangular facet representation so that the reflector resembles a geodesic dome. The Physical Optics (PO) current is assumed to be constant in magnitude and phase over each facet so the radiation integral is reduced to a simple summation. This program has proven to be surprisingly robust and useful for the analysis of arbitrary reflectors, particularly when the near-field is desired and surface derivatives are not known. Because of its simplicity, the algorithm has proven to be extremely easy to adapt to the parallel computing architecture of a modest number of large-grain computing elements such as are used in the Intel iPSC and Touchstone Delta parallel machines.
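The facet summation described above reduces, in scalar far-field form, to a phase-weighted sum over facet centers, which is why it parallelizes so cleanly: each processor can sum an independent subset of facets. The sketch below is a scalar simplification with illustrative names (the actual program carries full vector PO currents and polarization):

```python
import numpy as np

def po_farfield(facet_centers, facet_areas, currents, k, obs_dir):
    """Discrete PO radiation integral: the current is constant over each
    triangular facet, so the integral collapses to a summation.
    facet_centers: (N, 3) facet centroids; obs_dir: unit observation
    direction; k: wavenumber. Returns a complex far-field amplitude."""
    phase = np.exp(1j * k * facet_centers @ obs_dir)  # exp(j k r_hat . r_n)
    return np.sum(currents * facet_areas * phase)
```

Splitting `facet_centers` into chunks, summing each on a separate node, and adding the partial sums gives the same answer, which is the large-grain parallel decomposition the abstract refers to.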
Experimental methodology for computational fluid dynamics code validation
Aeschliman, D.P.; Oberkampf, W.L.
1997-09-01
Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.
Benchmarking computational fluid dynamics models for lava flow simulation
NASA Astrophysics Data System (ADS)
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi
2016-04-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.
High-order computational fluid dynamics tools for aircraft design
Wang, Z. J.
2014-01-01
Most forecasts predict an annual airline traffic growth rate between 4.5 and 5% in the foreseeable future. To sustain that growth, the environmental impact of aircraft cannot be ignored. Future aircraft must have much better fuel economy, dramatically less greenhouse gas emissions and noise, in addition to better performance. Many technical breakthroughs must take place to achieve the aggressive environmental goals set up by governments in North America and Europe. One of these breakthroughs will be physics-based, highly accurate and efficient computational fluid dynamics and aeroacoustics tools capable of predicting complex flows over the entire flight envelope and through an aircraft engine, and computing aircraft noise. Some of these flows are dominated by unsteady vortices of disparate scales, often highly turbulent, and they call for higher-order methods. As these tools will be integral components of a multi-disciplinary optimization environment, they must be efficient to impact design. Ultimately, the accuracy, efficiency, robustness, scalability and geometric flexibility will determine which methods will be adopted in the design process. This article explores these aspects and identifies pacing items. PMID:25024419
Computer aided lung cancer diagnosis with deep learning algorithms
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Qian, Wei
2016-03-01
Deep learning is considered a popular and powerful method in pattern recognition and classification. However, there are not many deep structured applications used in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After down-sampling and rotating we acquired 174412 samples of 52 by 52 pixels each and the corresponding truth files. Three deep learning algorithms were designed and implemented, including Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and Stacked Denoising Autoencoder (SDAE). To compare the performance of deep learning algorithms with a traditional computer-aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the nodules mislabeled by DBNs were 4% larger than those mislabeled by the traditional CADx; this might result from the down-sampling process losing some size information about the nodules.
Turbomachinery computational fluid dynamics: asymptotes and paradigm shifts.
Dawes, W N
2007-10-15
This paper reviews the development of computational fluid dynamics (CFD) specifically for turbomachinery simulations and with a particular focus on application to problems with complex geometry. The review is structured by considering this development as a series of paradigm shifts, followed by asymptotes. The original S1-S2 blade-blade-throughflow model is briefly described, followed by the development of two-dimensional then three-dimensional blade-blade analysis. This in turn evolved from inviscid to viscous analysis and then from steady to unsteady flow simulations. This development trajectory led over a surprisingly small number of years to an accepted approach-a 'CFD orthodoxy'. A very important current area of intense interest and activity in turbomachinery simulation is in accounting for real geometry effects, not just in the secondary air and turbine cooling systems but also associated with the primary path. The requirements here are threefold: capturing and representing these geometries in a computer model; making rapid design changes to these complex geometries; and managing the very large associated computational models on PC clusters. Accordingly, the challenges in the application of the current CFD orthodoxy to complex geometries are described in some detail. The main aim of this paper is to argue that the current CFD orthodoxy is on a new asymptote and is not in fact suited for application to complex geometries and that a paradigm shift must be sought. In particular, the new paradigm must be geometry centric and inherently parallel without serial bottlenecks. The main contribution of this paper is to describe such a potential paradigm shift, inspired by the animation industry, based on a fundamental shift in perspective from explicit to implicit geometry and then illustrate this with a number of applications to turbomachinery.
Analysis of sponge zones for computational fluid mechanics
Bodony, Daniel J. E-mail: bodony@stanford.edu
2006-03-01
The use of sponge regions, or sponge zones, which add the forcing term -σ(q - q_ref) to the right-hand side of the governing equations in computational fluid mechanics as an ad hoc boundary treatment is widespread. They are used to absorb and minimize reflections from computational boundaries and as forcing sponges to introduce prescribed disturbances into a calculation. A less common usage is as a means of extending a calculation from a smaller domain into a larger one, such as in computing the far-field sound generated in a localized region. By analogy to the penalty method of finite elements, the method is placed on a solid foundation, complete with estimates of convergence. The analysis generalizes the work of Israeli and Orszag [M. Israeli, S.A. Orszag, Approximation of radiation boundary conditions, J. Comp. Phys. 41 (1981) 115-135] and confirms their findings when applied as a special case to one-dimensional wave propagation in an absorbing sponge. It is found that the rate of convergence of the actual solution to the target solution, with an appropriate norm, is inversely proportional to the sponge strength. A detailed analysis for acoustic wave propagation in one dimension verifies the convergence rate given by the general theory. The exponential point-wise convergence derived by Israeli and Orszag in the high-frequency limit is recovered and found to hold over all frequencies. A weakly nonlinear analysis of the method when applied to Burgers' equation shows similar convergence properties. Three numerical examples are given to confirm the analysis: the acoustic extension of a two-dimensional time-harmonic point source, the acoustic extension of a three-dimensional initial-value problem of a sound pulse, and the introduction of unstable eigenmodes from linear stability theory into a two-dimensional shear layer.
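A minimal 1-D illustration of the sponge treatment analyzed here: append the damping term -σ(x)(q - q_ref) to an upwind advection step, with σ ramping up smoothly inside the sponge so a right-going pulse is absorbed before it can reflect. The grid sizes and the quadratic ramp below are arbitrary choices for the demo, not values from the paper:

```python
import numpy as np

def advect_with_sponge(n=200, steps=400, c=1.0, sigma_max=50.0):
    """Solve q_t + c q_x = -sigma(x) (q - q_ref) with first-order
    upwinding and q_ref = 0; the sponge occupies the last 20% of the
    domain and ramps quadratically from 0 to sigma_max."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    dt = 0.5 * dx / c                               # CFL = 0.5
    q = np.exp(-((x - 0.3) / 0.05) ** 2)            # Gaussian pulse
    sigma = np.where(x > 0.8, sigma_max * ((x - 0.8) / 0.2) ** 2, 0.0)
    for _ in range(steps):
        q[1:] -= dt * c * (q[1:] - q[:-1]) / dx     # upwind advection
        q -= dt * sigma * q                         # sponge forcing
    return q
```

By the end of the run the pulse has crossed the sponge and essentially vanished; consistent with the paper's analysis, how small the residual is depends on the integrated sponge strength the wave accumulates while traversing the zone.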
Norton, Tomás; Tiwari, Brijesh; Sun, Da Wen
2013-01-01
The design of thermal processes in the food industry has undergone great developments in the last two decades due to the availability of cheap computer power alongside advanced modelling techniques such as computational fluid dynamics (CFD). CFD uses numerical algorithms to solve the non-linear partial differential equations of fluid mechanics and heat transfer so that the complex mechanisms that govern many food-processing systems can be resolved. In thermal processing applications, CFD can be used to build three-dimensional models that are both spatially and temporally representative of a physical system to produce solutions with high levels of physical realism without the heavy costs associated with experimental analyses. Therefore, CFD is playing an ever-growing role in the optimization of conventional thermal processes as well as in the development of new ones in the food industry. This paper discusses the fundamental aspects involved in developing CFD solutions and forms a state-of-the-art review on various CFD applications in conventional as well as novel thermal processes. The challenges facing CFD modellers of thermal processes are also discussed. From this review it is evident that present-day CFD software, with its rich tapestries of mathematical physics, numerical methods and visualization techniques, is currently recognized as a formidable and pervasive technology which can permit comprehensive analyses of thermal processing.
Analysis of Drafting Effects in Swimming Using Computational Fluid Dynamics
Silva, António José; Rouboa, Abel; Moreira, António; Reis, Victor Machado; Alves, Francisco; Vilas-Boas, João Paulo; Marinho, Daniel Almeida
2008-01-01
The purpose of this study was to determine the effect of drafting distance on the drag coefficient in swimming. A k-epsilon turbulent model was implemented in the commercial code Fluent® and applied to the fluid flow around two swimmers in a drafting situation. Numerical simulations were conducted for various distances between swimmers (0.5-8.0 m) and swimming velocities (1.6-2.0 m.s-1). Drag coefficient (Cd) was computed for each one of the distances and velocities. We found that the drag coefficient of the leading swimmer decreased as the flow velocity increased. The relative drag coefficient of the back swimmer was lower (about 56% of the leading swimmer) for the smallest inter-swimmer distance (0.5 m). This value increased progressively until the distance between swimmers reached 6.0 m, where the relative drag coefficient of the back swimmer was about 84% of the leading swimmer. The results indicated that the Cd of the back swimmer was equal to that of the leading swimmer at distances ranging from 6.45 to 8.90 m. We conclude that these distances allow the swimmers to be in the same hydrodynamic conditions during training and competitions. Key points: The drag coefficient of the leading swimmer decreased as the flow velocity increased. The relative drag coefficient of the back swimmer was lowest (about 56% of the leading swimmer) for the smallest inter-swimmer distance (0.5 m). The drag coefficient values of both swimmers in drafting were equal at distances ranging between 6.45 m and 8.90 m, considering the different flow velocities. The numerical simulation techniques could be a good approach to enable the analysis of the fluid forces around objects in water, as happens in swimming. PMID:24150135
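For context, the drag coefficient reported in studies like this is the standard nondimensional quantity Cd = 2F / (ρ v² A). A small illustrative calculation follows; the force and area values are hypothetical, chosen only to show the arithmetic, and are not taken from the study.

```python
def drag_coefficient(force_n, velocity_ms, area_m2, rho=1000.0):
    """Cd = 2F / (rho * v^2 * A); rho defaults to water, 1000 kg/m^3."""
    return 2.0 * force_n / (rho * velocity_ms ** 2 * area_m2)

# Hypothetical values: a 60 N drag force at 1.8 m/s over a 0.1 m^2 frontal area.
cd_lead = drag_coefficient(60.0, 1.8, 0.1)
# Back swimmer at ~56% of the leader's drag force, per the study's closest spacing.
cd_draft = drag_coefficient(0.56 * 60.0, 1.8, 0.1)
```

Because Cd is linear in the force, a back swimmer experiencing 56% of the leader's drag force has 56% of the leader's drag coefficient at the same speed and area.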
Code Verification of the HIGRAD Computational Fluid Dynamics Solver
Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.
2012-05-04
The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited scope of the verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
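Convergence-rate estimation of the kind described here is commonly done by comparing errors against the analytical solution on successively refined meshes. A minimal sketch, not HIGRAD's actual verification harness: for a scheme of order p, refining the mesh by a ratio r should reduce the error by about r^p, so the observed order can be recovered from two error measurements.

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy: p = log(e_coarse / e_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# A second-order scheme should quarter its error when the mesh spacing is halved.
p = observed_order(4.0e-4, 1.0e-4, refinement_ratio=2.0)
```

Comparing the observed p against the scheme's formal order is the acceptance criterion in this style of code verification.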
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. Computing the inverse exactly would itself require iteration, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and over 4x with an average pixel position error of less than 13% of a pixel size for a 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation
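The division-free reciprocal idea can be sketched in a few lines of fixed-point arithmetic. This is an illustrative reconstruction, not the author's code: it uses a Q16.16 format and the classical linear seed (48/17) - (32/17)d for 1/d on d in [0.5, 1), optionally refined by one Newton-Raphson step r ← r(2 - dr), which needs only multiplications.

```python
SHIFT = 16  # Q16.16 fixed-point format: 16 integer bits, 16 fractional bits

def to_fixed(x):
    """Convert a float to Q16.16."""
    return int(round(x * (1 << SHIFT)))

def fixed_mul(a, b):
    """Fixed-point multiply: integer product, then rescale."""
    return (a * b) >> SHIFT

def reciprocal_seed(d):
    """Linear approximation of 1/d for d in [0.5, 1): no division required."""
    return to_fixed(48.0 / 17.0) - fixed_mul(to_fixed(32.0 / 17.0), d)

def newton_refine(r, d):
    """One Newton-Raphson step r <- r * (2 - d * r), multiplications only."""
    return fixed_mul(r, to_fixed(2.0) - fixed_mul(d, r))

d = to_fixed(0.75)
r = newton_refine(reciprocal_seed(d), d)  # approximates 1/0.75 = 1.333...
```

The linear seed alone lands within a few percent of the true reciprocal on the normalized interval; each Newton step roughly squares the relative error, all without a divide instruction.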
NASA Astrophysics Data System (ADS)
Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu
2015-09-01
We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms. We summarize the rules for the performance evaluation of parallel algorithms. We use model and real data from the Vinton salt dome to test the algorithms. We find a good match between the model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
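The performance metrics referred to here are typically speedup and parallel efficiency. A minimal sketch of the standard definitions; the timings in the demo are made up for illustration.

```python
def speedup(t_serial, t_parallel):
    """S = T1 / Tp: how many times faster the parallel run is than the serial one."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """E = S / p: fraction of ideal linear scaling actually achieved."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: 100 s serial, 25 s on 8 processes.
s = speedup(100.0, 25.0)        # 4x speedup
e = efficiency(100.0, 25.0, 8)  # 50% parallel efficiency
```

An efficiency well below 1 on a hybrid MPI+CUDA run usually points at communication or host-device transfer overhead rather than the kernel itself.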
A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS
Fine-scale Computational Fluid Dynamics (CFD) simulation of pollutant concentrations within roadway and building microenvironments is feasible using high performance computing. Unlike currently used regulatory air quality models, fine-scale CFD simulations are able to account rig...
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
Evaluation of Aircraft Platforms for SOFIA by Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Klotz, S. P.; Srinivasan, G. R.; VanDalsem, William (Technical Monitor)
1995-01-01
The selection of an airborne platform for the Stratospheric Observatory for Infrared Astronomy (SOFIA) is based not only on economic cost, but technical criteria, as well. Technical issues include aircraft fatigue, resonant characteristics of the cavity-port shear layer, aircraft stability, the drag penalty of the open telescope bay, and telescope performance. Recently, two versions of the Boeing 747 aircraft, viz., the -SP and -200 configurations, were evaluated by computational fluid dynamics (CFD) for their suitability as SOFIA platforms. In each configuration the telescope was mounted behind the wings in an open bay with nearly circular aperture. The geometry of the cavity, cavity aperture, and telescope was identical in both platforms. The aperture was located on the port side of the aircraft and the elevation angle of the telescope, measured with respect to the vertical axis, was 50 degrees. The unsteady, viscous, three-dimensional, aerodynamic and acoustic flow fields in the vicinity of SOFIA were simulated by an implicit, finite-difference Navier-Stokes flow solver (OVERFLOW) on a Chimera, overset grid system. The computational domain was discretized by structured grids. Computations were performed at wind-tunnel and flight Reynolds numbers corresponding to one free-stream flow condition (M = 0.85, angle of attack alpha = 2.5 degrees, and sideslip angle beta = 0 degrees). The computational domains consisted of twenty-nine (29) overset grids in the wind-tunnel simulations and forty-five (45) grids in the simulations run at cruise flight conditions. The maximum number of grid points in the simulations was approximately 4 x 10^6. Issues considered in the evaluation study included analysis of the unsteady flow field in the cavity, the influence of the cavity on the flow across empennage surfaces, the drag penalty caused by the open telescope bay, and the noise radiating from cavity surfaces and the cavity-port shear layer. Wind-tunnel data were also available to compare
Tsai, Ming-Chi; Tsui, Fu-Chiang; Wagner, Michael M
2007-10-11
Performing fast data analysis to detect disease outbreaks plays a critical role in real-time biosurveillance. In this paper, we describe and evaluate an Algorithm Distribution Manager Service (ADMS) based on grid technologies, which dynamically partitions and distributes detection algorithms across multiple computers. We compared the execution time to perform the analysis on a single computer and on a grid network (3 computing nodes) with and without using dynamic algorithm distribution. We found that algorithms with long runtimes completed approximately three times earlier in the distributed environment than on a single computer, while short-runtime algorithms performed worse in the distributed environment. A dynamic algorithm distribution approach also performed better than a static algorithm distribution approach. This pilot study shows great potential to reduce lengthy analysis time through dynamic algorithm partitioning and parallel processing, and provides the opportunity of distributing algorithms from a client to remote computers in a grid network.
Simulating the nasal cycle with computational fluid dynamics
Patel, Ruchin G.; Garcia, Guilherme J. M.; Frank-Ito, Dennis O.; Kimbell, Julia S.; Rhee, John S.
2015-01-01
Objectives: (1) Develop a method to account for the confounding effect of the nasal cycle when comparing pre- and post-surgery objective measures of nasal patency. (2) Illustrate this method by reporting objective measures derived from computational fluid dynamics (CFD) models spanning the full range of mucosal engorgement associated with the nasal cycle in two subjects. Study Design: Retrospective. Setting: Academic tertiary medical center. Subjects and Methods: A cohort of 24 nasal airway obstruction patients was reviewed to select the two patients with the greatest reciprocal change in mucosal engorgement between pre- and post-surgery computed tomography (CT) scans. Three-dimensional anatomic models were created based on the pre- and post-operative CT scans. Nasal cycling models were also created by gradually changing the thickness of the inferior turbinate, middle turbinate, and septal swell body. CFD was used to simulate airflow and to calculate nasal resistance and average heat flux. Results: Before accounting for the nasal cycle, Patient A appeared to have a paradoxical worsening nasal obstruction in the right cavity postoperatively. After accounting for the nasal cycle, Patient A had small improvements in objective measures postoperatively. The magnitude of the surgical effect also differed in Patient B after accounting for the nasal cycle. Conclusion: By simulating the nasal cycle and comparing models in similar congestive states, surgical changes in nasal patency can be distinguished from physiological changes associated with the nasal cycle. This ability can lead to more precise comparisons of pre- and post-surgery objective measures and potentially more accurate virtual surgery planning. PMID:25450411
Computational fluid dynamic design of rocket engine pump components
NASA Technical Reports Server (NTRS)
Chen, Wei-Chung; Prueger, George H.; Chan, Daniel C.; Eastland, Anthony H.
1992-01-01
Integration of computational fluid dynamics (CFD) for design and analysis of turbomachinery components is needed as the requirements of pump performance and reliability become more stringent for the new generation of rocket engine. A fast grid generator, designed specially for centrifugal pump impellers, which allows a turbomachinery designer to use CFD to optimize the component design, is presented. The CFD grid is directly generated from the impeller blade G-H blade coordinates. The grid points are first generated on the meridional plane with the desired clustering near the end walls. This is followed by the marching of grid points from the pressure side of one blade to the suction side of a neighboring blade. This fast grid generator has been used to optimize the consortium pump impeller design. A grid dependency study has been conducted for the consortium pump impeller. Two different grid sizes, one with 10,000 grid points and one with 80,000 grid points, were used for the grid dependency study. The effects of grid resolution on the turnaround time, including the grid generation and completion of the CFD analysis, are discussed. The impeller overall mass average performance is compared for different designs. Optimum design is achieved through systematic change of the design parameters. In conclusion, it is demonstrated that CFD can be effectively used not only for flow analysis but also for design and optimization of turbomachinery components.
Unsteady computational fluid dynamics in front crawl swimming.
Samson, Mathias; Bernard, Anthony; Monnet, Tony; Lacouture, Patrick; David, Laurent
2017-03-23
The development of codes and computing power currently allows the simulation of increasingly complex flows, especially in the turbulent regime. Swimming research should benefit from these technological advances to try to better understand the dynamic mechanisms involved in swimming. An unsteady Computational Fluid Dynamics (CFD) study is conducted in crawl, in order to analyse the propulsive forces generated by the hand and forearm. The k-ω SST turbulence model and an overset grid method have been used. The main objectives are to analyse the evolution of the hand-forearm propulsive forces and to explain this relative to the arm kinematics parameters. In order to validate our simulation model, the calculated forces and pressures were compared with several other experimental and numerical studies. A good agreement is found between our results and those of other studies. The hand is the segment that generates the most propulsive force during the aquatic stroke. As the pressure component is the main source of force, the orientation of the hand-forearm in the absolute coordinate system is an important kinematic parameter in the swimming performance. The propulsive forces are greatest when the angles of attack are high. CFD appears to be a very valuable tool for better analyzing the mechanisms of swimming performance and offers some promising developments, especially for optimizing the performance from a parametric study.
Computational fluid dynamics for turbomachinery internal air systems.
Chew, John W; Hills, Nicholas J
2007-10-15
Considerable progress in development and application of computational fluid dynamics (CFD) for aeroengine internal flow systems has been made in recent years. CFD is regularly used in industry for assessment of air systems, and the performance of CFD for basic axisymmetric rotor/rotor and stator/rotor disc cavities with radial throughflow is largely understood and documented. Incorporation of three-dimensional geometrical features and calculation of unsteady flows are becoming commonplace. Automation of CFD, coupling with thermal models of the solid components, and extension of CFD models to include both air system and main gas path flows are current areas of development. CFD is also being used as a research tool to investigate a number of flow phenomena that are not yet fully understood. These include buoyancy-affected flows in rotating cavities, rim seal flows and mixed air/oil flows. Large eddy simulation has shown considerable promise for the buoyancy-driven flows and its use for air system flows is expected to expand in the future.
Design of airborne wind turbine and computational fluid dynamics analysis
NASA Astrophysics Data System (ADS)
Anbreen, Faiqa
Wind energy is a promising alternative to depleting non-renewable sources. The height of wind turbines becomes a constraint to their efficiency. An airborne wind turbine can reach much higher altitudes and produce higher power due to high wind velocity and energy density. The focus of this thesis is to design a shrouded airborne wind turbine, capable of generating 70 kW to propel a leisure boat with a capacity of 8-10 passengers. The idea of designing an airborne turbine is to take advantage of the higher velocities in the atmosphere. The Solidworks model has been analyzed numerically using Computational Fluid Dynamics (CFD) software StarCCM+. The Unsteady Reynolds Averaged Navier Stokes Simulation (URANS) with k-epsilon turbulence model has been selected, to study the physical properties of the flow, with emphasis on the performance of the turbine and the increase in air velocity at the throat. The analysis has been done using two ambient velocities of 12 m/s and 6 m/s. At 12 m/s inlet velocity, the velocity of air at the turbine has been recorded as 16 m/s. The power generated by the turbine is 61 kW. At an inlet velocity of 6 m/s, the velocity of air at the turbine increased to 10 m/s. The power generated by the turbine is 25 kW.
Rethinking hospital general ward ventilation design using computational fluid dynamics.
Yam, R; Yuen, P L; Yung, R; Choy, T
2011-01-01
Indoor ventilation with good air quality control minimises the spread of airborne respiratory and other infections in hospitals. This article considers the role of ventilation in preventing and controlling infection in hospital general wards and identifies a simple and cost-effective ventilation design capable of reducing the chances of cross-infection. Computational fluid dynamic (CFD) analysis is used to simulate and compare the removal of microbes using a number of different ventilation systems. Instead of the conventional corridor air return arrangement used in most general wards, air return is rearranged so that ventilation is controlled from inside the ward cubicle. In addition to boosting the air ventilation rate, the CFD results reveal that ventilation performance and the removal of microbes can be significantly improved. These improvements are capable of matching the standards maintained in a properly constructed isolation room, though at much lower cost. It is recommended that the newly identified ventilation parameters be widely adopted in the design of new hospital general wards to minimise cross-infection. The proposed ventilation system can also be retrofitted in existing hospital general wards with far less disruption and cost than a full-scale refurbishment.
Improving flow distribution in influent channels using computational fluid dynamics.
Park, No-Suk; Yoon, Sukmin; Jeong, Woochang; Lee, Seungjae
2016-10-01
The flow distribution in an influent channel, where the inflow is split into each treatment process in a wastewater treatment plant, greatly affects the efficiency of the process, and a weir is the typical structure used for flow distribution; yet, to the authors' knowledge, there is a paucity of research on the flow distribution in an open channel with a weir. In this study, the influent channel of a real-scale wastewater treatment plant was used, installing a suppressed rectangular weir that has a horizontal crest crossing the full channel width. The flow distribution in the influent channel was analyzed using a validated computational fluid dynamics model to investigate (1) the comparison of single-phase and two-phase simulation, (2) the improved procedure of the prototype channel, and (3) the effect of the inflow rate on flow distribution. The results show that two-phase simulation is more reliable because it captures the free-surface fluctuations. Preventing short-circuit flow should be the first consideration when improving flow distribution, and differences in kinetic energy with inflow rate produce different flow distribution trends. The authors believe that this case study is helpful for improving flow distribution in an influent channel.
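The discharge over a suppressed rectangular weir of the kind installed here is conventionally estimated with the standard weir equation Q = (2/3) Cd b sqrt(2g) h^(3/2). A small sketch follows; the discharge coefficient 0.62 and the head and width in the demo are typical textbook values, not values from this study.

```python
import math

def suppressed_weir_flow(head_m, width_m, cd=0.62, g=9.81):
    """Discharge (m^3/s) over a suppressed rectangular weir spanning the full
    channel width: Q = (2/3) * Cd * b * sqrt(2g) * h^(3/2)."""
    return (2.0 / 3.0) * cd * width_m * math.sqrt(2.0 * g) * head_m ** 1.5

# Hypothetical case: 0.1 m head over a 2.0 m wide crest.
q_demo = suppressed_weir_flow(0.1, 2.0)
```

The 3/2-power dependence on head is why small free-surface fluctuations, which only a two-phase simulation resolves, can noticeably shift the split between outlets.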
Computational Fluid Dynamics Analysis of Flexible Duct Junction Box Design
Beach, R.; Prahl, D.; Lange, R.
2013-12-01
IBACOS explored the relationships between pressure and physical configurations of flexible duct junction boxes by using computational fluid dynamics (CFD) simulations to predict individual box parameters and total system pressure, thereby ensuring improved HVAC performance. Current Air Conditioning Contractors of America (ACCA) guidance (Group 11, Appendix 3, ACCA Manual D, Rutkowski 2009) allows for unconstrained variation in the number of takeoffs, box sizes, and takeoff locations. The only variables currently used in selecting an equivalent length (EL) are velocity of air in the duct and friction rate, given the first takeoff is located at least twice its diameter away from the inlet. This condition does not account for other factors impacting pressure loss across these types of fittings. For each simulation, the IBACOS team converted pressure loss within a box to an EL to compare variation in ACCA Manual D guidance to the simulated variation. IBACOS chose cases to represent flows reasonably correlating to flows typically encountered in the field and analyzed differences in total pressure due to increases in number and location of takeoffs, box dimensions, and velocity of air, and whether an entrance fitting is included. The team also calculated additional balancing losses for all cases due to discrepancies between intended outlet flows and natural flow splits created by the fitting. In certain asymmetrical cases, the balancing losses were significantly higher than in symmetrical cases where the natural splits were close to the targets. Thus, IBACOS has shown additional design constraints that can ensure better system performance.
A computational fluid dynamics model of viscous coupling of hairs.
Lewin, Gregory C; Hallam, John
2010-06-01
Arrays of arthropod filiform hairs form highly sensitive mechanoreceptor systems capable of detecting minute air disturbances, and it is unclear to what extent individual hairs interact with one another within sensor arrays. We present a computational fluid dynamics model for one or more hairs, coupled to a rigid-body dynamics model, for simulating both biological (e.g., a cricket cercal hair) and artificial MEMS-based systems. The model is used to investigate hair-hair interaction between pairs of hairs and quantify the extent of so-called viscous coupling. The results show that the extent to which hairs are coupled depends on the mounting properties of the hairs and the frequency at which they are driven. In particular, it is shown that for equal length hairs, viscous coupling is suppressed when they are driven near the natural frequency of the undamped system and the damping coefficient at the base is small. Further, for certain configurations, the motion of a hair can be enhanced by the presence of nearby hairs. The usefulness of the model in designing artificial systems is discussed.
Methodology for computational fluid dynamics code verification/validation
Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.
1995-07-01
The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool into a technology that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions has proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations will be discussed. Contrasts are drawn between the assumptions and actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices, whose entries come from a ring, Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
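The exact, error-free arithmetic the authors advocate can be illustrated with Euclidean division of polynomials over the rationals, the basic step of Euclidean elimination. A minimal sketch using Python's exact Fraction type, not the authors' computer-algebra implementation:

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Exact Euclidean division num = quot * den + rem.
    Polynomials are coefficient lists, highest-degree first."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den) and any(num):
        coeff = num[0] / den[0]          # exact rational leading-coefficient ratio
        quot.append(coeff)
        head = [a - coeff * b for a, b in zip(num, den)]
        num = head[1:] + num[len(den):]  # leading term cancels exactly
    return quot, num

# (x^2 + 3x + 2) / (x + 1) = (x + 2) with zero remainder, computed exactly.
q, r = poly_divmod([1, 3, 2], [1, 1])
```

Because every coefficient stays an exact rational, no rounding error can accumulate; the price, as the abstract notes, is possible intermediate expression swell.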
Block sparse Cholesky algorithms on advanced uniprocessor computers
Ng, E.G.; Peyton, B.W.
1991-12-01
As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
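A blocked, right-looking Cholesky factorization of the kind compared in the report can be sketched for the dense case in a few lines; the sparse, multifrontal and supernodal machinery is of course the hard part and is not shown.

```python
import numpy as np

def block_cholesky(a, bs=2):
    """Right-looking blocked Cholesky A = L L^T for a dense SPD matrix."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    for k in range(0, n, bs):
        kb = min(bs, n - k)
        # Factor the diagonal block: A11 = L11 L11^T.
        l11 = np.linalg.cholesky(a[k:k + kb, k:k + kb])
        a[k:k + kb, k:k + kb] = l11
        if k + kb < n:
            # Panel solve: L21 = A21 L11^{-T}.
            l21 = np.linalg.solve(l11, a[k + kb:, k:k + kb].T).T
            a[k + kb:, k:k + kb] = l21
            # Trailing symmetric update: A22 <- A22 - L21 L21^T (most of the flops).
            a[k + kb:, k + kb:] -= l21 @ l21.T
    return np.tril(a)

# Demo on a small random SPD matrix.
rng = np.random.default_rng(0)
m = rng.standard_normal((5, 5))
a_spd = m @ m.T + 5.0 * np.eye(5)
l = block_cholesky(a_spd, bs=2)
```

Blocking concentrates the work in the matrix-matrix trailing update, which is exactly the operation that runs near peak on cache-based uniprocessors.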
Using animation to help students learn computer algorithms.
Catrambone, Richard; Seay, A Fleming
2002-01-01
This paper compares the effects of graphical study aids and animation on the problem-solving performance of students learning computer algorithms. Prior research has found inconsistent effects of animation on learning, and we believe this is partly attributable to animations not being designed to convey key information to learners. We performed an instructional analysis of the to-be-learned algorithms and designed the teaching materials based on that analysis. Participants studied stronger or weaker text-based information about the algorithm, and then some participants additionally studied still frames or an animation. Across 2 studies, learners who studied materials based on the instructional analysis tended to outperform other participants on both near and far transfer tasks. Animation also aided performance, particularly for participants who initially read the weaker text. These results suggest that animation might be added to curricula as a way of improving learning without needing revisions of existing texts and materials. Actual or potential applications of this research include the development of animations for learning complex systems as well as guidelines for determining when animations can aid learning.
Tenth Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion, part 1
NASA Technical Reports Server (NTRS)
Williams, R. W. (Compiler)
1992-01-01
Experimental and computational fluid dynamic activities in rocket propulsion were discussed. The workshop was an open meeting of government, industry, and academia. A broad range of topics was covered, including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.
NASA Technical Reports Server (NTRS)
Williams, R. W. (Compiler)
1996-01-01
The purpose of the workshop was to discuss experimental and computational fluid dynamic activities in rocket propulsion and launch vehicles. The workshop was an open meeting for government, industry, and academia. A broad range of topics was covered, including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.
Algorithm-dependent fault tolerance for distributed computing
P. D. Hough; M. e. Goldsby; E. J. Walsh
2000-02-01
Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.
Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Brandt, Achi; Thomas, James L.; Diskin, Boris
2001-01-01
Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness, have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. This paper addresses the latter problem by attempting to attain optimal efficiency in solving the governing equations. Current CFD codes based on multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are typically able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work that is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for the Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the
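Textbook multigrid efficiency is easiest to see on a model problem. The sketch below is a plain V-cycle for the 1D Poisson equation with a weighted-Jacobi smoother, full-weighting restriction, and linear interpolation; it only illustrates the "small multiple of the residual-evaluation cost" behavior, not the distributed-relaxation scheme for RANS discussed in the paper (grid sizes and cycle counts are illustrative):

```python
import numpy as np

def residual(u, f, h):
    """r = f - A u for the 1D operator -u'' with zero Dirichlet BCs."""
    up = np.pad(u, 1)
    return f - (2*u - up[:-2] - up[2:]) / h**2

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoother."""
    for _ in range(sweeps):
        up = np.pad(u, 1)
        u = (1 - w)*u + w * 0.5 * (up[:-2] + up[2:] + h**2 * f)
    return u

def vcycle(u, f, h, nu=2):
    """One multigrid V-cycle for -u'' = f on (0,1); needs len(u) = 2^k - 1."""
    n = len(u)
    if n <= 3:                                # coarsest grid: solve directly
        A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        return np.linalg.solve(A, f)
    u = jacobi(u, f, h, nu)                   # pre-smooth
    r = residual(u, f, h)
    rc = 0.25*(r[:-2:2] + r[2::2]) + 0.5*r[1::2]   # full-weighting restriction
    ec = vcycle(np.zeros_like(rc), rc, 2*h, nu)    # coarse-grid correction
    e = np.zeros_like(u)
    e[1::2] = ec                              # linear-interpolation prolongation
    ecp = np.pad(ec, 1)
    e[0::2] = 0.5*(ecp[:-1] + ecp[1:])
    return jacobi(u + e, f, h, nu)            # post-smooth

# drive it on -u'' = pi^2 sin(pi x), whose solution is sin(pi x)
n, h = 63, 1.0/64
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(8):
    u = vcycle(u, f, h)
```

Each V(2,2) cycle costs only a handful of residual evaluations yet reduces the algebraic error by roughly an order of magnitude, which is the TME-style behavior the paper seeks for the full flow equations.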
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.
1986-01-01
A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
Survey of Computational Algorithms for MicroRNA Target Prediction
Yue, Dong; Liu, Hui; Huang, Yufei
2009-01-01
MicroRNAs (miRNAs) are 19 to 25 nucleotide non-coding RNAs known to possess important post-transcriptional regulatory functions. Identifying the target genes that miRNAs regulate is important for understanding their specific biological functions. Usually, miRNAs down-regulate target genes by binding to complementary sites in the 3' untranslated region (UTR) of the targets. In part due to the large number of miRNAs and potential targets, a purely experimental prediction effort would be extremely laborious and economically unfavorable. However, since the bindings of animal miRNAs are not a perfect one-to-one match with the complementary sites of their targets, it is difficult to predict targets of animal miRNAs by assessing their alignment to the 3' UTRs of potential targets. Consequently, sophisticated computational approaches for miRNA target prediction are considered essential methods in miRNA research. We surveyed most of the current computational miRNA target prediction algorithms in this paper. In particular, we provided a mathematical definition and formulated the problem of target prediction under the framework of statistical classification. Moreover, we summarized the features of miRNA-target pairs used in target prediction approaches and discussed these approaches according to two categories: the rule-based and the data-driven approaches. The rule-based approach derives the classifier mainly from biological prior knowledge and important observations from biological experiments, whereas the data-driven approach builds statistical models using the training data and makes predictions based on the models. Finally, we tested a few different algorithms on a set of experimentally validated true miRNA-target pairs [1] and a set of false miRNA-target pairs derived from a miRNA overexpression experiment [2]. Receiver Operating Characteristic (ROC) curves were drawn to show the performances of these algorithms. PMID:20436875
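Under the statistical-classification framing, evaluating a predictor reduces to sweeping a decision threshold over the scores it assigns to known true and false pairs. The sketch below computes ROC points and the area under the curve with plain NumPy; the scores and labels are synthetic stand-ins, not the validated miRNA-target data of [1] and [2]:

```python
import numpy as np

def roc_points(scores, labels):
    """(FPR, TPR) pairs obtained by sweeping a threshold from high to low
    over classifier scores; labels are 1 (true pair) or 0 (false pair)."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y) / y.sum()               # true-positive rate
    fpr = np.cumsum(1 - y) / (1 - y).sum()     # false-positive rate
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr, tpr):
    """Area under the ROC curve by the trapezoid rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# a predictor that separates the classes perfectly has AUC = 1
fpr, tpr = roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```

The same mechanics underlie the curves drawn in the survey; only the scoring function differs between the rule-based and data-driven predictors.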
Algorithmic support for commodity-based parallel computing systems.
Leung, Vitus Joseph; Bender, Michael A.; Bunde, David P.; Phillips, Cynthia Ann
2003-10-01
The Computational Plant or Cplant is a commodity-based distributed-memory supercomputer under development at Sandia National Laboratories. Distributed-memory supercomputers run many parallel programs simultaneously. Users submit their programs to a job queue. When a job is scheduled to run, it is assigned to a set of available processors. Job runtime depends not only on the number of processors but also on the particular set of processors assigned to it. Jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This report introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free list strategy previously used on Cplant. These new allocation strategies are implemented in Release 2.0 of the Cplant System Software that was phased into the Cplant systems at Sandia by May 2002. Experimental results then demonstrated that the average number of communication hops between the processors allocated to a job strongly correlates with the job's completion time. This report also gives processor-allocation algorithms for minimizing the average number of communication hops between the assigned processors for grid architectures. The associated clustering problem is as follows: given n points in R^d, find k points that minimize their average pairwise L1 distance. Exact and approximate algorithms are given for these optimization problems. One of these algorithms has been implemented on Cplant and will be included in Cplant System Software, Version 2.1, to be released. In more preliminary work, we suggest improvements to the scheduler separate from the allocator.
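The space-filling-curve idea can be sketched compactly: map each processor's grid coordinates to a 1D curve index, keep the free list sorted in curve order, and hand a job a run of curve-consecutive free nodes. The toy below uses a Z-order (Morton) curve and a simple index-spread heuristic; the function names and selection rule are illustrative, not the Cplant Release 2.0 implementation:

```python
def morton(x, y, bits=8):
    """Interleave the bits of (x, y) to get the Z-order (Morton) index,
    so nearby curve indices tend to be nearby on the 2D grid."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

def allocate(free, k):
    """Pick k free processors that are contiguous along the curve: choose
    the window of k curve-consecutive free nodes with the smallest
    Morton-index spread (a cheap locality proxy)."""
    curve = sorted(free, key=lambda p: morton(*p))
    best = min(range(len(curve) - k + 1),
               key=lambda i: morton(*curve[i + k - 1]) - morton(*curve[i]))
    return curve[best:best + k]
```

On a fully free 4x4 mesh this hands a 4-processor job one quadrant, i.e. a compact 2x2 block, which is exactly the locality behavior the report measures in communication hops.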
Explicit high-order noncanonical symplectic algorithms for ideal two-fluid systems
NASA Astrophysics Data System (ADS)
Xiao, Jianyuan; Qin, Hong; Morrison, Philip J.; Liu, Jian; Yu, Zhi; Zhang, Ruili; He, Yang
2016-11-01
An explicit high-order noncanonical symplectic algorithm for ideal two-fluid systems is developed. The fluid is discretized as particles in the Lagrangian description, while the electromagnetic fields and internal energy are treated as discrete differential form fields on a fixed mesh. With the assistance of Whitney interpolating forms [H. Whitney, Geometric Integration Theory (Princeton University Press, 1957); M. Desbrun et al., Discrete Differential Geometry (Springer, 2008); J. Xiao et al., Phys. Plasmas 22, 112504 (2015)], this scheme preserves the gauge symmetry of the electromagnetic field, and the pressure field is naturally derived from the discrete internal energy. The whole system is solved using the Hamiltonian splitting method discovered by He et al. [Phys. Plasmas 22, 124503 (2015)], which has been successfully adopted in constructing symplectic particle-in-cell schemes [J. Xiao et al., Phys. Plasmas 22, 112504 (2015)]. Because of its structure-preserving and explicit nature, this algorithm is especially suitable for large-scale simulations of multi-scale physics problems that require long-term fidelity and accuracy. The algorithm is verified via two tests: studies of the dispersion relation of waves in a two-fluid plasma system and the oscillating two-stream instability.
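The essence of Hamiltonian splitting can be shown on the simplest possible example: split H(q, p) = T(p) + V(q) into two exactly solvable flows and compose them symmetrically. The toy below does this for a unit harmonic oscillator (a stand-in only; the two-fluid Hamiltonian of the paper has many more pieces) and exhibits the bounded long-term energy error that motivates structure-preserving schemes:

```python
def strang_step(q, p, dt):
    """Symmetric (Strang) composition of the exact flows of V(q) = q^2/2
    and T(p) = p^2/2: half kick, drift, half kick. Each sub-flow is
    symplectic, so the composition is too."""
    p -= 0.5 * dt * q
    q += dt * p
    p -= 0.5 * dt * q
    return q, p

# over a very long run the energy error oscillates but does not drift
q, p, dt = 1.0, 0.0, 0.05
E0 = 0.5 * (q * q + p * p)
drift = 0.0
for _ in range(100000):
    q, p = strang_step(q, p, dt)
    drift = max(drift, abs(0.5 * (q * q + p * p) - E0))
```

A non-symplectic explicit scheme of the same order (e.g. forward Euler) would show secular energy growth on this run; the splitting scheme's error stays at the O(dt^2) level indefinitely, which is the property the paper relies on for long-term fidelity.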
Roos, M W; Wadbro, E; Berggren, M
2013-02-01
Intimal hyperplasia at the distal anastomosis is considered to be an important determinant of arterial and arteriovenous graft failure. The connection between unhealthy hemodynamics and intimal hyperplasia motivates the use of computational fluid dynamics modeling to search for improved graft designs. However, studies of the fluid mechanical impact on intimal hyperplasia at the suture line intrusion have previously been scarce. In the present work, we focus on intimal hyperplasia at the suture line and illustrate potential benefits from the introduction of a fluid deflector to shield the suture line from unhealthily high wall shear stress.
NASA Astrophysics Data System (ADS)
Kawamura, Kohei; Ueno, Yosuke; Nakamura, Yoshiaki
In the present study we have developed a numerical method to simulate the flight dynamics of a small flying body with unsteady motion, where both aerodynamics and flight dynamics are fully considered. A key point of this numerical code is to use computational fluid dynamics and computational flight dynamics at the same time, which is referred to as CFD2, or double CFDs, where several new ideas are adopted in the governing equations, the method to make each quantity nondimensional, and the coupling method between aerodynamics and flight dynamics. This numerical code can be applied to simulate the unsteady motion of small vehicles such as micro air vehicles (MAV). As a sample calculation, we take up Taketombo, or a bamboo dragonfly, and its free flight in the air is demonstrated. The eventual aim of this research is to virtually fly an aircraft with arbitrary motion to obtain aerodynamic and flight dynamic data, which cannot be taken in the conventional wind tunnel.
Cloud identification using genetic algorithms and massively parallel computation
NASA Technical Reports Server (NTRS)
Buckles, Bill P.; Petry, Frederick E.
1996-01-01
As a Guest Computational Investigator under the NASA administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used, although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for the Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). If one is willing to run the experiment several times (say, 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user
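The subpopulation-with-migration setup studied here can be sketched as an island-model GA. The toy below optimizes the standard OneMax problem with tournament selection, one-point crossover, point mutation, and periodic ring migration of each island's best individual; all parameters are illustrative, and nothing here touches the MasPar implementation or the AVHRR data:

```python
import random

def island_ga(bits=20, islands=4, pop=20, gens=60, migrate_every=10, seed=1):
    """Island-model GA on OneMax (maximize the number of 1-bits);
    returns the best fitness found across all islands."""
    rng = random.Random(seed)
    fit = sum                                          # OneMax fitness
    pops = [[[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop)]
            for _ in range(islands)]
    for g in range(gens):
        for isl in pops:                               # independent breeding pools
            nxt = []
            for _ in range(pop):
                a = max(rng.sample(isl, 3), key=fit)   # tournament selection
                b = max(rng.sample(isl, 3), key=fit)
                cut = rng.randrange(1, bits)           # one-point crossover
                child = a[:cut] + b[cut:]
                if rng.random() < 0.1:                 # point mutation
                    i = rng.randrange(bits)
                    child[i] ^= 1
                nxt.append(child)
            isl[:] = nxt
        if (g + 1) % migrate_every == 0:               # ring migration of elites
            elites = [max(isl, key=fit)[:] for isl in pops]
            for i, isl in enumerate(pops):
                isl[rng.randrange(pop)] = elites[(i - 1) % islands]
    return max(fit(ind) for isl in pops for ind in isl)
```

The migration interval and topology are exactly the kind of knobs whose effect on shared breeding pools the report quantifies experimentally.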
Simulation of Propellant Loading System Senior Design Implement in Computer Algorithm
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak
2010-01-01
Propellant loading from the Storage Tank to the External Tank is one of the very important and time consuming pre-launch ground operations for the launch vehicle. The propellant loading system is a complex integrated system involving many physical components, such as the storage tank filled with cryogenic fluid at a very low temperature, the long pipe line connecting the storage tank with the external tank, the external tank along with the flare stack, and vent systems for releasing the excess fuel. Some of the parameters useful for design purposes are the prediction of pre-chill time, loading time, amount of fuel lost, and the maximum pressure rise. The physics involved in the mathematical modeling is quite complex: the process is unsteady, there is phase change as some of the fuel passes from the liquid to the gas state, and there is conjugate heat transfer in the pipe walls as well as between the solid and fluid regions. The simulation is also tedious and time consuming. Overall, this is a complex system, and the objective of the work is the students' involvement in the parametric study and optimization of the numerical modeling used in the design of such a system. The students must first become familiar with the physical process, the related mathematics, and the numerical algorithm. The work involves exploring (i) improved algorithms to make the transient simulation computationally effective (reduced CPU time) and (ii) parametric studies to evaluate design parameters by changing the operational conditions.
Sæthre, Bjørn Steen; Hoffmann, Alex C; van der Spoel, David
2014-12-09
Some aspects of the use of order parameter fields in molecular dynamics simulations to delimit solid phases containing water, namely ice and hydrate, in both hydrophilic and hydrophobic fluids are examined; this includes the influences of rectangular meshes and of filtering on the quality of these parameters. Three order parameters are studied: the mass density, ρ; an angular tetrahedrality measure, Sg (Chau and Hardwick, Mol. Phys. 1998, 93, 511); and the water-dimer dihedral angle, F4 (Rodger et al. Fluid Phase Equilib. 1996, 116, 326). The parameters are studied to find their ability to distinguish between bulk phases, their consistency in different environments, their noise susceptibility, and their ability to demarcate the interface region. Spatial sampling and filtering are covered in detail, and some temporal features are illustrated by using autocorrelation maps. The parameters are employed to determine the position of interfaces as functions of time and, with the capillary wave fluctuation method (Hoyt et al. Phys. Rev. Lett. 2001, 86, 5530; Math. Comput. Simul. 2010, 80, 1382), to estimate solid-fluid interfacial stiffnesses, with partial success for the hydrophilic/hydrophobic-type interfaces.
Computational Fluid Dynamics Simulation of Dual Bell Nozzle Film Cooling
NASA Technical Reports Server (NTRS)
Braman, Kalen; Garcia, Christian; Ruf, Joseph; Bui, Trong
2015-01-01
Marshall Space Flight Center (MSFC) and Armstrong Flight Research Center (AFRC) are working together to advance the technology readiness level (TRL) of the dual bell nozzle concept. Dual bell nozzles are a form of altitude compensating nozzle that consists of two connecting bell contours. At low altitude the nozzle flows fully in the first, relatively lower area ratio, nozzle. The nozzle flow separates from the wall at the inflection point which joins the two bell contours. This relatively low expansion results in higher nozzle efficiency during the low altitude portion of the launch. As ambient pressure decreases with increasing altitude, the nozzle flow will expand to fill the relatively large area ratio second nozzle. The larger area ratio of the second bell enables higher Isp during the high altitude and vacuum portions of the launch. Despite a long history of theoretical consideration and promise towards improving rocket performance, dual bell nozzles have yet to be developed for practical use and have seen only limited testing. One barrier to use of dual bell nozzles is the lack of control over the nozzle flow transition from the first bell to the second bell during operation. A method that this team is pursuing to enhance the controllability of the nozzle flow transition is manipulation of the film coolant that is injected near the inflection between the two bell contours. Computational fluid dynamics (CFD) analysis is being run to assess the degree of control over nozzle flow transition generated via manipulation of the film injection. A cold flow dual bell nozzle, without film coolant, was tested over a range of simulated altitudes in 2004 in MSFC's nozzle test facility. Both NASA centers have performed a series of simulations of that dual bell to validate their computational models. Those CFD results are compared to the experimental results within this paper. MSFC then proceeded to add film injection to the CFD grid of the dual bell nozzle. A series of
Advanced entry guidance algorithm with landing footprint computation
NASA Astrophysics Data System (ADS)
Leavitt, James Aaron
The design and performance evaluation of an entry guidance algorithm for future space transportation vehicles is presented. The algorithm performs two functions: on-board trajectory planning and trajectory tracking. The planned longitudinal path is followed by tracking drag acceleration, as is done by the Space Shuttle entry guidance. Unlike the Shuttle entry guidance, lateral path curvature is also planned and followed. A new trajectory planning function for the guidance algorithm is developed that is suitable for suborbital entry and that significantly enhances the overall performance of the algorithm for both orbital and suborbital entry. In comparison with the previous trajectory planner, the new planner produces trajectories that are easier to track, especially near the upper and lower drag boundaries and for suborbital entry. The new planner accomplishes this by matching the vehicle's initial flight path angle and bank angle, and by enforcing the full three-degree-of-freedom equations of motion with control derivative limits. Insights gained from trajectory optimization results contribute to the design of the new planner, giving it near-optimal downrange and crossrange capabilities. Planned trajectories and guidance simulation results are presented that demonstrate the improved performance. Based on the new planner, a method is developed for approximating the landing footprint for entry vehicles in near real-time, as would be needed for an on-board flight management system. The boundary of the footprint is constructed from the endpoints of extreme downrange and crossrange trajectories generated by the new trajectory planner. The footprint algorithm inherently possesses many of the qualities of the new planner, including quick execution, the ability to accurately approximate the vehicle's glide capabilities, and applicability to a wide range of entry conditions. Footprints can be generated for orbital and suborbital entry conditions using a pre
Schiller, N K; Franz, T; Weerasekara, N S; Zilla, P; Reddy, B D
2010-12-01
Vascular anastomoses constitute a main factor in poor graft performance due to mismatches in distensibility between the host artery and the graft. This work aims at computational fluid-structure investigations of proximal and distal anastomoses of vein grafts and synthetic grafts. Finite element and finite volume models were developed and coupled with a user-defined algorithm. Emphasis was placed on the simplicity of the coupling algorithm. An artery and vein graft showed a larger dilation mismatch than an artery and synthetic graft. The vein graft distended nearly twice as much as the artery while the synthetic graft displayed only approximately half the arterial dilation. For the vein graft, luminal mismatching was aggravated by development of an anastomotic pseudo-stenosis. While this study focused on end-to-end anastomoses as a vehicle for developing the coupling algorithm, it may serve as useful point of departure for further investigations such as other anastomotic configurations, refined modelling of sutures and fully transient behaviour.
Noise reduction in selective computational ghost imaging using genetic algorithm
NASA Astrophysics Data System (ADS)
Zafari, Mohammad; Ahmadi-Kandjani, Sohrab; Kheradmand, Reza
2017-03-01
Recently, we have presented a selective computational ghost imaging (SCGI) method as an advanced technique for enhancing the security level of encrypted ghost images. In this paper, we propose a modified method to improve the quality of the ghost images reconstructed by the SCGI technique. The method is based on background subtraction using a genetic algorithm (GA), which eliminates background noise and gives background-free ghost images. Analysis of the universal image quality index using experimental data demonstrates the advantage of this modification. In particular, the calculated value of the image quality index for modified SCGI over 4225 realizations shows an 11-fold improvement with respect to the SCGI technique, and a 20-fold improvement in comparison with the conventional CGI technique.
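For reference, the universal image quality index of Wang and Bovik, which the paper uses to quantify the improvement, combines correlation, luminance, and contrast terms; computed globally over two arrays it reduces to the closed form below (a plain implementation on arbitrary arrays, not the paper's evaluation over 4225 realizations):

```python
import numpy as np

def uiqi(x, y):
    """Universal image quality index:
    Q = 4 * cov(x,y) * mean(x) * mean(y)
        / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2)),
    which equals 1 exactly when y matches x."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))
```

Any luminance or contrast distortion (e.g. the residual background that the GA subtraction removes) pulls the index below 1, which is why it serves as the paper's figure of merit.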
Development of computer algorithms for radiation treatment planning.
Cunningham, J R
1989-06-01
As a result of an analysis of data relating tissue response to radiation absorbed dose, the ICRU has recommended a target accuracy of +/- 5% for dose delivery in radiation therapy. This is a difficult overall objective to achieve because of the many steps that make up a course of radiotherapy. The calculation of absorbed dose is only one of these steps, and so to achieve an overall accuracy of better than +/- 5%, the accuracy in dose calculation must be better still. The physics behind the problem is sufficiently complicated that no exact method of calculation has been found, and consequently approximate solutions must be used. The development of computer algorithms for this task involves the search for better and better approximate solutions. To achieve the desired target accuracy, a fairly sophisticated calculation procedure must be used. Only when this is done can we hope to further improve our knowledge of the way in which tissues respond to radiation treatments.
Scalability of preconditioners as a strategy for parallel computation of compressible fluid flow
Hansen, G.A.
1996-05-01
Parallel implementations of a Newton-Krylov-Schwarz algorithm are used to solve a model problem representing low Mach number compressible fluid flow over a backward-facing step. The Mach number is specifically selected to result in a numerically "stiff" matrix problem, based on an implicit finite volume discretization of the compressible 2D Navier-Stokes/energy equations using primitive variables. Newton's method is used to linearize the discrete system, and a preconditioned Krylov projection technique is used to solve the resulting linear system. Domain decomposition enables the development of a global preconditioner via the parallel construction of contributions derived from subdomains. Formation of the global preconditioner is based upon additive and multiplicative Schwarz algorithms, with and without subdomain overlap. The degree of parallelism of this technique is further enhanced with the use of a matrix-free approximation for the Jacobian used in the Krylov technique (in this case, GMRES(k)). Of paramount interest to this study is the implementation and optimization of these techniques on parallel shared-memory hardware, namely the Cray C90 and SGI Challenge architectures. These architectures were chosen as representative and commonly available to researchers interested in the solution of problems of this type. The Newton-Krylov-Schwarz solution technique is increasingly being investigated for computational fluid dynamics (CFD) applications due to the advantages of full coupling of all variables and equations, rapid non-linear convergence, and moderate memory requirements. A parallel version of this method that scales effectively on the above architectures would be extremely attractive to practitioners, resulting in efficient, cost-effective, parallel solutions exhibiting the benefits of the solution technique.
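The matrix-free ingredient is compact to illustrate: GMRES only needs products J*v, and those can be approximated with one extra residual evaluation, so the Jacobian is never formed. The serial sketch below implements a bare Arnoldi/GMRES and a finite-difference Newton-Krylov loop on a small nonlinear system; it omits the Schwarz preconditioning, restarts, and parallelism that are the substance of the study:

```python
import numpy as np

def gmres_mf(matvec, b, m=30):
    """Minimal full-memory GMRES on a matrix-free operator (no restarts):
    build an Arnoldi basis of the Krylov space, then solve the small
    Hessenberg least-squares problem."""
    n, beta = len(b), np.linalg.norm(b)
    if beta == 0.0:
        return np.zeros(n)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta
    k = m
    for j in range(m):
        w = matvec(Q[:, j])
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # happy breakdown: exact solve
            k = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Q[:, :k] @ y

def jfnk(F, x, eps=1e-7, tol=1e-10, iters=50):
    """Jacobian-free Newton-Krylov: each Newton step solves J dx = -F(x)
    with GMRES, where J*v is approximated by (F(x + eps*v) - F(x))/eps."""
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jv = lambda v, x=x, Fx=Fx: (F(x + eps * v) - Fx) / eps
        x = x + gmres_mf(Jv, -Fx)
    return x
```

Each Jacobian-vector product costs one residual evaluation, which is what makes the approach attractive when forming and storing the Jacobian of a coupled CFD discretization is prohibitive.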
Parallel multiphysics algorithms and software for computational nuclear engineering
NASA Astrophysics Data System (ADS)
Gaston, D.; Hansen, G.; Kadioglu, S.; Knoll, D. A.; Newman, C.; Park, H.; Permann, C.; Taitano, W.
2009-07-01
There is a growing trend in nuclear reactor simulation to consider multiphysics problems. This can be seen in reactor analysis where analysts are interested in coupled flow, heat transfer and neutronics, and in fuel performance simulation where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. These more ambitious simulations usually motivate some level of parallel computing. Many of the coupling efforts to date utilize simple code coupling or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on separate grids which are coupled via simple interpolation, again leaving open questions of stability and accuracy. Utilizing state of the art mathematics and software development techniques we are deploying next generation tools for nuclear engineering applications. The Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning provide the underlying mathematical structure for our tools. JFNK is understood to be a modern multiphysics algorithm, but we are also utilizing its unique properties as a scale bridging algorithm. To facilitate rapid development of multiphysics applications we have developed the Multiphysics Object-Oriented Simulation Environment (MOOSE). Examples from two MOOSE-based applications: PRONGHORN, our multiphysics gas cooled reactor simulation tool and BISON, our multiphysics, multiscale fuel performance simulation tool will be presented.
Parallel Multiphysics Algorithms and Software for Computational Nuclear Engineering
D. Gaston; G. Hansen; S. Kadioglu; D. A. Knoll; C. Newman; H. Park; C. Permann; W. Taitano
2009-08-01
There is a growing trend in nuclear reactor simulation to consider multiphysics problems. This can be seen in reactor analysis where analysts are interested in coupled flow, heat transfer and neutronics, and in fuel performance simulation where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. These more ambitious simulations usually motivate some level of parallel computing. Many of the coupling efforts to date utilize simple 'code coupling' or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on separate grids which are coupled via simple interpolation, again leaving open questions of stability and accuracy. Utilizing state of the art mathematics and software development techniques we are deploying next generation tools for nuclear engineering applications. The Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning provide the underlying mathematical structure for our tools. JFNK is understood to be a modern multiphysics algorithm, but we are also utilizing its unique properties as a scale bridging algorithm. To facilitate rapid development of multiphysics applications we have developed the Multiphysics Object-Oriented Simulation Environment (MOOSE). Examples from two MOOSE based applications: PRONGHORN, our multiphysics gas cooled reactor simulation tool and BISON, our multiphysics, multiscale fuel performance simulation tool will be presented.
Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M
2015-09-15
A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119 fold increase in performance is achieved compared with the serial code while the use of two GPUs leads to a 69-287 fold improvement and three GPUs yield a 101-377 fold speedup.
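The neighbor list that restores near-linear scaling can be illustrated with a serial cell list: bin particles into cells no smaller than the cutoff, then test only the 27 surrounding cells per particle instead of all pairs. This is a CPU sketch of the data structure only; the paper's GPU implementation, memory layout, and list-update policy are considerably more involved:

```python
import numpy as np

def cell_neighbor_list(pos, box, rcut):
    """Cell-list neighbor search in a cubic periodic box of side `box`:
    returns the set of pairs (i, j), i < j, within the cutoff under the
    minimum-image convention. Cost is O(N) for uniform density, versus
    O(N^2) for the brute-force double loop."""
    nc = max(1, int(box // rcut))              # cells per dimension
    side = box / nc
    cells = {}
    for i, p in enumerate(pos):                # bin particles into cells
        key = tuple((p // side).astype(int) % nc)
        cells.setdefault(key, []).append(i)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):                  # scan the 27 surrounding cells
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % nc, (cy + dy) % nc, (cz + dz) % nc)
                    for i in members:
                        for j in cells.get(nb, ()):
                            if i < j:
                                d = pos[i] - pos[j]
                                d -= box * np.round(d / box)   # minimum image
                                if d @ d < rcut * rcut:
                                    pairs.add((i, j))
    return pairs
```

On a GPU the same binning is typically done with a parallel sort by cell index, but the pruning logic is identical.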
NASA Technical Reports Server (NTRS)
Tezduyar, Tayfun E.
1998-01-01
This is the final report on our work at the University of Minnesota. The report describes our research progress and accomplishments in the development of high-performance computing methods and tools for 3D finite element computation of aerodynamic characteristics and fluid-structure interactions (FSI) arising in airdrop systems, namely ram-air parachutes and round parachutes. This class of simulations involves complex geometries, flexible structural components, deforming fluid domains, and unsteady flow patterns. The key components of our simulation toolkit are a stabilized finite element flow solver, a nonlinear structural dynamics solver, an automatic mesh moving scheme, and an interface between the fluid and structural solvers; all of these have been developed within a parallel message-passing paradigm.
NASA Technical Reports Server (NTRS)
1992-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, fluid mechanics including fluid dynamics, acoustics, and combustion, aerodynamics, and computer science during the period 1 Apr. 1992 - 30 Sep. 1992 is summarized.
A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.
Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W
2009-03-01
We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.
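The final step of the method, extracting likely connection paths from the simulated fluid velocity field, amounts to tracing trajectories through a vector field. A generic sketch of path tracing by explicit Euler integration (the rotation field and step size are illustrative; they are not the paper's connectivity metric):

```python
import math

def streamline(v, p0, h=0.01, steps=100):
    """Trace a path through a 2-D velocity field v(p) by explicit Euler steps."""
    x, y = p0
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = v((x, y))
        x, y = x + h * vx, y + h * vy
        path.append((x, y))
    return path

# Solid-body rotation: ideal streamlines are circles around the origin
path = streamline(lambda p: (-p[1], p[0]), (1.0, 0.0))
r_end = math.hypot(*path[-1])   # stays near radius 1 (Euler drifts slightly outward)
```

A production tractography code would use a higher-order integrator and a stopping criterion based on the local field; this sketch only shows the tracing step.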
A theoretical and computational investigation of complex fluids and glasses
NASA Astrophysics Data System (ADS)
Lombardo, Thomas G.
The present dissertation employs molecular simulation and statistical mechanical theories to investigate complex fluids and glasses. The goal is to gain a molecular-level understanding of some of the phenomena present in supercooled liquids, confined water, proteins, glasses, and chiral compounds. The first study examines translational and rotational diffusion in a model of the fragile glass former ortho-terphenyl. The primary objectives are to identify evidence of spatially heterogeneous dynamics and compare how the rates of translation and rotation change relative to one another as the liquid is deeply supercooled. In addition, the breakdown of the Debye model of rotation is analyzed and an alternative formulation of rotational motion is presented. The next study investigates the structure and mechanical properties of glassy water confined between silica-based surfaces with continuously tunable hydrophobicity and hydrophilicity, by computing and analyzing minimum energy and quenched configurations. The maximum sustainable transverse and normal stresses of thin water films are calculated and compared for various confining surfaces. In addition, the mode of mechanical failure is characterized as adhesive or cohesive depending on the strength of interactions between water and the confining surfaces. The third study calculates atomic-level stresses on protein atoms after vitrification. Particular attention is paid to how these stresses change between an equilibrium state at ambient conditions and the quenched or glassy state. The possible effects on protein secondary structure are also analyzed to gauge the ability of these stresses to disrupt the interactions that stabilize the secondary structure of proteins. The final portion of the dissertation formulates a two-dimensional lattice model to study the equilibrium phase behavior of a ternary mixture composed of two enantiomeric forms of a chiral molecule and a non-chiral liquid solvent. The phase behavior of the
Computational Fluid Dynamic simulations of pipe elbow flow.
Homicz, Gregory Francis
2004-08-01
One problem facing today's nuclear power industry is flow-accelerated corrosion and erosion in pipe elbows. The Korean Atomic Energy Research Institute (KAERI) is performing experiments in their Flow-Accelerated Corrosion (FAC) test loop to better characterize these phenomena, and develop advanced sensor technologies for the condition monitoring of critical elbows on a continuous basis. In parallel with these experiments, Sandia National Laboratories is performing Computational Fluid Dynamic (CFD) simulations of the flow in one elbow of the FAC test loop. The simulations are being performed using the FLUENT commercial software developed and marketed by Fluent, Inc. The model geometry and mesh were created using the GAMBIT software, also from Fluent, Inc. This report documents the results of the simulations that have been made to date; baseline results employing the RNG k-ε turbulence model are presented. The predicted value for the diametrical pressure coefficient is in reasonably good agreement with published correlations. Plots of the velocities, pressure field, wall shear stress, and turbulent kinetic energy adjacent to the wall are shown within the elbow section. Somewhat to our surprise, these indicate that the maximum values of both wall shear stress and turbulent kinetic energy occur near the elbow entrance, on the inner radius of the bend. Additional simulations were performed for the same conditions, but with the RNG k-ε model replaced by either the standard k-ε or the realizable k-ε turbulence model. The predictions using the standard k-ε model are quite similar to those obtained in the baseline simulation. However, with the realizable k-ε model, more significant differences are evident. The maxima in both wall shear stress and turbulent kinetic energy now appear on the outer radius, near the elbow exit, and are approximately 11% and 14% greater, respectively, than those predicted in the baseline calculation.
Computational fluid dynamics analysis of aerosol deposition in pebble beds
NASA Astrophysics Data System (ADS)
Mkhosi, Margaret Msongi
2007-12-01
The Pebble Bed Modular Reactor is a high temperature gas cooled reactor which uses helium gas as a coolant. The reactor uses spherical graphite pebbles as fuel. The fuel design is inherently resistant to the release of the radioactive material up to high temperatures; therefore, the plant can withstand a broad spectrum of accidents with limited release of radionuclides to the environment. Despite the safety features of the concept, these reactors still contain large inventories of radioactive materials. The transport of most of the radioactive materials in an accident occurs in the form of aerosol particles. In this dissertation, the limits of applicability of the existing computational fluid dynamics code FLUENT to the prediction of aerosol transport have been explored. The code was run using the Reynolds Averaged Navier-Stokes turbulence models to determine the effects of different turbulence models on the prediction of aerosol particle deposition. Analyses were performed for up to three unit cells in the orthorhombic configuration. For low flow conditions representing natural circulation driven flow, the laminar flow model was used and the results were compared with existing experimental data for packed beds. The results compare well with experimental data in the low flow regime. For conditions corresponding to normal operation of the reactor, analyses were performed using the standard k-ɛ turbulence model. From the inertial deposition results, a correlation has been developed that can be used to estimate the deposition of aerosol particles within pebble beds given inlet flow conditions. These results were converted into a dimensionless form as a function of a modified Stokes number. Based on results obtained in the laminar regime and for individual pebbles, the correlation developed for the inertial impaction component of deposition is believed to be credible. The form of the correlation developed also allows these results to be applied to pebble beds of different
NASA Technical Reports Server (NTRS)
Williams, R. W. (Compiler)
1996-01-01
This conference publication includes various abstracts and presentations given at the 13th Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion and Launch Vehicle Technology held at the George C. Marshall Space Flight Center, April 25-27, 1995. The purpose of the workshop was to discuss experimental and computational fluid dynamic activities in rocket propulsion and launch vehicles. The workshop was an open meeting for government, industry, and academia. A broad number of topics were discussed including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.
Tenth Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion, part 2
NASA Technical Reports Server (NTRS)
Williams, R. W. (Compiler)
1992-01-01
Presented here are 59 abstracts and presentations and three invited presentations given at the Tenth Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion held at the George C. Marshall Space Flight Center, April 28-30, 1992. The purpose of the workshop is to discuss experimental and computational fluid dynamic activities in rocket propulsion. The workshop is an open meeting for government, industry, and academia. A broad number of topics are discussed, including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.
Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion, Part 1
NASA Technical Reports Server (NTRS)
Williams, Robert W. (Compiler)
1993-01-01
Conference publication includes 79 abstracts and presentations given at the Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion held at the George C. Marshall Space Flight Center, April 20-22, 1993. The purpose of this workshop is to discuss experimental and computational fluid dynamic activities in rocket propulsion. The workshop is an open meeting for government, industry, and academia. A broad number of topics are discussed including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.
An algorithm for computing the 2D structure of fast rotating stars
Rieutord, Michel; Espinosa Lara, Francisco; Putigny, Bertrand
2016-08-01
Stars may be understood as self-gravitating masses of a compressible fluid whose radiative cooling is compensated by nuclear reactions or gravitational contraction. The understanding of their time evolution requires the use of detailed models that account for a complex microphysics including that of opacities, equation of state and nuclear reactions. The present stellar models are essentially one-dimensional, namely spherically symmetric. However, the interpretation of recent data like the surface abundances of elements or the distribution of internal rotation has reached the limits of validity of one-dimensional models because of their very simplified representation of large-scale fluid flows. In this article, we describe the ESTER code, which is the first code able to compute in a consistent way a two-dimensional model of a fast rotating star including its large-scale flows. Compared to classical 1D stellar evolution codes, many numerical innovations have been introduced to deal with this complex problem. First, a spectral discretization based on spherical harmonics and Chebyshev polynomials is used to represent the 2D axisymmetric fields. A nonlinear mapping of the spheroidal stellar domain allows a smooth spectral representation of the fields. The properties of Picard and Newton iterations for solving the nonlinear partial differential equations of the problem are discussed. It turns out that the Picard scheme is efficient for computing simple polytropic stars, but the Newton algorithm is unsurpassed when stellar models include complex microphysics. Finally, we discuss the numerical efficiency of the linear solver used in the Newton iterations, which combines the iterative Conjugate Gradient Squared algorithm with an LU factorization serving as a preconditioner of the Jacobian matrix.
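To give a feel for the Chebyshev side of the spectral discretization mentioned above, here is the standard Chebyshev differentiation matrix on Gauss-Lobatto points (following Trefethen's classic construction; this is generic spectral-method machinery, not code from ESTER):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and N+1 Gauss-Lobatto points on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)          # endpoint weights with alternating sign
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))              # negative row sums fix the diagonal
    return D, x

D, x = cheb(24)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))   # spectral (near machine) accuracy
```

Applied to a smooth function such as sin, the derivative error decays exponentially with N, which is the property that makes a few dozen radial points sufficient in spectral stellar models.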
A high-speed algorithm for computation of fractional differentiation and fractional integration.
Fukunaga, Masataka; Shimizu, Nobuyuki
2013-05-13
A high-speed algorithm for computing fractional differentiations and fractional integrations in fractional differential equations is proposed. In this algorithm, the stored data are not the function to be differentiated or integrated but the weighted integrals of the function. The intervals of integration for the memory can be increased without loss of accuracy as the number of computed time steps n increases. The computing cost varies as n log n, as opposed to the n² cost of standard algorithms.
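For context, the standard algorithms the paper improves on keep the full function history, so a march over n time steps costs O(n²). A sketch of that baseline, the Grünwald-Letnikov approximation of a fractional derivative (the test function and step size are illustrative):

```python
import math

def gl_weights(alpha, n):
    """Binomial weights (-1)^k * C(alpha, k) via the standard recurrence."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of D^alpha f at t; needs the whole
    history f(t), f(t-h), ..., f(0), which is what makes a full time-march O(n^2)."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h ** alpha

# Half-derivative of f(t) = t; the exact value is 2*sqrt(t/pi)
d = gl_fractional_derivative(lambda x: x, 1.0, 0.5)
```

The paper's scheme replaces this ever-growing sum with stored weighted integrals over coarsening intervals, bringing the cost down to n log n.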
Target Impact Detection Algorithm Using Computer-aided Design (CAD) Model Geometry
2014-09-01
UNCLASSIFIED. Technical Report ARMET-TR-13024 (AD-E403 558). This report documents a method and algorithm to export geometry from a three-dimensional, computer-aided design (CAD) model in a format that can be
Desai, Bhargav; Hsu, Ying; Schneller, Benjamin; Hobbs, Jonathan G; Mehta, Ankit I; Linninger, Andreas
2016-09-01
Aquaporin-4 (AQP4) channels play an important role in brain water homeostasis. Water transport across plasma membranes has a critical role in brain water exchange of the normal and the diseased brain. AQP4 channels are implicated in the pathophysiology of hydrocephalus, a disease of water imbalance that leads to CSF accumulation in the ventricular system. Many molecular aspects of fluid exchange during hydrocephalus have yet to be firmly elucidated, but review of the literature suggests that modulation of AQP4 channel activity is a potentially attractive future pharmaceutical therapy. Drug therapy targeting AQP channels may enable control over water exchange to remove excess CSF through a molecular intervention instead of by mechanical shunting. This article is a review of a vast body of literature on the current understanding of AQP4 channels in relation to hydrocephalus, details regarding molecular aspects of AQP4 channels, possible drug development strategies, and limitations. Advances in medical imaging and computational modeling of CSF dynamics in the setting of hydrocephalus are summarized. Algorithmic developments in computational modeling continue to deepen the understanding of the hydrocephalus disease process and display promising potential benefit as a tool for physicians to evaluate patients with hydrocephalus.
A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, containing classical finite volume and standard DG methods as two special cases and thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
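The least-squares reconstruction idea can be illustrated in miniature: recover a cell's solution gradient from differences to its neighbors. A toy 2D sketch (the cell positions and linear test field are illustrative; the paper reconstructs quadratic polynomials from DG moments, not just gradients):

```python
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least-squares fit of grad u at a cell from neighbor differences."""
    g, *_ = np.linalg.lstsq(xn - xc, un - uc, rcond=None)
    return g

xc = np.array([0.0, 0.0]); uc = 1.0                 # cell centre and its value
xn = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.5], [0.7, -0.3]])
un = 1.0 + 2.0 * xn[:, 0] + 3.0 * xn[:, 1]          # linear field u = 1 + 2x + 3y
g = ls_gradient(xc, uc, xn, un)                     # recovers [2.0, 3.0]
```

For a linear field the least-squares fit is exact for any non-degenerate neighbor stencil, which is the consistency property a reconstruction scheme must satisfy.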
Tan, Germaine Xin Yi; Jamil, Muhammad; Tee, Nicole Gui Zhen; Zhong, Liang; Yap, Choon Hwai
2015-11-01
Recent animal studies have provided evidence that prenatal blood flow fluid mechanics may play a role in the pathogenesis of congenital cardiovascular malformations. To further this research, it is important to have an imaging technique for small animal embryos with sufficient resolution to support computational fluid dynamics studies, and that is also non-invasive and non-destructive to allow for subject-specific, longitudinal studies. In the current study, we developed such a technique, based on ultrasound biomicroscopy scans on chick embryos. Our technique included a motion cancelation algorithm to negate embryonic body motion, a temporal averaging algorithm to differentiate blood spaces from tissue spaces, and 3D reconstruction of blood volumes in the embryo. The accuracy of the reconstructed models was validated with direct stereoscopic measurements. A computational fluid dynamics simulation was performed to model fluid flow in the generated construct of a Hamburger-Hamilton (HH) stage 27 embryo. Simulation results showed that there were divergent streamlines and a low shear region at the carotid duct, which may be linked to the carotid duct's eventual regression and disappearance by HH stage 34. We show that our technique has sufficient resolution to produce accurate geometries for computational fluid dynamics simulations to quantify embryonic cardiovascular fluid mechanics.
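The temporal-averaging idea, separating blood spaces (whose speckle fluctuates between frames) from static tissue, can be sketched with a simple per-voxel variance threshold. The synthetic frames and threshold below are illustrative, not the paper's actual algorithm:

```python
import numpy as np

def temporal_variance_mask(frames, thresh):
    """Mark voxels whose intensity varies strongly across time frames."""
    return np.var(frames, axis=0) > thresh

frames = np.zeros((8, 16, 16))
frames[:, :, :] = 0.5                 # static "tissue" background
frames[::2, 4:8, 4:8] = 1.0           # flickering "blood" region in even frames
mask = temporal_variance_mask(frames, thresh=0.01)
```

The blood region's voxels alternate between 0.5 and 1.0 (variance 0.0625), while tissue voxels are constant (variance 0), so a threshold between the two cleanly segments the 4x4 blood patch.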
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
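A minimal sketch of the preconditioned conjugate gradient method developed for such systems, here with the simplest (Jacobi, diagonal) preconditioner on a 1D finite-difference model problem; both the test system and the preconditioner choice are illustrative:

```python
import numpy as np

def pcg(A, b, tol=1e-10, maxit=500):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner."""
    Minv = 1.0 / np.diag(A)              # apply M^{-1} = inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Poisson finite-difference system, the classic elliptic model problem
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b_rhs = np.ones(n)
x_sol = pcg(A, b_rhs)
```

The matrix-vector product A @ p and the diagonal scaling are the two kernels that parallelize naturally on array architectures, which is why this iteration was attractive for the machines studied in the report.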
NASA Astrophysics Data System (ADS)
Ishii, Katsuya
2011-08-01
This issue includes a special section on computational fluid dynamics (CFD) in memory of the late Professor Kunio Kuwahara, who passed away on 15 September 2008, at the age of 66. In this special section, five articles are included that are based on the lectures and discussions at `The 7th International Nobeyama Workshop on CFD: To the Memory of Professor Kuwahara' held in Tokyo on 23 and 24 September 2009. Professor Kuwahara started his research in fluid dynamics under Professor Imai at the University of Tokyo. His first paper was published in 1969 with the title 'Steady Viscous Flow within Circular Boundary', with Professor Imai. In this paper, he combined theoretical and numerical methods in fluid dynamics. Since that time, he made significant and seminal contributions to computational fluid dynamics. He undertook pioneering numerical studies on the vortex method in the 1970s. From then to the early nineties, he developed numerical analyses on a variety of three-dimensional unsteady phenomena of incompressible and compressible fluid flows and/or complex fluid flows using his own supercomputers with academic and industrial co-workers and members of his private research institute, ICFD in Tokyo. In addition, a number of senior and young researchers of fluid mechanics around the world were invited to ICFD and the Nobeyama workshops, which were held near his villa, and they intensively discussed new frontier problems of fluid physics and fluid engineering thanks to Professor Kuwahara's kind hospitality. At the memorial Nobeyama workshop held in 2009, 24 overseas speakers presented their papers, including the talks of Dr J P Boris (Naval Research Laboratory), Dr E S Oran (Naval Research Laboratory), Professor Z J Wang (Iowa State University), Dr M Meinke (RWTH Aachen), Professor K Ghia (University of Cincinnati), Professor U Ghia (University of Cincinnati), Professor F Hussain (University of Houston), Professor M Farge (École Normale Supérieure), Professor J Y Yong (National
Lu, Jing; Yu, Jie; Shi, Heshui
2017-01-01
Background: Adding functional features to morphological features offers a new method for non-invasive assessment of myocardial perfusion. This study aimed to explore technical routes for assessing the left coronary artery pressure gradient, wall shear stress distribution, and blood flow velocity distribution, combining a three-dimensional coronary model based on high-resolution dual-source computed tomography (CT) with computational fluid dynamics (CFD) simulation. Methods: Three cases of no obvious stenosis, mild stenosis, and severe stenosis in the left anterior descending (LAD) artery were enrolled. Images acquired on dual-source CT were input into the software Mimics, ICEM CFD, and FLUENT to simulate the pressure gradient, wall shear stress distribution, and blood flow velocity distribution. The coronary enhancement ratio was measured for comparison with the pressure gradient. Results: Results conformed to theoretical values and showed differences between normal and abnormal samples. Conclusions: The study preliminarily verified essential parameters and basic techniques in blood flow numerical simulation and proved the approach feasible. PMID:27924174
Semi-analytic texturing algorithm for polygon computer-generated holograms.
Lee, Wooyoung; Im, Dajeong; Paek, Jeongyeup; Hahn, Joonku; Kim, Hwi
2014-12-15
A texturing method for the semi-analytic polygon computer-generated hologram synthesis algorithm is studied. Through this, the full potential and development direction of semi-analytic polygon computer-generated holograms are discussed and compared to those of the conventional numerical algorithm of polygon computer-generated hologram generation based on the fast Fourier transform and bilinear interpolation. The theoretical hurdle of the semi-analytic texturing algorithm is manifested, and an approach to resolving this problem is proposed. A key mathematical approximation in the angular spectrum computer-generated hologram computation, as well as the trade-offs between texturing effects and computational efficiencies, is analyzed through numerical simulation. In this fundamental study, the theoretical potential of the semi-analytic polygon computer-generated hologram algorithm is revealed and the ultimate goal of research into the algorithm is clarified.
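For reference, the FFT-based angular spectrum propagation that underlies the conventional numerical route mentioned above can be sketched in a few lines. The grid, wavelength, and aperture below are illustrative; the paper's semi-analytic method is designed precisely to avoid this sampled-FFT computation:

```python
import numpy as np

def angular_spectrum(u0, wl, dx, z):
    """Propagate a sampled scalar field u0 a distance z via the angular spectrum."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wl ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clamp; no evanescent terms
    H = np.exp(1j * kz * z)                       # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

u0 = np.zeros((128, 128), dtype=complex)
u0[60:68, 60:68] = 1.0                            # small square aperture
u1 = angular_spectrum(u0, wl=633e-9, dx=10e-6, z=5e-3)
```

At this sampling all plane-wave components are propagating, so the transfer function is unitary and the field energy is conserved, a quick sanity check on the implementation.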
Adaptive implicit-explicit finite element algorithms for fluid mechanics problems
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.
1988-01-01
The adaptive implicit-explicit (AIE) approach is presented for the finite-element solution of various problems in computational fluid mechanics. In the AIE approach, the elements are dynamically (adaptively) arranged into differently treated groups. The differences in treatment could be based on considerations such as the cost efficiency, the type of spatial or temporal discretization employed, the choice of field equations, etc. Several numerical tests are performed to demonstrate that this approach can achieve substantial savings in CPU time and memory.
Optimization of fluid line sizes with pumping power penalty IBM-360 computer program
NASA Technical Reports Server (NTRS)
Jelinek, D.
1972-01-01
A computer program has been developed to calculate the total weights of the tubing, the fluid in the tubing, and the fuel cell power source necessary to drive the pump, based on flow rate and pressure drop. The program can be applied to fluid systems in any type of aircraft, spacecraft, truck, ship, refinery, or chemical processing plant.
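In the same spirit as the program's trade-off, here is a hypothetical single-line sizing sketch: tube and contained-fluid weight grow with diameter while the pumping-power penalty (converted to an equivalent power-source weight) shrinks. All constants, and the use of a Darcy-Weisbach pressure drop, are illustrative assumptions, not the original program's method:

```python
import math

def line_weight(d, L=10.0, rho=1000.0, mdot=0.1, wall=1e-3, rho_t=2700.0,
                mu=1e-3, w_per_watt=0.5):
    """Total weight (kg) of a line of diameter d: tube + fluid + power penalty."""
    a = math.pi * d * d / 4.0
    v = mdot / (rho * a)                                  # mean velocity
    re = rho * v * d / mu
    f = 64.0 / re if re < 2300 else 0.316 * re ** -0.25   # friction factor
    dp = f * (L / d) * 0.5 * rho * v * v                  # Darcy-Weisbach drop
    pump_w = dp * mdot / rho                              # ideal pumping power
    w_tube = rho_t * math.pi * d * wall * L
    w_fluid = rho * a * L
    return w_tube + w_fluid + w_per_watt * pump_w

# smallest total among a few candidate diameters
best = min((line_weight(d), d) for d in [0.005, 0.01, 0.02, 0.04])
```

The optimum sits between the small-diameter regime (pumping penalty dominates) and the large-diameter regime (tube and fluid weight dominate).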
Revisiting Newtonian and Non-Newtonian Fluid Mechanics Using Computer Algebra
ERIC Educational Resources Information Center
Knight, D. G.
2006-01-01
This article illustrates how a computer algebra system, such as Maple[R], can assist in the study of theoretical fluid mechanics, for both Newtonian and non-Newtonian fluids. The continuity equation, the stress equations of motion, the Navier-Stokes equations, and various constitutive equations are treated, using a full, but straightforward,…
Computer algorithm for analyzing and processing borehole strainmeter data
Langbein, John O.
2010-01-01
The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the use of simplifying assumptions that data are uncorrelated can lead to incorrect results, and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent with these data. The technique described here uses least squares but employs a data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected, tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
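The estimation scheme described, least squares with a data covariance encoding temporal correlation, is generalized least squares. A minimal sketch on a synthetic offset-plus-rate model (the exponential covariance and noiseless series are illustrative, not the program's actual error model):

```python
import numpy as np

def gls(X, y, C):
    """Generalized least squares: argmin_b (y - X b)^T C^{-1} (y - X b)."""
    Ci = np.linalg.inv(C)
    return np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

t = np.arange(100.0)
X = np.column_stack([np.ones_like(t), t])            # offset + secular rate
y = 3.0 + 0.5 * t                                    # noiseless synthetic record
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0)  # temporally correlated errors
coef = gls(X, y, C)                                  # recovers [3.0, 0.5]
```

With correlated noise, the covariance matrix both reweights the fit and, crucially, yields realistic standard errors (the inverse of X^T C^{-1} X), which is the point the abstract emphasizes.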
This paper discusses the status and application of Computational Fluid Dynamics (CFD) models to address challenges for modeling human exposures to air pollutants around urban building microenvironments. There are challenges for more detailed understanding of air pollutant sour...
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.; Farmer, R.
1985-01-01
An analysis for modeling damping seals for use in Space Shuttle main engine turbomachinery is being produced. Development of a computational fluid mechanics code for turbulent, incompressible flow is required.
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
1984-01-01
The pressure solution for incompressible flow was investigated in support of a computational fluid mechanics model which simulates the damping seals considered for use in the space shuttle main engine turbomachinery. Future work directions are discussed briefly.
NASA Astrophysics Data System (ADS)
Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.
2016-09-01
There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, variance, Gaussian, and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21
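The window filters feeding the classifier (minimum, maximum, mean, variance over a voxel radius) can be sketched in a 2D toy form. This is generic image-feature code, not the FIJI/TWS implementation, and the 5x5 test image is illustrative:

```python
import numpy as np

def window_features(img, r):
    """Min/max/mean/variance over a (2r+1)x(2r+1) window around each pixel."""
    n0, n1 = img.shape
    pad = np.pad(img, r, mode='edge')            # replicate edges at the border
    s = np.stack([pad[i:i + n0, j:j + n1]
                  for i in range(2 * r + 1) for j in range(2 * r + 1)])
    return {'min': s.min(0), 'max': s.max(0), 'mean': s.mean(0), 'var': s.var(0)}

img = np.arange(25.0).reshape(5, 5)
f = window_features(img, r=1)        # e.g. f['mean'][2, 2] == 12.0
```

Stacking the feature maps per voxel produces the feature vectors a Random Forest (or any per-voxel classifier) is then trained on.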
Suwandecha, Tan; Wongpoowarak, Wibul; Srichana, Teerapol
2016-01-01
Dry powder inhalers (DPIs) are gaining popularity for the delivery of drugs, so a cost-effective and efficient delivery device is necessary. Developing a new DPI by modifying an existing device may be the simplest way to improve device performance. The aim of this research was to produce a new DPI using computational fluid dynamics (CFD). The new DPI took advantage of the Cyclohaler® and the Rotahaler®: we chose a combination of the capsule chamber of the Cyclohaler® and the mouthpiece and grid of the Rotahaler®. Computer-aided design models of the devices were created and evaluated using CFD. Prototype models were created and tested in DPI dispersion experiments. The proposed model 3 device had high turbulence with a good degree of deagglomeration in both the CFD and the experimental data. The fine particle fraction (FPF) was around 50% at 60 L/min. The mass median aerodynamic diameter was around 2.8-4 μm. The FPF was strongly correlated with the CFD-predicted turbulence and the mechanical impaction parameters. The drug retention in the capsule was only 5-7%. In summary, a simple modification of the Cyclohaler® and Rotahaler® could produce a better-performing inhaler using CFD-assisted design.
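The correlation between FPF and the CFD-predicted turbulence can be quantified with an ordinary Pearson coefficient. A minimal sketch follows; the numbers are illustrative placeholders, not measurements from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# illustrative numbers only (turbulence index vs. %FPF), not study data:
turbulence = [0.8, 1.1, 1.5, 2.0, 2.6]
fpf = [38.0, 42.0, 47.0, 50.0, 55.0]
r = pearson_r(turbulence, fpf)
```

An r near +1 supports the kind of strong positive association the abstract reports.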
MPI implementation of PHOENICS: A general purpose computational fluid dynamics code
Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D.B.
1995-04-01
PHOENICS is a suite of computational analysis programs used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the computational fluid dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. The MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5 and XP/S 35, on a Kendall Square Research machine, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing of the developed program has shown scalable performance for reasonably sized computational domains.
MPI implementation of PHOENICS: A general purpose computational fluid dynamics code
NASA Astrophysics Data System (ADS)
Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.
1995-03-01
PHOENICS is a suite of computational analysis programs used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the computational fluid dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. The MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5 and XP/S 35, on a Kendall Square Research machine, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing of the developed program has shown scalable performance for reasonably sized computational domains.
New algorithms for the symmetric tridiagonal eigenvalue computation
Pan, V. |
1994-12-31
The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be applied to accelerate other iterative processes, in particular iterative algorithms for approximating polynomial zeros.
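The bisection method being accelerated here rests on the Sturm-sequence count: the number of negative pivots in the LDL^T factorization of T - xI equals the number of eigenvalues below x. The sketch below is the classic unaccelerated baseline, not the paper's faster variants:

```python
def count_eigs_below(d, e, x):
    """Sturm count: number of eigenvalues of the symmetric tridiagonal
    matrix with diagonal d and off-diagonal e that are less than x,
    read off from the signs of the LDL^T pivots."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = -1e-30  # nudge off an exact zero pivot
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """Plain bisection for the k-th smallest eigenvalue (k = 1, 2, ...);
    [lo, hi] must bracket the whole spectrum (e.g., via Gershgorin disks)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For tridiag(1, 2, 1) of order 3 the eigenvalues are 2 - sqrt(2), 2, and 2 + sqrt(2), which the bisection recovers to tolerance.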
NASA Astrophysics Data System (ADS)
Malmir, Hessam; Sahimi, Muhammad; Tabar, M. Reza Rahimi
2016-12-01
Packing of cubic particles arises in a variety of problems, ranging from biological materials to colloids and the fabrication of new types of porous materials with controlled morphology. The properties of such packings may also be relevant to problems involving suspensions of cubic zeolites, precipitation of salt crystals during CO2 sequestration in rock, and intrusion of fresh water in aquifers by saline water. Not much is known, however, about the structure and statistical descriptors of such packings. We present a detailed simulation and microstructural characterization of packings of nonoverlapping monodisperse cubic particles, following up on our preliminary results [H. Malmir et al., Sci. Rep. 6, 35024 (2016), 10.1038/srep35024]. A modification of the random sequential addition (RSA) algorithm has been developed to generate such packings, and a variety of microstructural descriptors, including the radial distribution function, the face-normal correlation function, two-point probability and cluster functions, the lineal-path function, the pore-size distribution function, and surface-surface and surface-void correlation functions, have been computed, along with the specific surface and mean chord length of the packings. The results indicate the existence of both spatial and orientational long-range order as the packing density increases. The maximum packing fraction achievable with the RSA method is about 0.57, which represents the limit for a structure similar to liquid crystals.
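The RSA idea is simple: propose a random placement, accept it only if it overlaps nothing already placed, and repeat until the structure jams. The sketch below is a deliberately reduced 2D, axis-aligned-square variant, not the paper's 3D algorithm with particle rotations:

```python
import random

def rsa_squares(side, box, attempts, seed=1):
    """Random sequential addition (RSA) of non-overlapping, axis-aligned
    squares of edge `side` in a `box` x `box` domain: a 2D, unrotated
    simplification of the 3D cube packing, for illustration only."""
    rng = random.Random(seed)
    placed = []
    for _ in range(attempts):
        x = rng.uniform(0.0, box - side)
        y = rng.uniform(0.0, box - side)
        # two axis-aligned squares overlap iff they overlap on both axes
        if all(abs(x - px) >= side or abs(y - py) >= side for px, py in placed):
            placed.append((x, y))
    return placed

squares = rsa_squares(side=1.0, box=10.0, attempts=2000)
packing_fraction = len(squares) / 100.0  # covered area / box area
```

Acceptance slows sharply as the packing approaches its jamming density, which is why saturated RSA fractions (about 0.57 for the cubes studied here) sit well below close packing.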
A constrained conjugate gradient algorithm for computed tomography
Azevedo, S.G.; Goodman, D.M.
1994-11-15
Image reconstruction from projections of x rays, gamma rays, protons, and other penetrating radiation is a well-known problem in a variety of fields and is commonly referred to as computed tomography (CT). Various analytical and series-expansion methods of reconstruction have been used in the past to provide three-dimensional (3D) views of some interior quantity. The difficulties of these approaches lie in the cases where (a) the number of views attainable is limited, (b) the Poisson (or other) uncertainties are significant, (c) quantifiable knowledge of the object is available but not implementable, or (d) other limitations of the data exist. We have adapted a novel nonlinear optimization procedure developed at LLNL to address limited-data image reconstruction problems. The technique, known as nonlinear least squares with general constraints or constrained conjugate gradients (CCG), has been successfully applied to a number of signal and image processing problems and is now of great interest to the image reconstruction community. Previous applications of this algorithm to deconvolution problems and x-ray diffraction images for crystallography have shown great promise.
Computational modeling of fully-ionized, magnetized plasmas using the fluid approximation
NASA Astrophysics Data System (ADS)
Schnack, Dalton
2005-10-01
Strongly magnetized plasmas are rich in spatial and temporal scales, making a computational approach useful for studying these systems. The most accurate model of a magnetized plasma is based on a kinetic equation that describes the evolution of the distribution function for each species in six-dimensional phase space. However, the high dimensionality renders this approach impractical for computations for long time scales in relevant geometry. Fluid models, derived by taking velocity moments of the kinetic equation [1] and truncating (closing) the hierarchy at some level, are an approximation to the kinetic model. The reduced dimensionality allows a wider range of spatial and/or temporal scales to be explored. Several approximations have been used [2-5]. Successful computational modeling requires understanding the ordering and closure approximations, the fundamental waves supported by the equations, and the numerical properties of the discretization scheme. We review and discuss several ordering schemes, their normal modes, and several algorithms that can be applied to obtain a numerical solution. The implementation of kinetic parallel closures is also discussed [6].
[1] S. Chapman and T. G. Cowling, "The Mathematical Theory of Non-Uniform Gases", Cambridge University Press, Cambridge, UK (1939).
[2] R. D. Hazeltine and J. D. Meiss, "Plasma Confinement", Addison-Wesley Publishing Company, Redwood City, CA (1992).
[3] L. E. Sugiyama and W. Park, Physics of Plasmas 7, 4644 (2000).
[4] J. J. Ramos, Physics of Plasmas 10, 3601 (2003).
[5] P. J. Catto and A. N. Simakov, Physics of Plasmas 11, 90 (2004).
[6] E. D. Held et al., Physics of Plasmas 11, 2419 (2004).
NASA Technical Reports Server (NTRS)
Murman, E. M. (Editor); Abarbanel, S. S. (Editor)
1985-01-01
Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.
NASA Astrophysics Data System (ADS)
Lei, Weiwei; Li, Kai
2016-12-01
There are four recursive algorithms used in the computation of the fully normalized associated Legendre functions (FNALFs): the standard forward column algorithm, the standard forward row algorithm, the recursive algorithm between every other degree, and the Belikov algorithm. These algorithms were evaluated in terms of their first relative numerical accuracy, second relative numerical accuracy, and computation speed and efficiency. The results show that when the degree n reaches 3000, both the recursive algorithm between every other degree and the Belikov algorithm are applicable for |cos θ| ∈ [0, 1], with the latter having better second relative numerical accuracy than the former at a slower computation speed. Within degree n of 1900, the standard forward column algorithm, the recursive algorithm between every other degree, and the Belikov algorithm are all applicable for |cos θ| ∈ [0, 1], and the standard forward column algorithm has the highest computation speed. The standard forward column algorithm's range of applicability shrinks as the degree increases beyond 1900; however, it remains applicable within a minute range when |cos θ| is approximately equal to 1. The standard forward row algorithm has the smallest range of applicability: it is only applicable within degree n of 100 for |cos θ| ∈ [0, 1], and its range of applicability decreases rapidly when the degree is greater than 100. The results of this research are expected to be useful to researchers in choosing the best algorithms for use in the computation of the FNALFs.
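The standard forward column recursion compared above can be sketched compactly. The sketch below assumes the common 4π (geodetic) normalization, under which the squared values in each degree sum to 2n + 1 at any point; the exact convention used in the study is not stated in the abstract:

```python
import math

def fnalf_column(nmax, t):
    """Fully normalized associated Legendre functions Pnm(t), t = cos(theta),
    by the standard forward column recursion, assuming 4π (geodetic)
    normalization. Returns a dict {(n, m): value} for 0 <= m <= n <= nmax."""
    u = math.sqrt(max(0.0, 1.0 - t * t))  # sin(theta)
    P = {(0, 0): 1.0}
    # sectorial seeds: P(m, m) climbs the diagonal
    for m in range(1, nmax + 1):
        f = math.sqrt(3.0) if m == 1 else math.sqrt((2.0 * m + 1.0) / (2.0 * m))
        P[(m, m)] = f * u * P[(m - 1, m - 1)]
    # column recursion: fixed order m, increasing degree n
    for m in range(0, nmax):
        for n in range(m + 1, nmax + 1):
            a = math.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            b = 0.0
            if n - m > 1:
                b = math.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                              / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            P[(n, m)] = a * t * P[(n - 1, m)] - b * P.get((n - 2, m), 0.0)
    return P
```

The underflow of the sectorial seeds near |cos θ| = 1 at high degree is exactly what limits this recursion's range of applicability, as the comparison above reports.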
Quantum computation: algorithms and implementation in quantum dot devices
NASA Astrophysics Data System (ADS)
Gamble, John King
In this thesis, we explore several aspects of both the software and hardware of quantum computation. First, we examine the computational power of multi-particle quantum random walks in terms of distinguishing mathematical graphs. We study both interacting and non-interacting multi-particle walks on strongly regular graphs, proving some limitations on distinguishing power and presenting extensive numerical evidence indicating that interactions provide more distinguishing power. We then study the recently proposed adiabatic quantum algorithm for Google PageRank, and show that it exhibits power-law scaling for realistic WWW-like graphs. Turning to hardware, we next analyze the thermal physics of two nearby two-dimensional electron gases (2DEGs), and show that an analogue of the Coulomb drag effect exists for heat transfer. At some distances and temperatures, this heat transfer is more significant than phonon dissipation channels. After that, we study the dephasing of two-electron states in a single silicon quantum dot. Specifically, we consider dephasing due to the electron-phonon coupling and charge noise, separately treating orbital and valley excitations. In an ideal system, dephasing due to charge noise is strongly suppressed because of a vanishing dipole moment. However, introduction of disorder or anharmonicity leads to large effective dipole moments, and hence possibly strong dephasing. Building on this work, we next consider more realistic, structurally disordered systems. We present experiment and theory demonstrating energy levels that vary with quantum dot translation, implying a structurally disordered system. Finally, we turn to the issues of valley mixing and valley-orbit hybridization, which occur due to atomic-scale disorder at quantum well interfaces. We develop a new theoretical approach to study these effects, which we name the disorder-expansion technique. We demonstrate that this method successfully reproduces atomistic tight-binding techniques
Computational Fluid Dynamics Results for a 25-mm Projectile
2007-09-01
[Figure: projectile with jet cavity (top view).] 3.2 Computational Mesh: The grids for the computational models were created using GRIDGEN (6), a commercial grid-generation package (Pointwise, Inc., GRIDGEN Version 15 On-line User's Manual; Bedford, TX, 2005).
In this paper we develop and computationally test three implicit enumeration algorithms for solving the asymmetric traveling salesman problem. All...three algorithms use the assignment problem relaxation of the traveling salesman problem with subtour elimination similar to the previous approaches by...previous subtour elimination algorithms and (2) the 1-arborescence approach of Held and Karp for the asymmetric traveling salesman problem.
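For context on exact methods for the asymmetric traveling salesman problem: the Held-Karp dynamic program (named for the same Held and Karp cited above, though distinct from their 1-arborescence bounding approach and from the paper's implicit enumeration) gives a compact exact baseline for small instances:

```python
from itertools import combinations

def held_karp_atsp(dist):
    """Exact asymmetric-TSP tour cost via Held-Karp dynamic programming,
    O(n^2 * 2^n). City 0 is the fixed starting city; dist[i][j] need not
    equal dist[j][i]. A small-instance baseline, not the paper's method."""
    n = len(dist)
    # C[(S, j)]: cheapest path starting at 0, visiting set S, ending at j
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                rest = fs - {j}
                C[(fs, j)] = min(C[(rest, k)] + dist[k][j] for k in rest)
    whole = frozenset(range(1, n))
    return min(C[(whole, j)] + dist[j][0] for j in range(1, n))
```

The exponential state space is exactly why bounding approaches such as the assignment relaxation with subtour elimination, as studied above, matter for larger instances.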
Computing the Thermodynamic State of a Cryogenic Fluid
NASA Technical Reports Server (NTRS)
Willen, G. Scott; Hanna, Gregory J.; Anderson, Kevin R.
2005-01-01
The Cryogenic Tank Analysis Program (CTAP) predicts the time-varying thermodynamic state of a cryogenic fluid in a tank or a Dewar flask. CTAP is designed to be compatible with EASY5x, which is a commercial software package that can be used to simulate a variety of processes and equipment systems. The mathematical model implemented in CTAP is a first-order differential equation for the pressure as a function of time.
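A first-order ODE for pressure, as in CTAP, can be marched in time with any standard one-step scheme. The sketch below uses forward Euler on an assumed relaxation-type right-hand side; CTAP's actual pressure equation is not given in the abstract, so the model form here is purely illustrative:

```python
import math

def integrate_pressure(p0, p_eq, tau, t_end, dt):
    """Forward-Euler integration of a generic first-order pressure model,
    dP/dt = (P_eq - P)/tau. The relaxation right-hand side is an assumed
    stand-in for the (unpublished here) CTAP model."""
    p = p0
    n_steps = int(round(t_end / dt))
    for _ in range(n_steps):
        p += dt * (p_eq - p) / tau
    return p

# this toy model has a closed-form solution, useful as a check
p_num = integrate_pressure(p0=100.0, p_eq=150.0, tau=10.0, t_end=20.0, dt=0.01)
p_exact = 150.0 + (100.0 - 150.0) * math.exp(-20.0 / 10.0)
```

The explicit scheme is adequate when the time step is small relative to the relaxation time; a stiff tank model would call for an implicit integrator instead.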
NASA Astrophysics Data System (ADS)
Chen, Yufeng; Wu, Zebin; Sun, Le; Wei, Zhihui; Li, Yonglong
2016-04-01
With the gradual increase in the spatial and spectral resolution of hyperspectral images, the volume of image data grows ever larger and the complexity of processing algorithms increases, posing a major challenge for efficient processing of massive hyperspectral images. Cloud computing technologies distribute computing tasks to a large number of computing resources, allowing large data sets to be handled without the memory and computing limitations of a single machine. This paper proposes a parallel pixel purity index (PPI) algorithm for unmixing massive hyperspectral images based on a MapReduce programming model, for the first time in the literature. According to the characteristics of hyperspectral images, we describe the design principle of the algorithm, illustrate the main cloud unmixing processes of PPI, and analyze the time complexity of the serial and parallel algorithms. Experimental results demonstrate that the parallel implementation of the PPI algorithm on the cloud can effectively process big hyperspectral data and accelerate the algorithm.
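PPI parallelizes naturally in MapReduce form because projection extremes combine associatively: each partition can report its own extremes per skewer, and the reducer merges them. The sketch below is a toy, single-process rendering of that shape; the partitioning, skewer generation, and data layout of the paper's cloud implementation are not reproduced:

```python
def project(px, sk):
    """Dot product of a pixel's spectrum with a skewer direction."""
    return sum(a * b for a, b in zip(px, sk))

def ppi_mapreduce(pixels, skewers, n_parts):
    """MapReduce-shaped pixel purity index: the map phase finds each
    partition's extreme projections onto every skewer; the reduce phase
    merges partition extremes and tallies purity votes per pixel. In a
    real cloud job the partitions would be image shards and the skewers
    random unit vectors; here everything is small and deterministic."""
    step = (len(pixels) + n_parts - 1) // n_parts

    # --- map phase: one record per partition ---
    partials = []
    for p in range(n_parts):
        idx = range(p * step, min((p + 1) * step, len(pixels)))
        if not idx:
            continue
        rec = []
        for sk in skewers:
            proj = [(project(pixels[i], sk), i) for i in idx]
            rec.append((max(proj), min(proj)))
        partials.append(rec)

    # --- reduce phase: merge extremes, count votes per pixel ---
    counts = [0] * len(pixels)
    for s in range(len(skewers)):
        hi = max(rec[s][0] for rec in partials)
        lo = min(rec[s][1] for rec in partials)
        counts[hi[1]] += 1
        counts[lo[1]] += 1
    return counts
```

Because max-of-maxes equals the global max, the partitioned result is identical to a serial pass, which is the property that makes the algorithm cloud-friendly.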
Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions
ERIC Educational Resources Information Center
Torbeyns, Joke; Verschaffel, Lieven
2016-01-01
This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…
Computational modeling of fluid structural interaction in arterial stenosis
NASA Astrophysics Data System (ADS)
Bali, Leila; Boukedjane, Mouloud; Bahi, Lakhdar
2013-12-01
Atherosclerosis affects the arterial blood vessels, causing stenosis; the artery hardens, resulting in loss of elasticity in the affected region. In this paper, we present an approach to model the fluid-structure interaction through such an atherosclerosis-affected region of the artery. The blood is assumed to be an incompressible Newtonian viscous fluid, and the vessel wall is treated as a thick-walled, incompressible, isotropic material with uniform mechanical properties. The numerical simulation has been studied in the context of the Navier-Stokes equations for an interaction with an elastic solid. The study of fluid flow and wall motion was initially carried out separately. Discretized forms of the transformed wall and flow equations, which are coupled through the boundary conditions at their interface, are obtained by the control volume method and solved simultaneously. To study the effects of wall deformability, solutions are obtained for both rigid and elastic walls. The results indicate that deformability of the wall causes an increase in the time average of pressure drop, but a decrease in the maximum wall shear stress. Displacement and stress distributions in the wall are presented.
Computations of fluid mixtures including solid carbon at chemical equilibrium
NASA Astrophysics Data System (ADS)
Bourasseau, Emeric
2013-06-01
One of the key points in understanding detonation phenomena is the determination of the equation of state of the detonation-products mixture. For carbon-rich explosives, detonation-products mixtures are composed of solid carbon nano-clusters immersed in a high-density fluid phase. The study of such systems, where chemical and phase equilibria occur simultaneously, represents an important challenge, and molecular simulation methods appear to be one of the most promising ways to obtain some answers. In this talk, the Reaction Ensemble Monte Carlo (RxMC) method will be presented. This method allows the system to reach the chemical equilibrium of a mixture driven by a set of linearly independent chemical equations. Applied to detonation-product mixtures, it allows the calculation of the chemical composition of the mixture and its thermodynamic properties. Moreover, an original model has been proposed to take explicitly into account a solid carbon meso-particle in thermodynamic and chemical equilibrium with the fluid. Finally, our simulations show that the intrinsically inhomogeneous nature of the system (i.e., the fact that the solid phase is immersed in the fluid phase) has an important impact on the thermodynamic properties and, as a consequence, must be taken into account.
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis E.
2013-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. The research described here is absolutely cutting edge. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions
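The three-grid procedure referenced above is conventionally evaluated with Richardson extrapolation and a Grid Convergence Index (GCI). The sketch below follows the standard Roache-style formulas under the assumptions of a constant refinement ratio and a safety factor of 1.25; the study's exact implementation details are not given in the abstract:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed convergence order from solutions on fine (f1), medium (f2),
    and coarse (f3) grids with constant refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def richardson(f1, f2, r, p):
    """Richardson-extrapolated estimate of the grid-converged value."""
    return f1 + (f1 - f2) / (r ** p - 1.0)

def gci_fine(f1, f2, r, p, fs=1.25):
    """Grid Convergence Index on the fine grid; fs = assumed safety factor.
    This relative band is the 'error bar' around the fine-grid solution."""
    return fs * abs((f2 - f1) / f1) / (r ** p - 1.0)
```

With synthetic second-order data f(h) = 1 + 0.5 h^2 sampled at h = 0.1, 0.2, 0.4, the observed order comes out as exactly 2 and the extrapolated value recovers 1.0.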
Reading, Writing and Algorithms: Computer Literacy in the Schools.
ERIC Educational Resources Information Center
Neufeld, Helen H.
Given the state of the art of computing in 1982, it is not necessary to know a computer language to use a computer. Three aspects of the current state of computing make it mandatory that educators from elementary through postsecondary levels rapidly incorporate this skill into the curriculum: (1) computers have permeated society--they are used in…
Hung, Peter W; Paik, David S; Napel, Sandy; Yee, Judy; Jeffrey, R Brooke; Steinauer-Gebauer, Andreas; Min, Juno; Jathavedam, Ashwin; Beaulieu, Christopher F
2002-02-01
Three bowel distention-measuring algorithms for use in computed tomographic (CT) colonography were developed, validated in phantoms, and applied to a human CT colonographic data set. The three algorithms are the cross-sectional area method, the moving-spheres method, and the segmental volume method. Each algorithm effectively quantified distention, but accuracy varied between methods. Clinical feasibility was demonstrated. Depending on the desired spatial resolution and accuracy, each algorithm can quantitatively depict colonic diameter in CT colonography.
Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement
NASA Astrophysics Data System (ADS)
Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui
2017-01-01
Due to solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error that is 0.8 K or higher. To improve air temperature observation accuracy and correct historical temperature of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87 %. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.
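Fitting a correction equation to CFD-derived radiation errors with a genetic algorithm can be sketched in a few lines. Both the model form used below, e = a·S/(1 + b·v) with S the solar irradiance and v the wind speed, and every GA setting are illustrative assumptions, not the paper's actual correction equation or parameters:

```python
import random

def fit_correction_ga(data, gens=150, pop_size=40, seed=7):
    """Toy genetic algorithm fitting the assumed radiation-error model
    e = a*S/(1 + b*v) to (S, v, e) samples by minimizing squared error.
    Elitism, blend crossover, and Gaussian mutation; parameters clamped
    to [0, 2] to keep the model well defined."""
    rng = random.Random(seed)

    def sse(ind):
        a, b = ind
        return sum((a * S / (1.0 + b * v) - e) ** 2 for S, v, e in data)

    pop = [(rng.uniform(0.0, 2.0), rng.uniform(0.0, 2.0)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sse)
        elite = pop[: pop_size // 4]          # elitism keeps the best fits
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            w = rng.random()                  # blend crossover + mutation
            children.append(tuple(
                min(2.0, max(0.0, w * x + (1 - w) * y + rng.gauss(0.0, 0.05)))
                for x, y in zip(p1, p2)))
        pop = elite + children
    best = min(pop, key=sse)
    return best, sse(best)
```

In practice the GA would be fed the CFD-computed radiation errors across the sampled environmental conditions, and the fitted equation then applied to historical station records.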
Liu, Jia; Yan, Zhengzheng; Pu, Yuehua; Shiu, Wen-Shin; Wu, Jianhuang; Chen, Rongliang; Leng, Xinyi; Qin, Haiqiang; Liu, Xin; Jia, Baixue; Song, Ligang; Wang, Yilong; Miao, Zhongrong; Wang, Yongjun; Liu, Liping; Cai, Xiao-Chuan
2016-10-04
The fractional pressure ratio is introduced to quantitatively assess the hemodynamic significance of severe intracranial stenosis. A computational fluid dynamics-based method is proposed to non-invasively compute the FPRCFD, which is compared against the fractional pressure ratio measured by an invasive technique. Eleven patients with severe intracranial stenosis considered for endovascular intervention were recruited, and an invasive procedure was performed to measure the distal and aortic pressures (Pd and Pa). The fractional pressure ratio was calculated as [Formula: see text] Computed tomography angiography was used to reconstruct three-dimensional (3D) arteries for each patient. Cerebral hemodynamics was then computed for the arteries using a mathematical model governed by the Navier-Stokes equations, with outflow conditions imposed by a model of distal resistance and compliance. The non-invasive [Formula: see text], [Formula: see text], and FPRCFD were then obtained from the computational fluid dynamics calculation using a 16-core parallel computer. The invasive and non-invasive parameters were tested by statistical analysis. For this group of patients, the computational fluid dynamics method achieved results comparable with the invasive measurements. The fractional pressure ratio and FPRCFD are very close and highly correlated, but not linearly proportional, with the percentage of stenosis. The proposed computational fluid dynamics method can potentially be useful in assessing the functional alteration of cerebral stenosis.
Computational Fluid Dynamics of the Boundary Layer Characteristics of a Pacific Bluefin Tuna
2015-09-18
Underwater Vehicle; CAD: Computer-Aided Design; CFD: Computational Fluid Dynamics; FEA: Finite Element Analysis; IGES: Initial Graphics Exchange... finite element analysis (FEA) solvers, but in recent years it has made strides in improving its CFD meshing capabilities. While some CAD software
Teaching Computer-Aided Design of Fluid Flow and Heat Transfer Engineering Equipment.
ERIC Educational Resources Information Center
Gosman, A. D.; And Others
1979-01-01
Describes a teaching program for fluid mechanics and heat transfer which contains both computer aided learning (CAL) and computer aided design (CAD) components and argues that the understanding of the physical and numerical modeling taught in the CAL course is essential to the proper implementation of CAD. (Author/CMV)
NASA Astrophysics Data System (ADS)
Bitter, Ingmar; Brown, John E.; Brickman, Daniel; Summers, Ronald M.
2004-04-01
The presented method significantly reduces the time necessary to validate a computed tomographic colonography (CTC) computer-aided detection (CAD) algorithm for colonic polyps applied to a large patient database. As the algorithm is being developed on Windows PCs and our target, a Beowulf cluster, runs on Linux PCs, we made the application dual-platform compatible using a single source-code tree. To maintain, share, and deploy source code, we used CVS (concurrent versions system) software. We built the libraries from their sources for each operating system. Next, we made the CTC CAD algorithm dual-platform compatible and validated that Windows and Linux produced the same results. Eliminating system dependencies was mostly achieved using the Qt programming library, which encapsulates most of the system-dependent functionality in order to present the same interface on either platform. Finally, we wrote scripts to execute the CTC CAD algorithm in parallel. Running hundreds of simultaneous copies of the CTC CAD algorithm on a Beowulf cluster computing network enables execution in less than four hours on our entire collection of over 2400 CT scans, compared to about a month on a single PC. As a consequence, our complete patient database can be processed daily, boosting research productivity. Large-scale validation of a computer-aided polyp detection algorithm for CT colonography using cluster computing significantly improves the round-trip time of algorithm improvement and revalidation.
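The speedup comes from the workload being embarrassingly parallel: each scan is processed independently. The sketch below shows the same farming pattern with a worker pool in a single process; the real per-scan CAD executable, its inputs, and the cluster scripts are not public, so the scoring function is a labeled stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def ctc_cad_score(scan):
    """Stand-in for one CTC CAD run on a single scan: the real per-scan
    executable and its I/O are not public, so this toy just derives a
    deterministic score from the voxel data."""
    return sum(scan) % 17

def run_batch(scans, workers=8):
    """Farm the independent per-scan jobs out to a worker pool; results
    come back in submission order. On a Beowulf cluster the same pattern
    maps to one process (or script invocation) per node."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(ctc_cad_score, scans))
```

Because the jobs share no state, the parallel batch must produce exactly the results of a serial loop, which is what made the cluster validation trustworthy.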
Software Design Strategies for Multidisciplinary Computational Fluid Dynamics
2012-07-01
fuselage. The NSU3D flow solver provides two options for such modeling of boundary-layer turbulence. The first is a single-equation Spalart-Allmaras ...Analysis," AIAA Journal of Aircraft, Vol. 36, No. 6, 1999, pp. 987-998. [14] Spalart, P. R., and Allmaras, S. R., "A One-equation Turbulence ...applications [12,13]. The NSU3D discretization scheme employs a second-order accurate vertex-based approach, which stores the unknown fluid and turbulence
Quaini, A.; Canic, S.; Glowinski, R.; Igo, S.; Hartley, C.J.; Zoghbi, W.; Little, S.
2011-01-01
This work presents a validation of a fluid-structure interaction computational model simulating the flow conditions in an in vitro mock heart chamber modeling mitral valve regurgitation during the ejection phase, during which the trans-valvular pressure drop and valve displacement are not as large. The mock heart chamber was developed to study the use of 2D and 3D color Doppler techniques in imaging the clinically relevant complex intra-cardiac flow events associated with mitral regurgitation. Computational models are expected to play an important role in supporting, refining, and reinforcing the emerging 3D echocardiographic applications. We have developed a 3D computational fluid-structure interaction algorithm based on a semi-implicit, monolithic method, combined with an arbitrary Lagrangian-Eulerian approach to capture the fluid domain motion. The mock regurgitant mitral valve, corresponding to an elastic plate with a geometric orifice, was modeled using 3D elasticity, while the blood flow was modeled using the 3D Navier-Stokes equations for an incompressible, viscous fluid. The two are coupled via the kinematic and dynamic conditions describing the two-way coupling. The pressure, the flow rate, and the orifice plate displacement were measured and compared with numerical simulation results. An in-line flow meter was used to measure the flow, pressure transducers were used to measure the pressure, and a Doppler method developed by one of the authors was used to measure the axial displacement of the orifice plate. The maximum recorded difference between experiment and numerical simulation was 4% for the flow rate, 3.6% for the pressure, and 15% for the orifice displacement, showing excellent agreement between the two. PMID:22138194
Computational fluid dynamics studies of nuclear rocket performance
NASA Technical Reports Server (NTRS)
Stubbs, Robert M.; Kim, Suk C.; Benson, Thomas J.
1994-01-01
A CFD analysis of a low pressure nuclear rocket concept is presented with the use of an advanced chemical kinetics, Navier-Stokes code. The computations describe the flow field in detail, including gas dynamic, thermodynamic and chemical properties, as well as global performance quantities such as specific impulse. Computational studies of several rocket nozzle shapes are conducted in an attempt to maximize hydrogen recombination. These Navier-Stokes calculations, which include real gas and viscous effects, predict lower performance values than have been reported heretofore.
Suggested architecture for a specialized fluid dynamics computer
NASA Technical Reports Server (NTRS)
Fornberg, B.
1978-01-01
Future flow simulations in 3-D will require computers with extremely large main memories and an advantageous ratio between computer cost and arithmetic speed. Since random access memories are very expensive, a pipeline design is proposed which allows the use of much cheaper sequential devices without any sacrifice in speed for vector references (even with arbitrary spacing between successive elements). Also scalar arithmetic can be performed efficiently. The comparatively low speed of the proposed machine (about 10 to the 7th power operations per second) would be offset by a very low price per unit, making mass production possible.
Parallel and Distributed Computational Fluid Dynamics: Experimental Results and Challenges
NASA Technical Reports Server (NTRS)
Djomehri, Mohammad Jahed; Biswas, R.; VanderWijngaart, R.; Yarrow, M.
2000-01-01
This paper describes several results of parallel and distributed computing using a large scale production flow solver program. A coarse grained parallelization based on clustering of discretization grids combined with partitioning of large grids for load balancing is presented. An assessment is given of its performance on distributed and distributed-shared memory platforms using large scale scientific problems. An experiment with this solver, adapted to a Wide Area Network execution environment is presented. We also give a comparative performance assessment of computation and communication times on both the tightly and loosely-coupled machines.
SSME structural computer program development: BOPACE theoretical manual, addendum. [algorithms
NASA Technical Reports Server (NTRS)
1975-01-01
An algorithm developed and incorporated into BOPACE for improving the convergence and accuracy of the inelastic stress-strain calculations is discussed. The implementation of separation of strains in the residual-force iterative procedure is defined. The elastic-plastic quantities used in the strain-space algorithm are defined and compared with previous quantities.
Teaching Computation in Primary School without Traditional Written Algorithms
ERIC Educational Resources Information Center
Hartnett, Judy
2015-01-01
Concerns regarding the dominance of the traditional written algorithms in schools have been raised by many mathematics educators, yet the teaching of these procedures remains a dominant focus in primary schools. This paper reports on a project in one school where the staff agreed to put the teaching of the traditional written algorithm aside,…
Shahmohammadi Beni, Mehrdad; Yu, K N
2015-12-14
A promising application of plasma medicine is to treat living cells and tissues with cold plasma. In cold plasmas, the fraction of neutrals dominates, so the carrier gas could be considered the main component. In many realistic situations, the treated cells are covered by a fluid. The present paper developed models to determine the temperature of the fluid at the positions of the treated cells. Specifically, the authors developed a three-phase-interaction model which was coupled with heat transfer to examine the injection of the helium carrier gas into water and to investigate both the fluid dynamics and heat transfer output variables, such as temperature, in three phases, i.e., air, helium gas, and water. Our objective was to develop a model to perform complete fluid dynamics and heat transfer computations to determine the temperature at the surface of living cells. Different velocities and plasma temperatures were also investigated using the finite element method, and the model was built using the COMSOL Multiphysics software. Using the current model to simulate plasma injection into such systems, the authors were able to investigate the temperature distributions in the domain, as well as at the surface and bottom boundary of the medium in which cells were cultured. The temperature variations were computed at small time intervals to analyze the temperature increase in cell targets that could be highly temperature sensitive. Furthermore, the authors were able to investigate the volume of the plasma plume and its effects on the average temperature of the medium layer/domain. Variables such as temperature and velocity at the cell layer could be computed, and the variations due to different plume sizes could be determined. The current models would be very useful for future design of plasma medicine devices and procedures involving cold plasmas.
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Khramushin, Vasily
2016-02-01
The architecture of a digital computing system determines the technical foundation of a unified mathematical language for the exact arithmetic-logical description of the phenomena and laws of continuum mechanics, with applications in fluid mechanics and theoretical physics. Deep parallelization of the computing processes raises functional programming to a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for faithful reproduction of the fundamental laws of physics and continuum mechanics.
A Parallel Computational Fluid Dynamics Unstructured Grid Generator
1993-12-01
Vol 11. 953-961. Philadelphia: SIAM, 1993. Holey, J. Andrew and Oscar H. Ibarra. "Triangulation, Voronoi Diagram, and Convex Hull in k-Space on Mesh...Löhner, Rainald, Jose Camberos, and Marshall Merriam. "Parallel Unstructured Grid Generation," in Unstructured Scientific Computation on Scalable
One high-accuracy camera calibration algorithm based on computer vision images
NASA Astrophysics Data System (ADS)
Wang, Ying; Huang, Jianming; Wei, Xiangquan
2015-12-01
Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. Thus a high-accuracy camera calibration algorithm is proposed, based on images of planar or tridimensional targets. Using the algorithm, the internal parameters of the camera are calibrated based on the existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is obviously improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.
Amirfattahi, Rassoul
2013-10-01
Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2(p) algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, in this paper we propose a method for the exact calculation of the multiplicative complexity of radix-2(p) algorithms. The methodology is described for the radix-2, radix-2(2), and radix-2(3) algorithms. Results show that radix-2(2) and radix-2(3) have significantly less computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2(3) algorithm is slightly more than in radix-2(2), the number of real multiplications for radix-2(3) is less than for radix-2(2). This is because of twiddle factors of a particular form, which require fewer real multiplications and occur more frequently in the radix-2(3) algorithm.
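The complexity counts discussed above all rest on the radix-2 butterfly, in which each stage combines two half-size transforms with twiddle factors W_N^k = e^{-2πik/N}. A minimal recursive decimation-in-time sketch in Python (the plain radix-2 case only, not the authors' radix-2(p) formulation or their twiddle factor templates):

```python
import cmath

def fft_radix2(x):
    # Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])  # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])   # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)  # twiddle factor W_N^k
        out[k] = even[k] + w * odd[k]          # butterfly: top output
        out[k + n // 2] = even[k] - w * odd[k] # butterfly: bottom output
    return out
```

Each butterfly costs one complex multiplication; counting how many twiddle factors are trivial (for example powers equal to 1 or -j) is exactly the kind of bookkeeping the paper's template method makes exact.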
NASA Astrophysics Data System (ADS)
Hirsch, Ch.; Periaux, J.; Onate, E.
A conference on computational fluid dynamics and numerical methods in engineering produced papers on topics that included turbulent flows, combustion, hypersonic reacting flows, atmospheric dispersion, multiphase flows, grid generation and adaptation, numerical modeling of composite structures, shape optimization, semiconductors, and domain decomposition methods. For individual titles, see A95-87553 through A95-87567.
Fast algorithms for visualizing fluid motion in steady flow on unstructured grids
NASA Technical Reports Server (NTRS)
Ueng, S. K.; Sikorski, K.; Ma, Kwan-Liu
1995-01-01
The plotting of streamlines is an effective way of visualizing fluid motion in steady flows. Additional information about the flowfield, such as local rotation and expansion, can be shown by drawing in the form of a ribbon or tube. In this paper, we present efficient algorithms for the construction of streamlines, streamribbons and streamtubes on unstructured grids. A specialized version of the Runge-Kutta method has been developed to speed up the integration of particle paths. We have also derived closed-form solutions for calculating angular rotation rate and radius to construct streamribbons and streamtubes, respectively. According to our analysis and test results, these formulations are two to four times better in performance than previous numerical methods. As a large number of traces are calculated, the improved performance could be significant.
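The particle-path integration underlying streamline construction can be sketched with a classical fourth-order Runge-Kutta step. The paper's specialized version additionally exploits cell search and interpolation on the unstructured grid, which this generic sketch omits; the function names and the analytic velocity field used below are purely illustrative:

```python
def rk4_step(velocity, p, h):
    # One classical RK4 step of the particle-path ODE dp/dt = v(p),
    # where `velocity` maps a 2D point to the local flow velocity.
    k1 = velocity(p)
    k2 = velocity((p[0] + 0.5 * h * k1[0], p[1] + 0.5 * h * k1[1]))
    k3 = velocity((p[0] + 0.5 * h * k2[0], p[1] + 0.5 * h * k2[1]))
    k4 = velocity((p[0] + h * k3[0], p[1] + h * k3[1]))
    return (p[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            p[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def streamline(velocity, seed, h, steps):
    # Trace a streamline from `seed` by repeated RK4 steps.
    pts = [seed]
    for _ in range(steps):
        pts.append(rk4_step(velocity, pts[-1], h))
    return pts
```

For the rigid-rotation field v(x, y) = (-y, x), streamlines are circles, so the traced points should stay on the circle through the seed; that invariant makes a convenient sanity check for any integrator of this kind.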
Masoumi, Nafiseh; Framanzad, F; Zamanian, Behnam; Seddighi, A S; Moosavi, M H; Najarian, S; Bastani, Dariush
2013-01-01
Many diseases are related to cerebrospinal fluid (CSF) hydrodynamics. Therefore, understanding the hydrodynamics of CSF flow and intracranial pressure is helpful for obtaining deeper knowledge of pathological processes and providing better treatments. Furthermore, engineering a reliable computational method is a promising approach for fabricating in vitro models, which is essential for inventing generic medicines. A Fluid-Solid Interaction (FSI) model was constructed to simulate CSF flow. An important problem in modeling the CSF flow is the diastolic back flow. In this article, using both rigid and flexible conditions for the ventricular system allowed us to evaluate the effect of the surrounding brain tissue. Our model assumed an elastic wall for the ventricles and a pulsatile CSF input as its boundary conditions. A comparison of the results with the experimental data was done. The flexible model gave better results because it could reproduce the diastolic back flow mentioned in clinical research studies. Previous rigid models ignored the interaction of the brain parenchyma with CSF and so did not report the back flow during the diastolic time. In this computational fluid dynamic (CFD) analysis, the CSF pressure and flow velocity in different areas were concordant with the experimental data.
Mitamura, Yoshinori; Yano, Tetsuya; Okamoto, Eiji
2013-01-01
A magnetic fluid (MF) seal has excellent durability. The performance of an MF seal, however, has been reported to decrease in liquids within several days. We have developed an MF seal that has a shield mechanism. The seal remained perfect for 275 days in water. To investigate the effect of a shield, the behaviors of MFs in a seal in water were studied both experimentally and computationally. (a) Two kinds of MF seals, one with a shield and one without, were installed in a centrifugal pump. Behaviors of MFs in the seals in water were observed with a video camera and a high-speed microscope. In the seal without a shield, the surface of the water in the seal waved and the turbulent flow affected the behaviors of the MFs. In contrast, MFs rotated stably in the seal with a shield in water even at high rotational speeds. (b) Computational fluid dynamics analysis revealed a stationary secondary flow pattern in the seal and a small velocity difference between the magnetic fluid and water at the interface. These MF behaviors prolonged the life of an MF seal in water.
NASA Astrophysics Data System (ADS)
Wang, Tianyang; Wüchner, Roland; Sicklinger, Stefan; Bletzinger, Kai-Uwe
2016-05-01
This paper investigates data mapping between non-matching meshes and geometries in fluid-structure interaction. Mapping algorithms for surface meshes, including nearest element interpolation, the standard mortar method, and the dual mortar method, are studied and comparatively assessed. The inconsistency problem of mortar methods at curved edges of fluid-structure interfaces is solved by a newly developed consistency-enforcing approach, which is robust enough to handle even the case in which fluid boundary facets are not in contact with structure boundary elements at all due to high fluid refinement. Moreover, tests with representative geometries show that the mortar methods are suitable for conservative mapping, that nearest element interpolation is preferable for direct mapping, and that the dual mortar method can give slight oscillations. This work also develops a co-rotating mapping algorithm for 1D beam elements. Its novelty lies in its ability to handle large displacements and rotations.
The development of an intelligent interface to a computational fluid dynamics flow-solver code
NASA Technical Reports Server (NTRS)
Williams, Anthony D.
1988-01-01
Researchers at NASA Lewis are currently developing an 'intelligent' interface to aid in the development and use of large, computational fluid dynamics flow-solver codes for studying the internal fluid behavior of aerospace propulsion systems. This paper discusses the requirements, design, and implementation of an intelligent interface to Proteus, a general purpose, 3-D, Navier-Stokes flow solver. The interface is called PROTAIS to denote its introduction of artificial intelligence (AI) concepts to the Proteus code.
NASA Technical Reports Server (NTRS)
Mccarty, R. D.
1980-01-01
The thermodynamic and transport properties of selected cryogens have been programmed into a series of computer routines. Input variables are any two of P, rho, or T in the single-phase regions and either P or T for the saturated liquid or vapor state. The output is pressure, density, temperature, entropy, and enthalpy for all of the fluids and, in most cases, specific heat capacity and speed of sound. Viscosity and thermal conductivity are also given for most of the fluids. The programs are designed for access by remote terminal; however, they have been written in modular form to allow the user to select either specific fluids or specific properties for particular needs. The programs include properties for hydrogen, helium, neon, nitrogen, oxygen, argon, and methane, covering gaseous and liquid states usually from the triple point to some upper limit of pressure and temperature that varies from fluid to fluid.
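The two-of-three input convention (any two of P, rho, T determine the state) can be illustrated with a toy routine. This sketch substitutes the ideal-gas law for the fluid-specific equations of state the actual programs use, so the function name, the molar mass default, and the numbers are purely illustrative:

```python
R = 8.314462618  # J/(mol*K), universal gas constant

def ideal_gas_state(P=None, rho=None, T=None, M=0.028):
    # Toy stand-in for a fluid-property routine: given any two of
    # pressure P [Pa], density rho [kg/m^3], and temperature T [K],
    # return all three using the ideal-gas law P = rho * (R/M) * T.
    # M is the molar mass [kg/mol]; the real routines use fluid-specific
    # equations of state, not this placeholder.
    Rs = R / M  # specific gas constant
    if sum(v is not None for v in (P, rho, T)) != 2:
        raise ValueError("supply exactly two of P, rho, T")
    if P is None:
        P = rho * Rs * T
    elif rho is None:
        rho = P / (Rs * T)
    else:
        T = P / (rho * Rs)
    return {"P": P, "rho": rho, "T": T}
```

The modular design described in the abstract corresponds to keeping the equation of state behind one such interface, so that fluids or properties can be swapped independently of the caller.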
1983-01-01
COMPUTER ALGORITHM USED IN COMPUTING THE FINAL 0.7 ATA OXYGEN PARTIAL PRESSURE DECOMPRESSION TABLES ...earlier Model Parameter Input Files had only one subfile, which could then be read and printed before an end of file is encountered and the program stops
Mixed-radix Algorithm for the Computation of Forward and Inverse MDCT
Wu, Jiasong; Shu, Huazhong; Senhadji, Lotfi; Luo, Limin
2008-01-01
The modified discrete cosine transform (MDCT) and inverse MDCT (IMDCT) are two of the most computationally intensive operations in the MPEG audio coding standards. A new mixed-radix algorithm for efficiently computing the MDCT/IMDCT is presented. The proposed mixed-radix MDCT algorithm is composed of two recursive algorithms. The first, called the radix-2 decimation-in-frequency (DIF) algorithm, is obtained by decomposing an N-point MDCT into two MDCTs of length N/2. The second, called the radix-3 decimation-in-time (DIT) algorithm, is obtained by decomposing an N-point MDCT into three MDCTs of length N/3. Since the proposed MDCT algorithm is also expressed in the form of a simple sparse matrix factorization, the corresponding IMDCT algorithm can be easily derived by simply transposing the matrix factorization. Comparison of the proposed algorithm with some existing ones shows that it is more suitable for parallel implementation and especially suitable for layer III of MPEG-1 and MPEG-2 audio encoding and decoding. Moreover, the proposed algorithm can be easily extended to the multidimensional case by using the vector-radix method. PMID:21258639
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are done to verify the theoretical analysis. When the sampling measurement number is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs of the images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and achieve denoising in reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
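The core linear-algebra step (illuminate with a preset full-rank measurement matrix, then reconstruct by pseudo-inverse) can be sketched in one dimension; the 16-pixel object and all names below are hypothetical stand-ins for the paper's 2D imaging and noise analysis:

```python
import numpy as np

def dft_measurement_matrix(n):
    # Deterministic n x n DFT measurement matrix; real illumination
    # patterns can be formed from its cosine/sine parts, but the complex
    # form is kept here for brevity.
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return np.exp(-2j * np.pi * j * k / n)

def ghost_reconstruct(A, y):
    # Recover the object from bucket measurements y = A @ x via the
    # Moore-Penrose pseudo-inverse of the measurement matrix.
    return np.linalg.pinv(A) @ y

n = 16
rng = np.random.default_rng(0)
x = rng.random(n)              # stand-in 1D "object"
A = dft_measurement_matrix(n)  # full rank: sampling number equals pixel count
y = A @ x                      # simulated noiseless bucket measurements
x_hat = ghost_reconstruct(A, y).real
```

With the sampling number equal to the pixel count the matrix is full rank and the pseudo-inverse coincides with the inverse, so noiseless reconstruction is exact; the interesting regime in the abstract is what happens as the number of measurements drops below that.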
Algorithm development for Maxwell's equations for computational electromagnetism
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.
1990-01-01
A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
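For contrast with the implicit, finite-volume scheme described above, the simplest central-difference time-domain discretization of Maxwell's equations is the explicit 1D staggered (Yee-type) update sketched below. This is an illustration of the central-difference time-domain idea only, not the paper's alternating direction implicit algorithm or its curvilinear finite volumes; grid size, step count, and source are arbitrary choices:

```python
import numpy as np

def yee_1d(nx=200, steps=150):
    # Explicit 1D staggered-grid update for Maxwell's equations in
    # normalized units (c = 1, dx = 1, time step at the Courant limit).
    # Ez lives on integer grid points, Hy on half-integer points between
    # them; each field is advanced by the central difference of the other.
    Ez = np.zeros(nx)
    Hy = np.zeros(nx - 1)
    for n in range(steps):
        Hy += Ez[1:] - Ez[:-1]          # dHy/dt = dEz/dx
        Ez[1:-1] += Hy[1:] - Hy[:-1]    # dEz/dt = dHy/dx (PEC ends fixed)
        Ez[nx // 4] += np.exp(-((n - 30) ** 2) / 100.0)  # soft Gaussian source
    return Ez, Hy
```

An implicit ADI scheme like the paper's replaces these explicit sweeps with tridiagonal solves per direction, trading per-step cost for freedom from the explicit stability limit.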
Fast algorithm for automatically computing Strahler stream order
Lanfear, Kenneth J.
1990-01-01
An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
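For tree-structured networks the Strahler rule is a short recursion: a headwater segment has order 1, and a junction takes the maximum child order, incremented when that maximum is attained by two or more children. The sketch below covers only this simple case, not the braided-stream and multiple-outlet topologies the GIS algorithm also handles; the dictionary representation is a hypothetical stand-in for GIS network data:

```python
def strahler_order(children, node):
    # Strahler order of the stream segment `node` in a tree-structured
    # network, where `children` maps a segment to its upstream segments.
    kids = children.get(node, [])
    if not kids:
        return 1  # headwater (leaf) segment
    orders = [strahler_order(children, k) for k in kids]
    top = max(orders)
    # Increment only when the maximum order arrives from 2+ tributaries.
    return top + 1 if orders.count(top) >= 2 else top
```

Because each segment is visited once, the cost grows linearly with the number of segments, consistent with the near-linear execution time reported in the abstract.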
Efficient Algorithms for Computing Stackelberg Strategies in Security Games
2012-05-30
Korzhyk, Ondrej Vanek, Vincent Conitzer, Michal Pechoucek, Milind Tambe. A double oracle algorithm for zero-sum security games on graphs, Proceedings...average over many randomly drawn games, the benefits from commitment tend to be much less extreme. In another AAMAS paper (Jain, Korzhyk, Vanek ...Korzhyk, Ondrej Vanek, Vincent Conitzer, Michal Pechoucek, and Milind Tambe. A double oracle algorithm for zero-sum security games on graphs. In
A Simple Physical Optics Algorithm Perfect for Parallel Computing Architecture
NASA Technical Reports Server (NTRS)
Imbriale, W. A.; Cwik, T.
1994-01-01
A reflector antenna computer program based upon a simple discrete approximation of the radiation integral has proven to be extremely easy to adapt to the parallel computing architecture of a modest number of large-grain computing elements such as are used in the Intel iPSC and Touchstone Delta parallel machines.
Topics in Computational Learning Theory and Graph Algorithms.
ERIC Educational Resources Information Center
Board, Raymond Acton
This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…
NASA Technical Reports Server (NTRS)
Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.
2004-01-01
This report is a documentation of a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high oxygen content media into the surrounding media which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and mechanical shear levels generated are used to characterize the mixing process for different parameter values.
Multi-Rate Digital Control Systems with Simulation Applications. Volume II. Computer Algorithms
1980-09-01
AFWAL-TR-80-3101, Volume II. MULTI-RATE DIGITAL CONTROL SYSTEMS WITH SIMULATION APPLICATIONS, Volume II: Computer Algorithms. DENNIS G. J...additional options. The analytical basis for the computer algorithms is discussed in Ref. 12. However, to provide a complete description of the program, some
Nascov, Victor; Logofătu, Petre Cătălin
2009-08-01
We describe a fast computational algorithm able to evaluate the Rayleigh-Sommerfeld diffraction formula, based on a special formulation of the convolution theorem and the fast Fourier transform. What is new in our approach compared to other algorithms is the use of a more general type of convolution with a scale parameter, which allows for independent sampling intervals in the input and output computation windows. Comparison between the calculations made using our algorithm and direct numerical integration shows very good agreement, while the computation speed is increased by orders of magnitude.
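Stripped of the scale parameter that distinguishes the authors' generalized convolution, the convolution-theorem core of such methods reduces to zero-padded FFT convolution, sketched here with NumPy as an assumption-laden stand-in rather than the published code:

```python
import numpy as np

def fft_convolve(a, b):
    # Linear convolution via the convolution theorem: zero-pad both inputs
    # to the full output length, multiply the spectra, transform back.
    # The published algorithm generalizes this with a scale parameter so
    # the input and output windows may use different sampling intervals.
    n = len(a) + len(b) - 1
    return np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real
```

Replacing the O(N^2) direct sum of the diffraction integral with this O(N log N) route is where the orders-of-magnitude speedup comes from.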
NASA Astrophysics Data System (ADS)
Kratzke, Jonas; Rengier, Fabian; Weis, Christian; Beller, Carsten J.; Heuveline, Vincent
2016-04-01
Initiation and development of cardiovascular diseases can be highly correlated with specific biomechanical parameters. To examine and assess biomechanical parameters, numerical simulation of cardiovascular dynamics has the potential to complement and enhance medical measurement and imaging techniques. As such, computational fluid dynamics (CFD) has been shown to be suitable for evaluating blood velocity and pressure in scenarios where vessel wall deformation plays a minor role. However, there is a need for further validation studies and for the inclusion of vessel wall elasticity in morphologies subject to large displacement. In this work, we consider a fluid-structure interaction (FSI) model including the full elasticity equation to take the deformability of aortic wall soft tissue into account. We present a numerical framework in which either a CFD study can be performed for less deformable aortic segments or an FSI simulation for regions of large displacement such as the aortic root and arch. Both methods are validated by means of an aortic phantom experiment. The computational results are in good agreement with 2D phase-contrast magnetic resonance imaging (PC-MRI) velocity measurements as well as catheter-based pressure measurements. The FSI simulation shows a characteristic vessel compliance effect on the flow field induced by the elasticity of the vessel wall, which the CFD model cannot capture. The in vitro validated FSI simulation framework can enable the computation of complementary biomechanical parameters such as the stress distribution within the vessel wall.
The coupling of fluids, dynamics, and controls on advanced architecture computers
NASA Technical Reports Server (NTRS)
Atwood, Christopher
1995-01-01
This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment, and an investigation of the impact of peer-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 μs/point/iteration, or a factor of three over a Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational rate enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or the final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.
Lawson, M. J.; Li, Y.; Sale, D. C.
2011-01-01
This paper describes the development of a computational fluid dynamics (CFD) methodology to simulate the hydrodynamics of horizontal-axis tidal current turbines (HATTs). First, an HATT blade was designed using the blade element momentum method in conjunction with a genetic optimization algorithm. Several unstructured computational grids were generated using this blade geometry, and steady CFD simulations were used to perform a grid resolution study. Transient simulations were then performed to determine the effect of time-dependent flow phenomena and the size of the computational timestep on the numerical solution. Qualitative measures of the CFD solutions were independent of the grid resolution. Conversely, quantitative comparisons of the results indicated that the use of coarse computational grids results in an under-prediction of the hydrodynamic forces on the turbine blade in comparison to the forces predicted using more resolved grids. For the turbine operating conditions considered in this study, the effect of the computational timestep on the CFD solution was found to be minimal, and the results from steady and transient simulations were in good agreement. Additionally, the CFD results were compared to corresponding blade element momentum method calculations and reasonable agreement was shown. Nevertheless, we expect that for other turbine operating conditions, where the flow over the blade is separated, transient simulations will be required.
NASA Astrophysics Data System (ADS)
Reif, John H.; Tyagi, Akhilesh
1997-10-01
Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT VLSIO (very-large-scale integrated optics) and the DFT circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat and Reif [Appl. Opt. 26, 1015 (1987)] and by Tyagi and Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990), p. 14].
Internal computational fluid mechanics on supercomputers for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Andersen, Bernhard H.; Benson, Thomas J.
1987-01-01
The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation, by estimating the numerical approximation error, computational model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Techniques for grid manipulation and adaptation. [computational fluid dynamics
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.
1992-01-01
Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-06-01
We present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh-Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
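As a toy illustration of the Rayleigh-Ritz step the algorithm economizes on, the sketch below runs plain block subspace iteration on a shifted matrix and performs a single Rayleigh-Ritz projection at the end. It is an assumption-laden sketch of the general idea only, not the authors' algorithm (which uses preconditioning and a far more careful iteration).

```python
import numpy as np

def smallest_invariant_subspace(A, k, iters=300, seed=0):
    """Block subspace iteration plus one final Rayleigh-Ritz projection
    for the k algebraically smallest eigenpairs of a symmetric matrix A.
    (Illustrative only; the paper's algorithm is more sophisticated.)"""
    n = A.shape[0]
    sigma = np.linalg.norm(A, 1)        # sigma >= spectral radius, so
                                        # sigma*I - A is positive semidefinite
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(sigma * Q - A @ Q)   # power step on sigma*I - A
    T = Q.T @ A @ Q                     # single Rayleigh-Ritz projection
    w, V = np.linalg.eigh(T)
    return w, Q @ V
```

The shift maps A's smallest eigenvalues to the dominant ones of sigma*I - A, so plain power steps converge to the desired subspace and only one small dense eigenproblem is solved at the end.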
Reconciling fault-tolerant distributed algorithms and real-time computing.
Moser, Heinrich; Schmid, Ulrich
We present generic transformations that allow one to translate classic fault-tolerant distributed algorithms and their correctness proofs into a real-time distributed computing model (and vice versa). Owing to the non-zero-time, non-preemptible state transitions employed in our real-time model, scheduling and queuing effects (which are inherently abstracted away in classic zero-step-time models, sometimes leading to overly optimistic time complexity results) can be accurately modeled. Our results thus make fault-tolerant distributed algorithms amenable to a sound real-time analysis, without sacrificing the wealth of algorithms and correctness proofs established in classic distributed computing research. By means of an example, we demonstrate that real-time algorithms generated by transforming classic algorithms can be competitive even with respect to optimal real-time algorithms, despite their comparatively simple real-time analysis.
Impact of Multiscale Retinex Computation on Performance of Segmentation Algorithms
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.
2004-01-01
Classical segmentation algorithms subdivide an image into its constituent components based upon some metric that defines commonality between pixels. Often, these metrics incorporate some measure of "activity" in the scene, e.g. the amount of detail that is in a region. The Multiscale Retinex with Color Restoration (MSRCR) is a general purpose, non-linear image enhancement algorithm that significantly affects the brightness, contrast and sharpness within an image. In this paper, we will analyze the impact the MSRCR has on segmentation results and performance.
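A minimal single-channel retinex can make the enhancement step concrete. The sketch below assumes the standard "log image minus log Gaussian surround, averaged over scales" form; the color-restoration step of the full MSRCR, and all production details, are omitted, and the default scales are illustrative.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Periodic Gaussian surround computed via the FFT (no SciPy needed)."""
    n0, n1 = img.shape
    y = np.minimum(np.arange(n0), n0 - np.arange(n0))
    x = np.minimum(np.arange(n1), n1 - np.arange(n1))
    g = np.exp(-(y[:, None]**2 + x[None, :]**2) / (2.0 * sigma**2))
    g /= g.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(g)))

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """log(image) - log(surround), averaged over several surround scales.
    The color restoration of the full MSRCR is omitted."""
    img = img.astype(float) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s))
    return out / len(sigmas)
```

A useful sanity check, relevant to the segmentation discussion above, is that a constant region produces zero retinex output, so any nonzero response reflects local "activity."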
Computer program for fast Karhunen Loeve transform algorithm
NASA Technical Reports Server (NTRS)
Jain, A. K.
1976-01-01
The fast KL transform algorithm was applied for data compression of a set of four ERTS multispectral images and its performance was compared with other techniques previously studied on the same image data. The performance criteria used here are mean square error and signal-to-noise ratio. The results obtained show a superior performance of the fast KL transform coding algorithm on the data set used with respect to the above stated performance criteria. A summary of the results is given in Chapter I and details of comparisons and discussion of conclusions are given in Chapter IV.
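The KL (Karhunen-Loeve) transform underlying the algorithm is, in modern terms, PCA applied to image blocks. The sketch below (an illustration of the transform-coding idea, not the report's fast algorithm) compresses by truncating the eigenbasis of the block covariance and measures the mean square error criterion used above.

```python
import numpy as np

def kl_compress(blocks, k):
    """Project mean-centered blocks onto the k leading eigenvectors of
    their covariance (the discrete KL basis) and reconstruct."""
    mean = blocks.mean(axis=0)
    X = blocks - mean
    w, V = np.linalg.eigh(X.T @ X / len(X))   # ascending eigenvalues
    basis = V[:, -k:]                          # k principal directions
    return (X @ basis) @ basis.T + mean

def mse(a, b):
    return float(np.mean((a - b) ** 2))
```

Keeping more basis vectors can only decrease the mean square error, and keeping all of them reconstructs the data exactly, which is the trade-off a KL coder exploits.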
1986-10-01
these theorems to find steady-state solutions of Markov chains are analysed. The results obtained in this way are then applied to quasi birth-death processes. Keywords: computations; algorithms; equilibrium equations.
Fast computing global structural balance in signed networks based on memetic algorithm
NASA Astrophysics Data System (ADS)
Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng
2014-12-01
Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, means measuring how unbalanced a signed network is; it is an NP-hard problem. Many approaches have been developed to compute global balance, but the results they obtain are partial and unsatisfactory. In this study, the computation of global structural balance is treated as an optimization problem using a memetic algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation function, the energy function, which measures the distance to exact balance. Our proposed algorithm combines a genetic algorithm and a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
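The energy function mentioned above can be illustrated by counting frustrated edges under a two-faction partition, a common definition of the distance to exact balance (the paper's exact formulation may differ; the data structures here are illustrative).

```python
def balance_energy(edges, signs, partition):
    """Number of frustrated edges under a two-faction partition:
    positive edges between factions plus negative edges inside a
    faction.  Energy 0 means the partition certifies exact balance."""
    energy = 0
    for (u, v), s in zip(edges, signs):
        same = partition[u] == partition[v]
        if (s > 0 and not same) or (s < 0 and same):
            energy += 1
    return energy
```

A balanced triangle (one positive edge, two negative) admits a zero-energy split, while the all-negative triangle is frustrated under every partition, which is exactly why minimizing this energy is a nontrivial optimization problem.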
A New Computer Algorithm for Simultaneous Test Construction of Two-Stage and Multistage Testing.
ERIC Educational Resources Information Center
Wu, Ing-Long
2001-01-01
Presents two binary programming models with a special network structure that can be explored computationally for simultaneous test construction. Uses an efficient special purpose network algorithm to solve these models. An empirical study illustrates the approach. (SLD)
NASA Technical Reports Server (NTRS)
Neal, L.
1981-01-01
A simple numerical algorithm was developed for use in computer simulations of systems which are both stiff and stable. The method is implemented in subroutine form and applied to the simulation of physiological systems.
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for the gamma and log-gamma functions of complex arguments are presented. The methods and algorithms reported include Chebyshev approximations, Padé expansions, and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual application or for inclusion in subroutine libraries.
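A sketch of the Stirling-series approach surveyed here: shift the argument upward with the recurrence Gamma(z+1) = z*Gamma(z) until the asymptotic series is accurate, then apply it. This is a generic illustration assuming Re(z) > 0, not Kuki's Algorithm 421, and the shift threshold and number of series terms are illustrative choices.

```python
import cmath
import math

def clgamma(z):
    """log-Gamma for complex z with Re(z) > 0: raise the argument with
    Gamma(z+1) = z*Gamma(z) until |z| >= 10, then apply Stirling's
    asymptotic series with three correction terms."""
    shift = 0j
    while abs(z) < 10:
        shift -= cmath.log(z)   # log Gamma(z) = log Gamma(z+1) - log z
        z = z + 1
    series = 1/(12*z) - 1/(360*z**3) + 1/(1260*z**5)
    return (z - 0.5) * cmath.log(z) - z + 0.5 * math.log(2 * math.pi) \
        + series + shift
```

On the positive real axis the result agrees with the standard real log-gamma, which is the kind of cross-check the survey's comparisons rely on.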
A robust multi-grid pressure-based algorithm for multi-fluid flow at all speeds
NASA Astrophysics Data System (ADS)
Darwish, M.; Moukalled, F.; Sekar, B.
2003-04-01
This paper reports on the implementation and testing, within a full non-linear multi-grid environment, of a new pressure-based algorithm for the prediction of multi-fluid flow at all speeds. The algorithm is part of the mass conservation-based algorithms (MCBA) group in which the pressure correction equation is derived from overall mass conservation. The performance of the new method is assessed by solving a series of two-dimensional two-fluid flow test problems varying from turbulent low Mach number to supersonic flows, and from very low to high fluid density ratios. Solutions are generated for several grid sizes using the single grid (SG), the prolongation grid (PG), and the full non-linear multi-grid (FMG) methods. The main outcomes of this study are: (i) a clear demonstration of the ability of the FMG method to tackle the added non-linearity of multi-fluid flows, which is manifested through the performance jump observed when using the non-linear multi-grid approach as compared to the SG and PG methods; (ii) the extension of the FMG method to predict turbulent multi-fluid flows at all speeds. The convergence history plots and CPU-times presented indicate that the FMG method is far more efficient than the PG method and accelerates the convergence rate over the SG method, for the problems solved and the grids used, by a factor reaching a value as high as 15.
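The multigrid idea the FMG method builds on can be illustrated in the simplest possible setting: a two-grid correction cycle for the 1D Poisson equation. This is a linear scalar sketch under illustrative choices (damped Jacobi smoothing, full weighting, exact coarse solve); the paper's nonlinear multi-fluid FMG solver is far more elaborate.

```python
import numpy as np

def two_grid_poisson(f, cycles=10):
    """Two-grid correction cycles for -u'' = f on (0,1) with zero
    Dirichlet ends and n = 2**k - 1 interior points: pre-smooth,
    restrict the residual, solve the coarse problem exactly,
    interpolate the correction, post-smooth."""
    n = len(f); h = 1.0 / (n + 1)
    nc = (n - 1) // 2; hc = 2.0 * h

    def apply_A(u):                          # fine-grid operator
        up = np.pad(u, 1)                    # zero Dirichlet padding
        return (2.0*u - up[:-2] - up[2:]) / h**2

    def smooth(u, sweeps=3, w=2.0/3.0):      # damped Jacobi smoother
        for _ in range(sweeps):
            up = np.pad(u, 1)
            u = (1.0-w)*u + w * (h*h*f + up[:-2] + up[2:]) / 2.0
        return u

    Ac = (2.0*np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2

    u = np.zeros(n)
    for _ in range(cycles):
        u = smooth(u)
        r = f - apply_A(u)
        rc = (r[0:-2:2] + 2.0*r[1:-1:2] + r[2::2]) / 4.0   # full weighting
        ec = np.linalg.solve(Ac, rc)                       # exact coarse solve
        e = np.zeros(n)
        e[1::2] = ec                                       # coarse nodes
        ecp = np.pad(ec, 1)
        e[0::2] = (ecp[:-1] + ecp[1:]) / 2.0               # linear interpolation
        u = smooth(u + e)
    return u
```

The smoother removes high-frequency error while the coarse solve removes the smooth error that Jacobi barely touches; their combination converges in a handful of cycles, which is the acceleration the FMG results above exhibit on a much harder problem.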
Fiantini, Rosalina; Umar, Efrizon
2010-06-22
The ongoing energy crisis has shifted the national energy policy from one based on natural resources to one based on technology, so the capability to understand basic and applied science is needed to support that policy. The national energy policy, which aims at exploiting new energy sources such as nuclear energy, includes efforts to increase reactor core safety, optimize related aspects, and build properly designed new research reactors. A previous analysis of the modified TRIGA 2000 reactor design indicated that forced convection in the primary coolant system affects the flow characteristics in the reactor core, but has a relatively insignificant effect on the flow velocity there. In the present analysis the lid of the reactor core is closed; however, the forced-convection effect is still present. The analysis shows the fluid-flow velocity vectors throughout the model area. Its results indicate that forced-convection effects still occur in the original TRIGA 2000 design, but are weaker than in the modified TRIGA 2000 design.
Embedded assessment algorithms within home-based cognitive computer game exercises for elders.
Jimison, Holly; Pavel, Misha
2006-01-01
With the recent consumer interest in computer-based activities designed to improve cognitive performance, there is a growing need for scientific assessment algorithms to validate the potential contributions of cognitive exercises. In this paper, we present a novel methodology for incorporating dynamic cognitive assessment algorithms within computer games designed to enhance cognitive performance. We describe how this approach works for a variety of computer applications and describe cognitive monitoring results for one of the computer game exercises. The real-time cognitive assessments also provide a control signal for adapting the difficulty of the game exercises and providing tailored help for elders of varying abilities.
A simple algorithm for computing positively weighted straight skeletons of monotone polygons.
Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter
2015-02-01
We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in [Formula: see text] time and [Formula: see text] space, where n denotes the number of vertices of the polygon.
ERIC Educational Resources Information Center
Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul
2012-01-01
This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…
A new fast algorithm for computing a complex number: Theoretic transforms
NASA Technical Reports Server (NTRS)
Reed, I. S.; Liu, K. Y.; Truong, T. K.
1977-01-01
A high-radix fast Fourier transform (FFT) algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
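The transform-based convolution idea can be sketched with a small number-theoretic transform modulo an ordinary prime. The paper works over GF(q²) with q a Mersenne prime, which is what enables the high-radix FFT structure; the naive O(n²) transform, modulus, and root below are chosen only for illustration.

```python
def ntt_convolve(a, b, p=17, root=4):
    """Circular convolution of integer sequences via a naive
    number-theoretic transform modulo the prime p.  The length n must
    divide p - 1 and root must have multiplicative order n mod p
    (here n = 4, p = 17, root = 4); results are reduced mod p."""
    n = len(a)
    def ntt(x, w):
        return [sum(x[j] * pow(w, i * j, p) for j in range(n)) % p
                for i in range(n)]
    A, B = ntt(a, root), ntt(b, root)
    C = [(s * t) % p for s, t in zip(A, B)]
    root_inv, n_inv = pow(root, -1, p), pow(n, -1, p)   # Python 3.8+ inverses
    return [(v * n_inv) % p for v in ntt(C, root_inv)]
```

As with the complex DFT, pointwise multiplication in the transform domain equals circular convolution in the original domain, but all arithmetic is exact modular arithmetic with no roundoff.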
CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W
2012-06-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high-performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, register pressure is reduced through variable reuse via shared memory, and data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved by reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance is optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm.
1984-06-06
Modifications to Iterative Recursion Unfolding Algorithms and Computer Codes to Find More Appropriate Neutron Spectra
Lowry, L. A.; Johnson, T. L.
Health Physics
The unfolding of neutron spectra using data from activation foils, Bonner spheres, or other
2014-01-01
Background: To avoid the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is a promising solution; the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results: This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully be used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions: Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel
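Only the PSO half of the hybrid GA-PSO method is sketched below, in serial form with standard constriction-style coefficients; the GA coupling, the MapReduce parallelization, and all parameter values are omitted or illustrative assumptions.

```python
import numpy as np

def pso(fitness, dim=4, particles=20, iters=200, seed=1):
    """Minimal global-best particle swarm optimizer.  Each particle
    tracks its personal best; velocities are pulled toward both the
    personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([fitness(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

In a MapReduce setting of the kind described above, the fitness evaluations (the expensive, embarrassingly parallel part) are what the map phase distributes, while the reduce phase aggregates the best candidates.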
The Reliability of Diagnoses by Technician, Computer, and Algorithm.
ERIC Educational Resources Information Center
Johnson, James H.; And Others
1980-01-01
Describes a computer assisted system for intake assessment. Reports on two experiments that compared the reliability of a diagnostic procedure that involves technicians, a structured interview schedule, and a computerized diagnostic program with diagnoses made by clinicians. Results show the computer assisted technician approach is as reliable as…
Simple and Effective Algorithms: Computer-Adaptive Testing.
ERIC Educational Resources Information Center
Linacre, John Michael
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Timing formulas for dissection algorithms on vector computers
NASA Technical Reports Server (NTRS)
Poole, W. G., Jr.
1977-01-01
The use of the finite element and finite difference methods often leads to the problem of solving large, sparse, positive definite systems of linear equations. MACSYMA plays a major role in the generation of formulas representing the time required for execution of the dissection algorithms. The use of MACSYMA in the generation of those formulas is described.
NASA Computational Fluid Dynamics Conference. Volume 1: Sessions 1-6
NASA Technical Reports Server (NTRS)
1989-01-01
Presentations given at the NASA Computational Fluid Dynamics (CFD) Conference, held at the NASA Ames Research Center, Moffett Field, California, March 7-9, 1989, are collected. Topics covered include research facility overviews of CFD research and applications, validation programs, direct simulation of compressible turbulence, turbulence modeling, advances in Runge-Kutta schemes for solving 3-D Navier-Stokes equations, grid generation and inviscid flow computation around aircraft geometries, numerical simulation of rotorcraft, and viscous drag prediction for rotor blades.
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. Consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
Damage Mechanics of Composite Materials: Constitutive Modeling and Computational Algorithms
1991-04-21
Kok Yan Chan, G.; Sclavounos, P. D.; Jonkman, J.; Hayman, G.
2015-04-02
A hydrodynamics computer module was developed for the evaluation of the linear and nonlinear loads on floating wind turbines using a new fluid-impulse formulation for coupling with the FAST program. The recently developed formulation allows the computation of linear and nonlinear loads on floating bodies in the time domain and avoids the computationally intensive evaluation of temporal and nonlinear free-surface problems; efficient methods are derived for its computation. The body's instantaneous wetted surface is approximated by a panel mesh and the discretization of the free surface is circumvented by using the Green function. The evaluation of the nonlinear loads is based on explicit expressions derived from fluid-impulse theory, which can be computed efficiently. Computations are presented of the linear and nonlinear loads on the MIT/NREL tension-leg platform. Comparisons were carried out with frequency-domain linear and second-order methods. Emphasis was placed on modeling accuracy of the magnitude of nonlinear low- and high-frequency wave loads in a sea state. Although fluid-impulse theory is applied to floating wind turbines in this paper, the theory is applicable to other offshore platforms as well.
SALE-3D: a simplified ALE computer program for calculating three-dimensional fluid flow
Amsden, A.A.; Ruppel, H.M.
1981-11-01
This report presents a simplified numerical fluid-dynamics computing technique for calculating time-dependent flows in three dimensions. An implicit treatment of the pressure equation permits calculation of flows far subsonic without stringent constraints on the time step. In addition, the grid vertices may be moved with the fluid in Lagrangian fashion or held fixed in an Eulerian manner, or moved in some prescribed manner to give a continuous rezoning capability. This report describes the combination of Implicit Continuous-fluid Eulerian (ICE) and Arbitrary Lagrangian-Eulerian (ALE) to form the ICEd-ALE technique in the framework of the Simplified-ALE (SALE-3D) computer program, for which a general flow diagram and complete FORTRAN listing are included. Sample problems show how to modify the code for a variety of applications. SALE-3D is patterned as closely as possible on the previously reported two-dimensional SALE program.
On current aspects of finite element computational fluid mechanics for turbulent flows
NASA Technical Reports Server (NTRS)
Baker, A. J.
1982-01-01
A set of nonlinear partial differential equations suitable for the description of a class of turbulent three-dimensional flow fields in select geometries is identified. On the basis of the concept of enforcing a penalty constraint to ensure accurate accounting of ordering effects, a finite element numerical solution algorithm is established for the equation set, and the theoretical aspects of accuracy, convergence, and stability are identified and quantified. Hypermatrix constructions are used to formulate the reduction of the computational aspects of the theory to practice. The robustness of the algorithm, and of the computer program embodiment, has been verified for pertinent flow configurations.
REMOVAL OF TANK AND SEWER SEDIMENT BY GATE FLUSHING: COMPUTATIONAL FLUID DYNAMICS MODEL STUDIES
This presentation will discuss the application of a computational fluid dynamics 3D flow model to simulate gate flushing for removing tank/sewer sediments. The physical model of the flushing device was a tank fabricated and installed at the head-end of a hydraulic flume. The fl...
A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS
This paper discusses a framework for fine-scale CFD modeling that may be developed to complement the present Community Multi-scale Air Quality (CMAQ) modeling system which itself is a computational fluid dynamics model. A goal of this presentation is to stimulate discussions on w...
Three-dimensional Computational Fluid Dynamics Investigation of a Spinning Helicopter Slung Load
NASA Technical Reports Server (NTRS)
Theorn, J. N.; Duque, E. P. N.; Cicolani, L.; Halsey, R.
2005-01-01
After performing steady-state Computational Fluid Dynamics (CFD) calculations using OVERFLOW to validate the CFD method against static wind-tunnel data of a box-shaped cargo container, the same setup was used to investigate unsteady flow with a moving body. Results were compared to flight test data previously collected in which the container is spinning.
NASA Astrophysics Data System (ADS)
Cherunova, I.; Kornev, N.; Jacobi, G.; Treshchun, I.; Gross, A.; Turnow, J.; Schreier, S.; Paschen, M.
2014-07-01
Three examples of the use of computational fluid dynamics for designing clothing that protects a human body from high and low temperatures, with and without an incident air flow, are presented. The internal thermodynamics of a human body and its interaction with the surroundings were investigated. The inner and outer problems were considered separately, each with its own boundary conditions.
77 FR 64834 - Computational Fluid Dynamics Best Practice Guidelines for Dry Cask Applications
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-23
... COMMISSION Computational Fluid Dynamics Best Practice Guidelines for Dry Cask Applications AGENCY: Nuclear... Dynamics Best Practice Guidelines for Dry Cask Applications.'' The draft NUREG-2152 report provides best... can be controlled and quantified by the user are then discussed in detail, and best...
NASA Technical Reports Server (NTRS)
Ziebarth, John P.; Meyer, Doug
1992-01-01
The coordination of the necessary resources, facilities, and special personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined. This involves the coordination of CFD activities among government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.
Mesh and Time-Step Independent Computational Fluid Dynamics (CFD) Solutions
ERIC Educational Resources Information Center
Nijdam, Justin J.
2013-01-01
A homework assignment is outlined in which students learn Computational Fluid Dynamics (CFD) concepts of discretization, numerical stability and accuracy, and verification in a hands-on manner by solving physically realistic problems of practical interest to engineers. The students solve a transient-diffusion problem numerically using the common…
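A transient-diffusion exercise of the kind described can be sketched with the explicit FTCS scheme, whose stability constraint r = alpha*dt/dx² <= 1/2 makes the numerical-stability lesson concrete. This is an illustrative sketch, not the assignment's actual code, and the parameter names are assumptions.

```python
import numpy as np

def diffuse_ftcs(u0, alpha, dx, dt, steps):
    """Explicit (FTCS) finite-difference solution of the 1D transient
    diffusion equation u_t = alpha * u_xx with fixed end values.  The
    scheme is stable only for r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "unstable time step: reduce dt or coarsen the grid"
    u = u0.astype(float).copy()
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0*u[1:-1] + u[:-2])
    return u
```

With the initial profile sin(pi*x) on the unit interval and zero end values, the exact solution decays by exp(-alpha*pi²*t), giving students an analytic answer to verify their numerical one against.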
Computer simulation studies in fluid and calcium regulation and orthostatic intolerance
NASA Technical Reports Server (NTRS)
1985-01-01
The systems analysis approach to physiological research uses mathematical models and computer simulation. Major areas of concern during prolonged space flight discussed include fluid and blood volume regulation; cardiovascular response during shuttle reentry; countermeasures for orthostatic intolerance; and calcium regulation and bone atrophy. Potential contributions of physiologic math models to future flight experiments are examined.
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Spacecraft components may be damaged due to airflow produced by Environmental Control Systems (ECS). There are uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field around a spacecraft from the ECS System. This paper describes an approach to estimate the uncertainty in using CFD to predict the airflow speeds around an encapsulated spacecraft.
An Innovative Improvement of Engineering Learning System Using Computational Fluid Dynamics Concept
ERIC Educational Resources Information Center
Hung, T. C.; Wang, S. K.; Tai, S. W.; Hung, C. T.
2007-01-01
An innovative concept of an electronic learning system has been established in an attempt to achieve a technology that provides engineering students with an instructive and affordable framework for learning engineering-related courses. This system utilizes an existing Computational Fluid Dynamics (CFD) package, Active Server Pages programming,…
Markov Algorithms for Computing the Reliability of Staged Networks.
1986-04-01
on an IBM Personal Computer AT, to calculate Pst for the dodecahedron network of Fig. 3 and the grid network of Fig. 4. The computation time was 4 1/2...network used in [1], which is in effect a dodecahedron reduced by 3 nodes and 5 arcs, Bailey and Kulkarni report timings of 54 minutes, 8 minutes and...staging reduced the computing time, from the 52 minutes quoted previously, to 1 minute 58 seconds. A similar use of overlapping stages for the dodecahedron
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, which consists of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time, and a minimax optimization is performed on this objective function to obtain the best mapping. The mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed that obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and performs module assignment with a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by a heuristic algorithm with simulated annealing. These optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
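As an illustration of the general idea (not the paper's exact weighted-bipartite-matching method), the following is a minimal level-based list-scheduling sketch; the function names, the priority rule, and the tie-breaking choices are our own assumptions:

```python
# Hedged sketch of priority-list scheduling of task modules with
# precedence constraints on p identical processors. A module's "level"
# (longest path of costs to a sink) orders the ready list.

def levels(tasks, succ):
    """Longest-path-to-sink level for each task (larger = more urgent)."""
    memo = {}
    def lvl(t):
        if t not in memo:
            memo[t] = tasks[t] + max((lvl(s) for s in succ.get(t, [])), default=0)
        return memo[t]
    return {t: lvl(t) for t in tasks}

def list_schedule(tasks, succ, p):
    """tasks: {name: cost}; succ: {name: [successors]}; p processors.
    Returns (makespan, assignment {name: processor index})."""
    pred_count = {t: 0 for t in tasks}
    for t, ss in succ.items():
        for s in ss:
            pred_count[s] += 1
    lv = levels(tasks, succ)
    ready = sorted((t for t in tasks if pred_count[t] == 0),
                   key=lambda t: -lv[t])
    finish = [0.0] * p          # per-processor finishing time
    done_at = {}                # task -> completion time
    assign = {}
    while ready:
        t = ready.pop(0)
        # a ready task's predecessors are all complete; find when
        preds_done = max((done_at[u] for u in tasks
                          if t in succ.get(u, [])), default=0.0)
        proc = min(range(p), key=lambda i: max(finish[i], preds_done))
        start = max(finish[proc], preds_done)
        finish[proc] = start + tasks[t]
        done_at[t] = finish[proc]
        assign[t] = proc
        for s in succ.get(t, []):
            pred_count[s] -= 1
            if pred_count[s] == 0:
                ready.append(s)
        ready.sort(key=lambda t: -lv[t])
    return max(finish), assign
```

On a toy graph a -> b plus an independent task c, the chain a, b stays on one processor while c runs on the other.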
Fast one-pass algorithm to label objects and compute their features
NASA Astrophysics Data System (ADS)
Thai, Tan Q.
1991-12-01
In many image processing applications, labeling objects and computing their features for recognition are crucial steps for further analysis. In general, these two steps are done separately. This paper proposes a new approach to label all objects and compute their features (such as moments, best fit ellipse, major and minor axis) in one pass. The basic idea of the algorithm is to detect interval overlaps among the line segments as the image is scanned from left to right, top to bottom. Ambiguity about an object's connectivity can also be resolved with the proposed algorithm. It is a fast algorithm and can be implemented on either serial or parallel processors.
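A minimal sketch of the run-based idea, assuming a simple union-find over row intervals; the data structures and the feature set (area and centroid only) are our own illustration, not the paper's implementation:

```python
# One-pass, run-based connected-component labeling (4-connectivity)
# that accumulates per-object features while scanning.

class DSU:
    def __init__(self):
        self.parent = {}
        self.feat = {}                      # root -> [area, sum_x, sum_y]
    def make(self, label):
        self.parent[label] = label
        self.feat[label] = [0, 0, 0]
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:                        # merge features into new root
            self.parent[rb] = ra
            fa, fb = self.feat[ra], self.feat[rb]
            self.feat[ra] = [fa[i] + fb[i] for i in range(3)]

def label_and_measure(img):
    """img: list of rows of 0/1. Returns a list of (area, cx, cy)."""
    dsu, next_label = DSU(), 0
    prev_runs = []                          # (x0, x1, label) on the row above
    for y, row in enumerate(img):
        runs, x = [], 0
        while x < len(row):
            if row[x]:
                x0 = x
                while x < len(row) and row[x]:
                    x += 1
                dsu.make(next_label)
                # merge with overlapping runs of the previous row
                for (p0, p1, pl) in prev_runs:
                    if p0 < x and p1 > x0:  # half-open intervals overlap
                        dsu.union(pl, next_label)
                runs.append((x0, x, next_label))
                # accumulate this run's pixels into the current root
                f = dsu.feat[dsu.find(next_label)]
                f[0] += x - x0
                f[1] += sum(range(x0, x))
                f[2] += y * (x - x0)
                next_label += 1
            else:
                x += 1
        prev_runs = runs
    roots = {dsu.find(l) for l in dsu.parent}
    return [(f[0], f[1] / f[0], f[2] / f[0])
            for f in (dsu.feat[r] for r in roots)]
```

Merging roots (rather than raw labels) is what resolves the connectivity ambiguity of U-shaped objects in a single scan.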
Testing the race model inequality: an algorithm and computer programs.
Ulrich, Rolf; Miller, Jeff; Schröter, Hannes
2007-05-01
In divided-attention tasks, responses are faster when two target stimuli are presented, and thus one is redundant, than when only a single target stimulus is presented. Raab (1962) suggested an account of this redundant-targets effect in terms of a race model in which the response to redundant target stimuli is initiated by the faster of two separate target detection processes. Such models make a prediction about the probability distributions of reaction times that is often called the race model inequality, and it is often of interest to test this prediction. In this article, we describe a precise algorithm that can be used to test the race model inequality and present MATLAB routines and a Pascal program that implement this algorithm.
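The race model inequality itself, F_red(t) <= F_1(t) + F_2(t), can be checked on empirical distribution functions in a few lines; this sketch shows only the core comparison, not the published interpolation-based procedure or the MATLAB/Pascal implementations:

```python
# Minimal empirical check of the race model inequality on reaction times.

def ecdf(sample, t):
    """Empirical cumulative distribution function of sample at time t."""
    return sum(1 for x in sample if x <= t) / len(sample)

def race_model_violations(rt_redundant, rt_single1, rt_single2, ts):
    """Return the times t in ts at which F_red(t) > F1(t) + F2(t),
    i.e. where the race model inequality is violated."""
    return [t for t in ts
            if ecdf(rt_redundant, t) > ecdf(rt_single1, t) + ecdf(rt_single2, t)]
```

An empty result means the observed redundant-target distribution is consistent with a race between the two single-target detection processes at the checked time points.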
An optimal algorithm for computing all subtree repeats in trees
Flouri, T.; Kobert, K.; Pissis, S. P.; Stamatakis, A.
2014-01-01
Given a labelled tree T, our goal is to group repeating subtrees of T into equivalence classes with respect to their topologies and the node labels. We present an explicit, simple and time-optimal algorithm for solving this problem for unrooted unordered labelled trees and show that the running time of our method is linear with respect to the size of T. By unordered, we mean that the order of the adjacent nodes (children/neighbours) of any node of T is irrelevant. An unrooted tree T does not have a node that is designated as root and can also be referred to as an undirected tree. We show how the presented algorithm can easily be modified to operate on trees that do not satisfy some or any of the aforementioned assumptions on the tree structure; for instance, how it can be applied to rooted, ordered or unlabelled trees. PMID:24751873
Embedded diagonally implicit Runge-Kutta algorithms on parallel computers
NASA Astrophysics Data System (ADS)
van der Houwen, P. J.; Sommeijer, B. P.; Couzy, W.
1992-01-01
This paper investigates diagonally implicit Runge-Kutta methods in which the implicit relations can be solved in parallel and are singly diagonally implicit on each processor. The algorithms are based on diagonally implicit iteration of fully implicit Runge-Kutta methods of high order. The iteration scheme is chosen in such a way that the resulting algorithm is A(α)-stable or L(α)-stable with α equal to or very close to π/2. In this way, highly stable, singly diagonally implicit Runge-Kutta methods of orders up to 10 can be constructed. Because of the iterative nature of the methods, embedded formulas of lower orders are automatically available, allowing a strategy for step and order variation.
Non-Algorithmic Issues in Automated Computational Mechanics
1991-04-30
Table-of-contents excerpt: discretization of the model; selection of computational methods and strategies; numerical analysis; automated strategy selection and performance monitoring; knowledge bases for the coupled PHLEX-NEXPERT environment (strategy selection knowledge base; performance control).
A computational algorithm for crack determination: The multiple crack case
NASA Technical Reports Server (NTRS)
Bryan, Kurt; Vogelius, Michael
1992-01-01
An algorithm for recovering a collection of linear cracks in a homogeneous electrical conductor from boundary measurements of voltages induced by specified current fluxes is developed. The technique is a variation of Newton's method and is based on taking weighted averages of the boundary data. The method also adaptively changes the applied current flux at each iteration to maintain maximum sensitivity to the estimated locations of the cracks.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer
NASA Technical Reports Server (NTRS)
Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw
2000-01-01
Numerical experiments were conducted to find out the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other, so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure of up to 1080 design variables. The experimentation, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared to a conventional single-processor implementation. It also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, raising efficiency to over 99 percent.
The Center for Computational Sciences and Engineering (CCSE) develops and applies advanced computational methodologies to solve large-scale scientific and engineering problems arising in the Department of Energy (DOE) mission areas involving energy, environmental, and industrial technology. The primary focus is the application of structured-grid finite difference methods on adaptive grid hierarchies for compressible, incompressible, and low Mach number flows. The diverse range of scientific applications that drive the research typically involve a large range of spatial and temporal scales (e.g., turbulent reacting flows) and require the use of extremely large computing hardware, such as the 153,000-core computer, Hopper, at NERSC. The CCSE approach to these problems centers on the development and application of advanced algorithms that exploit known separations in scale; for many of the application areas this results in algorithms that are several orders of magnitude more efficient than traditional simulation approaches.
[A fast non-local means algorithm for denoising of computed tomography images].
Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong
2012-11-01
A fast non-local means image denoising algorithm is presented, based on the single-motif character of computed tomography images in medical archiving systems. The algorithm is carried out in two stages: preprocessing and actual processing. In the preprocessing stage, a sample-neighborhood database is created using the data structure of locality sensitive hashing. The CT image noise is then removed by the non-local means algorithm, based on the sample neighborhoods accessed quickly through locality sensitive hashing. The experimental results showed that the proposed algorithm could greatly reduce the execution time, as compared to NLM, while effectively preserving image edges and details.
NASA Technical Reports Server (NTRS)
Atwood, Christopher A.
1993-01-01
The June 1992 to May 1993 grant NCC-2-677 provided for the continued demonstration of Computational Fluid Dynamics (CFD) as applied to the Stratospheric Observatory for Infrared Astronomy (SOFIA). While earlier grant years allowed validation of CFD through comparison against experiments, this year a new design proposal was evaluated. The new configuration would place the cavity aft of the wing, as opposed to the earlier baseline which was located immediately aft of the cockpit. This aft cavity placement allows for simplified structural and aircraft modification requirements, thus lowering the program cost of this national astronomy resource. Three appendices concerning this subject are presented.
Noise filtering algorithm for the MFTF-B computer based control system
Minor, E.G.
1983-11-30
An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions.
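A dead-band filter of the kind described can be sketched in a few lines; the class shape and threshold semantics here are our own assumptions for illustration, not the MFTF-B code (the report notes the real per-channel state fits in four bytes):

```python
# Hedged sketch of a noise-rejecting dead-band filter: an input is
# reported only when it moves more than a threshold away from the last
# reported value, keeping the data base updated with minimal messages.

class DeadbandFilter:
    def __init__(self, threshold):
        self.threshold = threshold
        self.reference = None       # last value reported to the data base

    def update(self, value):
        """Return value if it is a significant change, else None."""
        if self.reference is None or abs(value - self.reference) > self.threshold:
            self.reference = value  # report and remember it
            return value
        return None                 # treated as noise, not reported
```

As the abstract notes, only a subtraction and a comparison are needed per sample, and for zero-mean additive noise the reported value tracks the mean of the monitored variable.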
He, Lifeng; Chao, Yuyan
2015-09-01
Labeling connected components and calculating the Euler number in a binary image are two fundamental processes for computer vision and pattern recognition. This paper presents an ingenious method for identifying a hole in a binary image in the first scan of connected-component labeling. Our algorithm can perform connected component labeling and Euler number computing simultaneously, and it can also calculate the connected component (object) number and the hole number efficiently. The additional cost for calculating the hole number is only O(H) , where H is the hole number in the image. Our algorithm can be implemented almost in the same way as a conventional equivalent-label-set-based connected-component labeling algorithm. We prove the correctness of our algorithm and use experimental results for various kinds of images to demonstrate the power of our algorithm.
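For comparison, the Euler number (objects minus holes) of a binary image can also be obtained by the classical bit-quad counting method; this sketch illustrates that baseline, not the authors' labeling-based algorithm, which derives the hole count during the labeling scan:

```python
# Euler number of a binary image under 4-connectivity via Gray's
# bit-quad counts: E = (Q1 - Q3 + 2 * Qd) / 4, where Q1, Q3, Qd count
# 2x2 windows with one foreground pixel, three foreground pixels, and
# a diagonal foreground pair, respectively.

def euler_number_4(img):
    """Euler number (objects - holes) of a 0/1 image, 4-connectivity."""
    h, w = len(img), len(img[0])
    # pad with a zero border so every pixel is covered by four quads
    p = [[0] * (w + 2)] + [[0] + list(r) + [0] for r in img] + [[0] * (w + 2)]
    q1 = q3 = qd = 0
    for y in range(h + 1):
        for x in range(w + 1):
            quad = (p[y][x], p[y][x + 1], p[y + 1][x], p[y + 1][x + 1])
            s = sum(quad)
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and quad in ((1, 0, 0, 1), (0, 1, 1, 0)):
                qd += 1             # diagonally opposed pair
    return (q1 - q3 + 2 * qd) // 4
```

A single pixel gives 1 (one object, no holes), a 3x3 ring gives 0 (one object, one hole), and two diagonal pixels give 2 under 4-connectivity.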
Ott, Daniel; Thompson, Robert; Song, Junfeng
2017-02-01
In order for a crime laboratory to assess a firearms examiner's training, skills, experience, and aptitude, it is necessary for the examiner to participate in proficiency testing. As computer algorithms for comparisons of pattern evidence become more prevalent, it is of interest to test algorithm performance as well, using these same proficiency examinations. This article demonstrates the use of the Congruent Matching Cell (CMC) algorithm to compare 3D topography measurements of breech face impressions and firing pin impressions from a previously distributed firearms proficiency test. In addition, the algorithm is used to analyze the distribution of many comparisons from a collection of cartridge cases used to construct another recent set of proficiency tests. These results are provided along with visualizations that help to relate the features used in optical comparisons by examiners to the features used by computer comparison algorithms.
Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School
ERIC Educational Resources Information Center
Avancena, Aimee Theresa; Nishihara, Akinori
2014-01-01
Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…
NASA Astrophysics Data System (ADS)
Olivas-Martinez, Miguel; Sohn, Hong Yong; Jang, Hee Dong; Rhee, Kang-In
2015-07-01
A computational fluid dynamic model that couples the fluid dynamics with various processes involving precursor droplets and product particles during the flame spray pyrolysis (FSP) synthesis of silica nanopowder from volatile precursors is presented. The synthesis of silica nanopowder from tetraethylorthosilicate and tetramethylorthosilicate in bench- and pilot-scale FSP reactors, with the ultimate purpose of industrial-scale production, was simulated. The transport and evaporation of liquid droplets are simulated from the Lagrangian viewpoint. The quadrature method of moments is used to solve the population balance equation for particles undergoing homogeneous nucleation and Brownian collision. The nucleation rate is computed based on the rates of thermal decomposition and oxidation of the precursor with no adjustable parameters. The computed results show that the model is capable of reproducing the magnitude as well as the variations of the average particle diameter with different experimental conditions using a single value of the collision efficiency factor α for a given reactor size.
FAST: A multi-processed environment for visualization of computational fluid dynamics
NASA Technical Reports Server (NTRS)
Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin; Globus, AL
1991-01-01
Three-dimensional, unsteady, multizoned fluid dynamics simulations over full-scale aircraft are typical of problems being computed at NASA-Ames on Cray-2 and Cray Y-MP supercomputers. With multiple-processor workstations available in the 10 to 30 Mflop range, it is felt that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These large, more complex problems create a need for new visualization techniques not possible with the existing software or systems available at this time. These visualization techniques will change as the supercomputing environment, and hence the scientific methods used, evolve even further. Visualization of computational aerodynamics requires flexible, extensible, and adaptable software tools for performing analysis tasks. FAST (Flow Analysis Software Toolkit), an implementation of a software system for fluid mechanics analysis based on this approach, is discussed.
Algorithmic Mechanisms for Reliable Crowdsourcing Computation under Collusion
Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Pareja, Daniel
2015-01-01
We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers’ decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game. PMID:25793524
Fast parallel algorithms that compute transitive closure of a fuzzy relation
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.
1993-01-01
The notion of a transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires computation time O(n^4), where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a not equal to b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in less than O(n^2) steps; in this sense, Dunn's algorithm is optimal. For small n this is acceptable, but for big n (e.g., for big databases) it is still a lot, so it would be desirable to decrease the computation time (a problem formulated by J. Bezdek). Since this decrease cannot be achieved on a sequential computer, the only way is to use a computer with several processors working in parallel. We show that on a parallel computer, the transitive closure can be computed in time O((log2 n)^2).
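For reference, the max-min transitive closure itself can be computed by naive repeated squaring; this sketch only illustrates the definition and is far less efficient than Dunn's O(n^2) algorithm or the parallel scheme discussed above:

```python
# Max-min transitive closure of a fuzzy relation r (square matrix of
# membership grades in [0, 1]): the smallest relation containing r
# with r*(a, c) >= min(r*(a, b), r*(b, c)) for all b.

def maxmin_compose(r, s):
    """Max-min composition of two fuzzy relations on the same set."""
    n = len(r)
    return [[max(min(r[i][k], s[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

def transitive_closure(r):
    """Iterate cur <- (cur o cur) union cur until a fixed point."""
    n = len(r)
    cur = [row[:] for row in r]
    while True:
        comp = maxmin_compose(cur, cur)
        nxt = [[max(comp[i][j], cur[i][j]) for j in range(n)]
               for i in range(n)]
        if nxt == cur:
            return cur
        cur = nxt
```

For a chain a -> b with grade 0.8 and b -> c with grade 0.5, the closure links a -> c with grade min(0.8, 0.5) = 0.5.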
A subspace preconditioning algorithm for eigenvector/eigenvalue computation
Bramble, J.H.; Knyazev, A.V.; Pasciak, J.E.
1996-12-31
We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigen-spaces of a symmetric positive definite matrix. In our applications, the dimension of a matrix is large and the cost of its inverting is prohibitive. In this paper, we shall develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning. Estimates will be provided which show that the preconditioned method converges linearly and uniformly in the matrix dimension when used with a uniform preconditioner under the assumption that the approximating subspace is close enough to the span of desired eigenvectors.
Efficient computer algorithms for infrared astronomy data processing
NASA Technical Reports Server (NTRS)
Pelzmann, R. F., Jr.
1976-01-01
Data processing techniques to be studied for use in infrared astronomy data analysis systems are outlined. Only data from space based telescope systems operating as survey instruments are considered. Resulting algorithms, and in some cases specific software, will be applicable for use with the infrared astronomy satellite (IRAS) and the shuttle infrared telescope facility (SIRTF). Operational tests made during the investigation use data from the celestial mapping program (CMP). The overall task differs from that involved in ground-based infrared telescope data reduction.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
Athans, M. (Editor); Willsky, A. S. (Editor)
1982-01-01
The analysis and design of complex multivariable reliable control systems are considered. High performance and fault tolerant aircraft systems are the objectives. A preliminary feasibility study of the design of a lateral control system for a VTOL aircraft that is to land on a DD963 class destroyer under high sea state conditions is provided. Progress in the following areas is summarized: (1) VTOL control system design studies; (2) robust multivariable control system synthesis; (3) adaptive control systems; (4) failure detection algorithms; and (5) fault tolerant optimal control theory.
Dynamic programming and graph algorithms in computer vision.
Felzenszwalb, Pedro F; Zabih, Ramin
2011-04-01
Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
Technology Transfer Automated Retrieval System (TEKTRAN)
Computer simulation is a useful tool for benchmarking the electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature short time (HTST) pasteurization was extended to include models for pr...
ERIC Educational Resources Information Center
Dershem, Herbert L.
These modules view aspects of computer use in the problem-solving process, and introduce techniques and ideas that are applicable to other modes of problem solving. The first unit looks at algorithms, flowchart language, and problem-solving steps that apply this knowledge. The second unit describes ways in which computer iteration may be used…
Boman, Erik G.; Catalyurek, Umit V.; Chevalier, Cedric; Devine, Karen D.; Gebremedhin, Assefaw H.; Hovland, Paul D.; Pothen, Alex; Rajamanickam, Sivasankaran; Safro, Ilya; Wolf, Michael M.; Zhou, Min
2015-01-16
This final progress report summarizes the work accomplished at the Combinatorial Scientific Computing and Petascale Simulations Institute. We developed Zoltan, a parallel mesh partitioning library that made use of accurate hypergraph models to provide load balancing in mesh-based computations. We developed several graph coloring algorithms for computing Jacobian and Hessian matrices and organized them into a software package called ColPack. We developed parallel algorithms for graph coloring and graph matching problems, and also designed multi-scale graph algorithms. Three PhD students graduated, six more are continuing their PhD studies, and four postdoctoral scholars were advised. Six of these students and Fellows have joined DOE Labs (Sandia, Berkeley), as staff scientists or as postdoctoral scientists. We also organized the SIAM Workshop on Combinatorial Scientific Computing (CSC) in 2007, 2009, and 2011 to continue to foster the CSC community.
A Computer Algorithm from DeMoivre's Theorem.
ERIC Educational Resources Information Center
Boyd, James N.
1982-01-01
Details are given of a simple computer program written in BASIC which calculates the sine of an angle through an application of DeMoivre's Theorem. The program is included in the material, and the program's success is discussed in terms of why the approximation works. (MP)
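A guess at the spirit of such a program, in Python rather than BASIC: split the angle into n small pieces where sin t ≈ t, then apply De Moivre's theorem, (cos t + i sin t)^n = cos nt + i sin nt:

```python
# Approximate sin(x) via De Moivre's theorem. For t = x / n with large
# n, take sin t ≈ t and cos t = sqrt(1 - t*t) (chosen so |z| = 1
# exactly); then z**n ≈ cos x + i sin x, and the imaginary part is the
# answer. The approximation works because the per-step angle error,
# of order t**3 / 6, accumulates to only about x**3 / (6 * n**2).

def demoivre_sin(x, n=1024):
    t = x / n
    z = complex((1 - t * t) ** 0.5, t)   # ≈ cos t + i sin t
    return (z ** n).imag                 # imaginary part ≈ sin x
```

With n = 1024 the result for x = π/6 agrees with sin(π/6) = 0.5 to better than one part in a million.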
Algorithmic Tools and Computational Frameworks for Cell Informatics
2006-04-01
Excerpts: simulations of various biological systems, including C. elegans gonad tract cells; in this context, several experiments on the nematode C. elegans were conducted in cooperation with colleagues in the NYU Department of Biology, in order to test...proliferation. No animal research was conducted under this project. To this end, a rigorous computational model of C. elegans germ line stem cell growth
Chandrasekhar equations and computational algorithms for distributed parameter systems
NASA Technical Reports Server (NTRS)
Burns, J. A.; Ito, K.; Powers, R. K.
1984-01-01
The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Srinivasan, R.; Charles, J. B.
1992-01-01
Fluid shifts in weightlessness may cause a central volume expansion, activating reflexes to reduce the blood volume. Computer simulation was used to test the hypothesis that preadaptation of the blood volume prior to exposure to weightlessness could counteract the central volume expansion due to fluid shifts and thereby attenuate the circulatory and renal responses that result in large losses of fluid from body water compartments. The Guyton Model of Fluid, Electrolyte, and Circulatory Regulation was modified to simulate the six-degree head-down tilt that is frequently used as an experimental analog of weightlessness in bedrest studies. Simulation results show that preadaptation of the blood volume by a procedure resembling a blood donation immediately before head-down bedrest is beneficial in damping the physiologic responses to fluid shifts and reducing body fluid losses. After ten hours of head-down tilt, blood volume after preadaptation is higher than control for 20 to 30 days of bedrest. Preadaptation also produces potentially beneficial higher extracellular volume and total body water for 20 to 30 days of bedrest.
A computational algorithm to predict shRNA potency
Marran, Krista; Zhou, Xin; Gordon, Assaf; Demerdash, Osama El; Wagenblast, Elvin; Kim, Sun; Fellmann, Christof; Hannon, Gregory J.
2014-01-01
The strength of conclusions drawn from RNAi-based studies is heavily influenced by the quality of tools used to elicit knockdown. Prior studies have developed algorithms to design siRNAs. However, to date, no established method has emerged to identify effective shRNAs, which have lower intracellular abundance than transfected siRNAs and undergo additional processing steps. We recently developed a multiplexed assay for identifying potent shRNAs and have used this method to generate ~250,000 shRNA efficacy data-points. Using these data, we developed shERWOOD, an algorithm capable of predicting, for any shRNA, the likelihood that it will elicit potent target knockdown. Combined with additional shRNA design strategies, shERWOOD allows the ab initio identification of potent shRNAs that target, specifically, the majority of each gene’s multiple transcripts. We have validated the performance of our shRNA designs using several orthogonal strategies and have constructed genome-wide collections of shRNAs for humans and mice based upon our approach. PMID:25435137
Wiputra, Hadi; Lai, Chang Quan; Lim, Guat Ling; Heng, Joel Jia Wei; Guo, Lan; Soomar, Sanah Merchant; Leo, Hwa Liang; Biwas, Arijit; Mattar, Citra Nurfarah Zaini; Yap, Choon Hwai
2016-12-01
Between 0.6 and 1.9% of US children are born with congenital heart malformations. Clinical and animal studies suggest that abnormal blood flow forces might play a role in causing these malformations, highlighting the importance of understanding fetal cardiovascular fluid mechanics. We performed computational fluid dynamics simulations of the right ventricles, based on four-dimensional ultrasound scans of three 20-wk-old normal human fetuses, to characterize their flow and energy dynamics. Peak intraventricular pressure gradients were found to be 0.2-0.9 mmHg during systole and 0.1-0.2 mmHg during diastole. Diastolic wall shear stresses were found to be around 1 Pa, which could rise to 2-4 Pa during systole in the outflow tract. Fetal right ventricles have complex flow patterns featuring two interacting diastolic vortex rings, formed during the diastolic E wave and A wave. These rings persisted through the end of systole and elevated wall shear stresses in their proximity. They were observed to conserve ∼25.0% of peak diastolic kinetic energy to be carried over into the subsequent systole. However, this carried-over kinetic energy did not significantly alter the work done by the heart for ejection. Thus, while diastolic vortexes played a significant role in determining the spatial patterns and magnitudes of diastolic wall shear stresses, they did not have significant influence on systolic ejection. Our results can serve as a baseline for future comparison with diseased hearts.
NASA Astrophysics Data System (ADS)
Wang, Deguang; Han, Baochang; Huang, Ming
Computer forensics is the discipline of applying computer technology to acquire, investigate, and analyze evidence of computer crime. It mainly comprises the processes of identifying and obtaining digital evidence, analyzing and extracting data, and filing and submitting results; data analysis is the key link in computer forensics. Because real-world data are complex and often fuzzy, evidence analysis has had difficulty producing the desired results. This paper applies a fuzzy c-means clustering algorithm based on particle swarm optimization (FCMP) to computer forensics and shows that it can yield more satisfactory results.
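The core of the FCMP approach is fuzzy c-means clustering. A minimal pure-Python sketch of plain fuzzy c-means on 1-D data is shown below; note that the paper's particle-swarm-optimized centre initialisation is replaced here by a simple deterministic min/max initialisation, so this illustrates only the FCM half of the method:

```python
# Plain fuzzy c-means (FCM) on 1-D points.  Membership update:
#   u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
# Centre update: weighted mean of the points with weights u^m.

def fcm(points, c=2, m=2.0, iters=50):
    # Deterministic init (the paper uses PSO for this step instead).
    centers = [min(points), max(points)][:c]
    u = []
    for _ in range(iters):
        u = []
        for x in points:
            d = [abs(x - ck) or 1e-12 for ck in centers]  # avoid /0
            row = [1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
                   for k in range(c)]
            u.append(row)
        for k in range(c):
            num = sum((u[i][k] ** m) * points[i] for i in range(len(points)))
            den = sum(u[i][k] ** m for i in range(len(points)))
            centers[k] = num / den
    return centers, u

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
centers, memberships = fcm(data)
```

With two well-separated groups, the centres converge near the group means and each point receives a membership close to 0 or 1 for its cluster, which is the property exploited when grouping forensic evidence records.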
Belič, Aleš; Pompon, Denis; Monostory, Katalin; Kelly, Diane; Kelly, Steven; Rozman, Damjana
2013-06-01
Alternative pathways of metabolic networks represent the escape routes that can reduce drug efficacy and can cause severe adverse effects. In this paper we introduce a mathematical algorithm and a coding system for rapid computational construction of metabolic networks. The initial data for the algorithm are the source substrate code and the enzyme/metabolite interaction tables. The major strength of the algorithm is the adaptive coding system of the enzyme-substrate interactions. A reverse application of the algorithm is also possible, when an optimisation algorithm is used to compute the enzyme/metabolite rules from the reference network structure. The coding system is user-defined and must be adapted to the studied problem. The algorithm is most effective for computation of networks that consist of metabolites with similar molecular structures. The computation of the cholesterol biosynthesis metabolic network suggests that 89 intermediates can theoretically be formed between lanosterol and cholesterol, of which only 20 are presently considered as cholesterol intermediates. Alternative metabolites may represent links with other metabolic networks both as precursors and metabolites of cholesterol. A possible cholesterol bypass pathway to bile acids metabolism through cholestanol is suggested.
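The network-construction step can be sketched as a breadth-first expansion: starting from the source substrate code, apply every applicable enzyme rule to every known metabolite until no new codes appear. The coding scheme below (a tuple of independent modification-site flags, each "enzyme" clearing one flag) is a made-up illustration, not the authors' actual cholesterol coding system:

```python
from collections import deque

def expand_network(source, enzymes):
    """Breadth-first construction of a metabolic network from a source
    substrate code and a table of enzyme rules (code -> code or None)."""
    seen = {source}
    edges = []
    queue = deque([source])
    while queue:
        met = queue.popleft()
        for name, rule in enzymes.items():
            product = rule(met)
            if product is None:      # enzyme not applicable to this code
                continue
            edges.append((met, name, product))
            if product not in seen:
                seen.add(product)
                queue.append(product)
    return seen, edges

# Hypothetical coding: three independent modification sites, each cleared
# by one enzyme, giving 2^3 = 8 theoretically possible intermediates.
def demethylase(m):  # clears site 0
    return (0,) + m[1:] if m[0] else None
def reductase(m):    # clears site 1
    return (m[0], 0, m[2]) if m[1] else None
def desaturase(m):   # clears site 2
    return m[:2] + (0,) if m[2] else None

mets, edges = expand_network((1, 1, 1),
                             {"demethylase": demethylase,
                              "reductase": reductase,
                              "desaturase": desaturase})
```

Even this toy scheme shows how combinatorics inflates the theoretical network: three independent modifications already yield eight codes, mirroring how the paper's coding of the lanosterol-to-cholesterol conversion yields 89 theoretical intermediates against the 20 conventionally recognised ones.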
Creating Computable Algorithms for Symptom Management in an Outpatient Thoracic Oncology Setting
Cooley, Mary E.; Lobach, David F.; Johns, Ellis; Halpenny, Barbara; Saunders, Toni-Ann; Del Fiol, Guilherme; Rabin, Michael S.; Calarese, Pamela; Berenbaum, Isidore L.; Zaner, Ken; Finn, Kathleen; Berry, Donna L.; Abrahm, Janet L.
2013-01-01
Context: Adequate symptom management is essential to ensure quality cancer care, but symptom management is not always evidence based. Adapting and automating national guidelines for use at the point of care may enhance use by clinicians. Objectives: This article reports on a process of adapting research evidence for use in a clinical decision support system that provided individualized symptom management recommendations to clinicians at the point of care. Methods: Using a modified ADAPTE process, panels of local experts adapted national guidelines and integrated research evidence to create computable algorithms with explicit recommendations for management of the most common symptoms (pain, fatigue, dyspnea, depression, and anxiety) associated with lung cancer. Results: Small multidisciplinary groups and a consensus panel, using a nominal group technique, modified and subsequently approved computable algorithms for fatigue, dyspnea, moderate pain, severe pain, depression, and anxiety. The approved algorithms represented the consensus of multidisciplinary clinicians on pharmacological and behavioral interventions tailored to the patient’s age, comorbidities, laboratory values, current medications, and patient-reported symptom severity. Algorithms also were reconciled with one another to enable simultaneous management of several symptoms. Conclusion: A modified ADAPTE process and nominal group technique enabled the development and approval of locally adapted computable algorithms for individualized symptom management in patients with lung cancer. The process was more complex and required more time and resources than initially anticipated, but it resulted in computable algorithms that represented the consensus of many experts. PMID:23680580
Computational Fluid Dynamics at ICMA (Institute for Computational Mathematics and Applications)
1988-10-18
[Abstract garbled in scanning. Recoverable fragments refer to the numerical integration of reduced basis ODE systems; theoretical results on two-fluid, two-phase flow and the nature of the void fraction; weak formulations in spaces of square-integrable functions and the usual Sobolev spaces; and Univ. of Pittsburgh, Inst. for Comp. Math. and Appl., Technical Report ICMA-82-38, May 1982 (also translated into Spanish).]
Towards a generalized computational fluid dynamics technique for all Mach numbers
NASA Technical Reports Server (NTRS)
Walters, R. W.; Slack, D. C.; Godfrey, A. G.
1993-01-01
flux formulae. In addition, we improved the convergence rate of the implicit time integration schemes in GASP through the use of inner iteration strategies and the use of GMRES (Generalized Minimal Residual), which belongs to the class of algorithms referred to as Krylov subspace iteration. Finally, we significantly improved the practical utility of GASP through the addition of mesh sequencing, a technique in which computations begin on a coarse grid and are interpolated onto successively finer grids. The fluid dynamic problems of interest to the propulsion community involve complex flow physics spanning different velocity regimes and possibly involving chemical reactions. This class of problems results in widely disparate time scales causing numerical stiffness. Even in the absence of chemical reactions, eigenvalue stiffness manifests itself in transonic and very low speed flows, which can be quantified by the large condition number of the system and evidenced by slow convergence rates. This results in the need for thorough numerical analysis and subsequent implementation of sophisticated numerical techniques for these difficult yet practical problems. As a result of this work, we have been able to extend the range of applicability of compressible codes to very low speed inviscid flows (M = 0.001) and reacting flows.
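Mesh sequencing is simple to sketch in one dimension: relax on a coarse grid, linearly interpolate the partially converged solution onto a grid of twice the resolution, and continue relaxing, so that the expensive fine-grid iterations start from a good initial guess. The fragment below is an illustrative toy (Jacobi relaxation on a 1-D Poisson problem), not GASP's actual implicit scheme:

```python
def jacobi(u, f, h, sweeps):
    """Jacobi relaxation for -u'' = f with u fixed at both endpoints."""
    for _ in range(sweeps):
        u = [u[0]] + [0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

def refine(u):
    """Linear interpolation onto a grid with doubled resolution."""
    fine = []
    for i in range(len(u) - 1):
        fine += [u[i], 0.5 * (u[i] + u[i + 1])]
    return fine + [u[-1]]

def mesh_sequenced_solve(f, n0, levels, sweeps):
    """Start on a coarse grid of n0 cells, relax, interpolate, repeat."""
    n, u = n0, [0.0] * (n0 + 1)
    for lev in range(levels):
        h = 1.0 / n
        src = [f(i * h) for i in range(n + 1)]
        u = jacobi(u, src, h, sweeps)
        if lev < levels - 1:
            u = refine(u)
            n *= 2
    return u

# -u'' = 2 on [0,1] with u(0) = u(1) = 0 has exact solution u(x) = x(1-x).
u = mesh_sequenced_solve(lambda x: 2.0, n0=4, levels=3, sweeps=200)
```

The payoff in a production code is that most of the error is eliminated on grids where iterations are cheap; the same idea underlies full multigrid.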
Computational fluid dynamics evaluation of flow reversal treatment of giant basilar tip aneurysm.
Alnæs, Martin Sandve; Mardal, Kent-Andre; Bakke, Søren; Sorteberg, Angelika
2015-10-01
Therapeutic parent artery flow reversal is a treatment option for giant, partially thrombosed basilar tip aneurysms. The effectiveness of this treatment has been variable and had not yet been studied by applying computational fluid dynamics. Computed tomography images and blood flow velocities acquired with transcranial Doppler ultrasonography were obtained prior to and after bilateral endovascular vertebral artery occlusion for a giant basilar tip aneurysm. Patient-specific geometries and velocity waveforms were used in computational fluid dynamics simulations in order to determine the velocity and wall shear stress changes induced by treatment. Therapeutic parent artery flow reversal led to a dramatic increase in aneurysm inflow and wall shear stress (30 to 170 Pa), resulting in an increase in intra-aneurysmal circulation. The enlargement of the circulated area within the aneurysm led to a re-normalization of the wall shear stress, and the aneurysm remained stable for more than 8 years thereafter. Therapeutic parent artery flow reversal can lead to unintended, potentially harmful changes in aneurysm inflow which can be quantified and possibly predicted by applying computational fluid dynamics.
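The wall shear stress values quoted above come from the velocity gradient at the vessel wall, τ_w = μ ∂u/∂n. A first-order sketch of that post-processing step, with an assumed blood viscosity (the constant and sample numbers below are illustrative, not the study's):

```python
MU_BLOOD = 3.5e-3  # Pa*s, a typical dynamic viscosity of blood (assumption)

def wall_shear_stress(u_tangential, wall_distance, mu=MU_BLOOD):
    """First-order estimate tau_w = mu * du/dn, using the tangential
    velocity at the first grid point off the wall."""
    return mu * u_tangential / wall_distance

# e.g. 0.5 m/s tangential velocity 10 micrometres off the wall
tau = wall_shear_stress(0.5, 10e-6)  # on the order of the reported 170 Pa
```

Production CFD codes evaluate the full viscous stress tensor at the wall, but this one-sided difference conveys why near-wall mesh resolution directly controls the accuracy of reported wall shear stress.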
Investigation of Swirling Flow in Rod Bundle Subchannels Using Computational Fluid Dynamics
Holloway, Mary V.; Beasley, Donald E.; Conner, Michael E.
2006-07-01
The fluid dynamics for turbulent flow through rod bundles representative of those used in pressurized water reactors is examined using computational fluid dynamics (CFD). The rod bundles of the pressurized water reactor examined in this study consist of a square array of parallel rods that are held on a constant pitch by support grids spaced axially along the rod bundle. Split-vane pair support grids are often used to create swirling flow in the rod bundle in an effort to improve the heat transfer characteristics for the rod bundle during both normal operating conditions and in accident condition scenarios. Computational fluid dynamics simulations for a two subchannel portion of the rod bundle were used to model the flow downstream of a split-vane pair support grid. A high quality computational mesh was used to investigate the choice of turbulence model appropriate for the complex swirling flow in the rod bundle subchannels. Results document a central swirling flow structure in each of the subchannels downstream of the split-vane pairs. Strong lateral flows along the surface of the rods, as well as impingement regions of lateral flow on the rods are documented. In addition, regions of lateral flow separation and low axial velocity are documented next to the rods. Results of the CFD are compared to experimental particle image velocimetry (PIV) measurements documenting the lateral flow structures downstream of the split-vane pairs. Good agreement is found between the computational simulation and experimental measurements for locations close to the support grid. (authors)
Parallel Algorithms for Computer Vision on the Connection Machine.
1986-11-01
[Abstract garbled in scanning. Recoverable fragments refer to MIT Artificial Intelligence Lab memo AI-M-928 by J. J. Little (unclassified, Nov. 1986), connected-component labeling, and a table of approximate per-operation timings on the Connection Machine, including convolution (~3 ms), finding zero-crossings (~0.5 ms), and label propagation (~36 ms).]
A computational algorithm for spacecraft control and momentum management
NASA Technical Reports Server (NTRS)
Dzielski, John; Bergmann, Edward; Paradiso, Joseph
1990-01-01
Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.
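The key idea, that a coordinate change in the state and input can turn certain nonlinear dynamics into equivalent linear ones, can be shown on a toy single-axis attitude model rather than the paper's full momentum-management formulation. Below, the plant constants, the PD gains, and the model θ'' = -a·sin(θ) + b·u are all illustrative assumptions:

```python
import math

# Toy plant: theta'' = -a*sin(theta) + b*u.  Choosing the input
#   u = (v + a*sin(theta)) / b
# cancels the nonlinearity exactly, leaving the linear double
# integrator theta'' = v, on which ordinary linear control applies.

A, B = 2.0, 0.5  # made-up plant constants

def linearizing_input(theta, v, a=A, b=B):
    """Feedback-linearizing input: maps desired linear acceleration v
    to the actual control u for the nonlinear plant."""
    return (v + a * math.sin(theta)) / b

def step(theta, omega, v, dt=1e-3):
    """One explicit-Euler step of the closed-loop system."""
    u = linearizing_input(theta, v)
    alpha = -A * math.sin(theta) + B * u  # equals v exactly by design
    return theta + dt * omega, omega + dt * alpha

# A PD law on the equivalent linear plant drives theta -> 0.
theta, omega = 0.8, 0.0
for _ in range(20000):  # simulate 20 s
    v = -4.0 * theta - 3.0 * omega
    theta, omega = step(theta, omega, v)
```

With the nonlinearity cancelled, the closed loop obeys θ'' + 3θ' + 4θ = 0, a stable linear system; the paper's optimization then operates on such an equivalent linear dynamical constraint instead of the original nonlinear one.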
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
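High accuracy matters in aeroacoustics because acoustic waves must propagate many wavelengths with little dispersion error, which pushes schemes toward high-order stencils. As an illustration of the kind of stencil such algorithm-development tools generate and verify (this is the standard fourth-order central first derivative, not one of the paper's specific schemes):

```python
import math

# Standard 4th-order central first-derivative stencil:
#   f'(x) ~ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)
# with leading truncation error -h^4 f'''''(x) / 30.

def d1_fourth_order(f, x, h):
    return (f(x - 2 * h) - 8 * f(x - h) + 8 * f(x + h) - f(x + 2 * h)) / (12 * h)

# Verify the order on a smooth test function: halving h should shrink
# the error by roughly 2^4 = 16.
err_h  = abs(d1_fourth_order(math.sin, 1.0, 1e-2) - math.cos(1.0))
err_h2 = abs(d1_fourth_order(math.sin, 1.0, 5e-3) - math.cos(1.0))
```

Symbolic tools such as Mathematica are well suited to deriving the stencil coefficients and truncation errors of much higher-order variants automatically, which is the automation role the abstract describes.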
A computer algorithm for performing interactive algebraic computation on the GE Image-100 system
NASA Technical Reports Server (NTRS)
Hart, W. D.; Kim, H. H.
1979-01-01
A subroutine which performs specialized algebraic computations upon ocean color scanner multispectral data is presented. The computed results are displayed on a video display. The subroutine exists as a component of the aircraft sensor analysis package. The user specifies the parameters of the computations by directly interacting with the computer. A description of the conversational options is also given.
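The abstract does not describe the Image-100 command set, but the class of operation, a user-selected pixel-wise algebraic combination of two co-registered spectral bands, can be sketched generically. All names and operations below are hypothetical illustrations:

```python
# Hypothetical sketch of per-pixel algebraic operations on two
# co-registered multispectral bands (2-D lists of samples).

def band_algebra(band_a, band_b, op):
    """Apply a named pixel-wise algebraic operation to two bands."""
    ops = {
        "ratio": lambda a, b: a / b if b else 0.0,
        "diff": lambda a, b: a - b,
        "norm_diff": lambda a, b: (a - b) / (a + b) if a + b else 0.0,
    }
    f = ops[op]
    return [[f(a, b) for a, b in zip(ra, rb)]
            for ra, rb in zip(band_a, band_b)]

blue = [[10, 20], [30, 40]]
green = [[5, 10], [15, 20]]
ratio = band_algebra(blue, green, "ratio")  # every pixel -> 2.0
```

In an interactive setting like the one described, the operation name and band selection would come from the user's console input, and the resulting array would be scaled for the video display.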