Science.gov

Sample records for element computational aspects

  1. On current aspects of finite element computational fluid mechanics for turbulent flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1982-01-01

    A set of nonlinear partial differential equations suitable for the description of a class of turbulent three-dimensional flow fields in select geometries is identified. On the basis of the concept of enforcing a penalty constraint to ensure accurate accounting of ordering effects, a finite element numerical solution algorithm is established for the equation set, and the theoretical aspects of accuracy, convergence and stability are identified and quantified. Hypermatrix constructions are used to formulate the reduction of the computational aspects of the theory to practice. The robustness of the algorithm, and of its computer program embodiment, has been verified for pertinent flow configurations.
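
    For readers unfamiliar with the penalty idea the abstract invokes, the following toy sketch shows a quadratic penalty term enforcing a linear constraint in a least-squares problem, with the constraint residual shrinking as the penalty grows. The problem data and constraint are made up for illustration; this is not Baker's algorithm.

```python
# Minimal sketch of penalty-constraint enforcement (illustrative only).
# We minimize ||A x - b||^2 subject to C x = d by adding a penalty term
# lam * ||C x - d||^2 and letting lam grow.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
C = np.array([[1.0, 1.0, 0.0, 0.0]])   # hypothetical constraint: x0 + x1 = 1
d = np.array([1.0])

for lam in [1e0, 1e2, 1e4, 1e6]:
    # Normal equations of the penalized functional.
    H = A.T @ A + lam * C.T @ C
    g = A.T @ b + lam * C.T @ d
    x = np.linalg.solve(H, g)
    print(f"lambda={lam:8.0e}  constraint residual={abs(C @ x - d)[0]:.2e}")
```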

  2. Adaptive finite elements with high aspect ratio for the computation of coalescence using a phase-field model

    NASA Astrophysics Data System (ADS)

    Burman, E.; Jacot, A.; Picasso, M.

    2004-03-01

    A multiphase-field model for the description of coalescence in a binary alloy is solved numerically using adaptive finite elements with high aspect ratio. The unknowns of the multiphase-field model are the three phase fields (solid phase 1, solid phase 2, and liquid phase), a Lagrange multiplier and the concentration field. An Euler implicit scheme is used for time discretization, together with continuous, piecewise linear finite elements. At each time step, a linear system corresponding to the three phases plus the Lagrange multiplier has to be solved. Then, the linear system pertaining to concentration is solved. An adaptive finite element algorithm is proposed. In order to reduce the number of mesh vertices, the generated meshes contain elements with high aspect ratio. The refinement and coarsening criteria are based on an error indicator which has already been justified theoretically for simpler problems. Numerical results on two test cases show the efficiency of the method.
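
    As a drastically reduced illustration of the time discretization described above, here is one semi-implicit Euler step for a single 1D Allen-Cahn-type phase field with continuous piecewise-linear elements. The paper's model couples three phase fields, a Lagrange multiplier and a concentration field on adaptive anisotropic 2D meshes; the semi-implicit splitting of the reaction term and all parameters below are assumptions for the sketch.

```python
# One semi-implicit Euler step of a 1D Allen-Cahn-type phase field with
# continuous piecewise-linear finite elements on a uniform mesh.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n, L, eps, dt = 200, 1.0, 0.02, 1e-4   # elements, length, interface width, step
h = L / n
x = np.linspace(0.0, L, n + 1)

# Standard 1D linear-FE mass and stiffness matrices (zero-flux boundaries).
dM = np.full(n + 1, 2 * h / 3); dM[[0, -1]] = h / 3
dK = np.full(n + 1, 2 / h);     dK[[0, -1]] = 1 / h
M = diags([np.full(n, h / 6), dM, np.full(n, h / 6)], [-1, 0, 1], format="csc")
K = diags([np.full(n, -1 / h), dK, np.full(n, -1 / h)], [-1, 0, 1], format="csc")

phi = np.tanh((x - 0.5) / (np.sqrt(2.0) * eps))    # initial interface profile
rhs = M @ (phi + dt * (phi - phi**3))              # reaction treated explicitly
phi_new = spsolve(M + dt * eps**2 * K, rhs)        # diffusion treated implicitly
print("max nodal change in one step:", np.abs(phi_new - phi).max())
```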

  3. Computational aspects of seismology

    NASA Astrophysics Data System (ADS)

    Koper, Keith David

    Recent increases in computer speed and memory have opened the door to new analytical techniques in seismology. This dissertation focuses on the application of two such techniques: finite difference simulation of wave propagation in complex media, and genetic algorithm (GA) based searching for solutions to inverse problems. The first two chapters detail the use of a 3D finite difference algorithm in modeling the P- and S-wave velocity structure of the Tonga subduction zone. The large memory capacity of modern computers permits the use of a fine spatial grid, allowing for the accurate comparison of subtly varying velocity models. I compare the theoretical traveltimes with local data recorded by two temporary deployments of broadband land stations and ocean-bottom seismometers. The primary results from these studies are: (1) it is not possible to distinguish between equilibrium and metastable models of subduction with travel time data, and (2) the same mechanism accounts for the fast, slab velocity anomaly and the slow, backarc velocity anomaly under the Lau spreading center---both are consistent with temperature perturbations, indicating that the role of partial melt is insignificant. The third and fourth chapters concern the application of GAs to two kinds of seismological inverse problems. The relatively fast speed of present-day CPUs allows global search methods, such as GAs, to be feasible on realistic problems. In the third chapter I compare the performance of a GA based search with those of a series of more traditional, local descent methods on the problem of inverting PKP travel times for radial, P-wave models of the Earth's core and lowermost mantle. Even though both the model parametrization and dataset are heavily smoothed, there exist significant complexities in the error landscape (due to nonlinearities in the forward calculation) that render the GA method superior. In the fourth chapter I present a variant of a traditional GA, known as a…
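
    A minimal real-coded genetic algorithm of the generic kind the dissertation builds on, shown here on the multimodal Rastrigin test function. The operators, rates and test function are generic textbook choices, not the dissertation's variant.

```python
# Minimal real-coded genetic algorithm on a multimodal test function
# (Rastrigin), illustrating the global-search idea compared in the text
# with local descent methods.
import numpy as np

rng = np.random.default_rng(1)
dim, pop_size, n_gen, lo, hi = 4, 60, 150, -5.12, 5.12

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

pop = rng.uniform(lo, hi, (pop_size, dim))
for gen in range(n_gen):
    fit = rastrigin(pop)
    # Tournament selection: keep the better of two random individuals.
    i, j = rng.integers(pop_size, size=(2, pop_size))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    # Arithmetic crossover between consecutive parents.
    alpha = rng.uniform(size=(pop_size, 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, clipped to the search box.
    mask = rng.uniform(size=children.shape) < 0.1
    children = np.clip(children + mask * rng.normal(0, 0.3, children.shape), lo, hi)
    # Elitism: carry over the best individual of this generation.
    children[0] = pop[np.argmin(fit)]
    pop = children
print("best value found:", rastrigin(pop).min())   # global minimum is 0
```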

  4. Computational aspects of multibody dynamics

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1989-01-01

    Computational aspects are addressed which impact the requirements for developing a next generation software system for flexible multibody dynamics simulation which include: criteria for selecting candidate formulation, pairing of formulations with appropriate solution procedures, need for concurrent algorithms to utilize computer hardware advances, and provisions for allowing open-ended yet modular analysis modules.

  5. Computational aspects of dispersive computational continua for elastic heterogeneous media

    NASA Astrophysics Data System (ADS)

    Fafalis, Dimitrios; Fish, Jacob

    2015-12-01

    The present manuscript focuses on computational aspects of the dispersive computational continua (C^2) formulation previously introduced by the authors. The dispersive C^2 formulation is a multiscale approach that showed strikingly accurate dispersion curves. However, the seemingly theoretical advantage may be inconsequential due to the tremendous computational cost involved. Unlike classical dispersive methods pioneered more than half a century ago, where the unit cell is quasi-static and provides effective mechanical and dispersive properties to the coarse-scale problem, the dispersive C^2 gives rise to transient problems at all scales and for all microphases involved. An efficient block time-integration scheme is proposed that takes advantage of the fact that the transient unit cell problems are not coupled to each other, but rather to a single coarse-scale finite element they are positioned in. We show that the computational cost of the method is comparable to the classical dispersive methods for short load durations.

  6. Elements of Computer Careers.

    ERIC Educational Resources Information Center

    Edwards, Judith B.; And Others

    This textbook is intended to provide students with an awareness of the possible alternatives in the computer field and with the background information necessary for them to evaluate those alternatives intelligently. Problem solving and simulated work experiences are emphasized as students are familiarized with the functions and limitations of…

  7. Finite element computational fluid mechanics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1983-01-01

    Finite element analysis as applied to the broad spectrum of computational fluid mechanics is analyzed. The finite element solution methodology is derived, developed, and applied directly to the differential equation systems governing classes of problems in fluid mechanics. The heat conduction equation is used to reveal the essence and elegance of finite element theory, including higher order accuracy and convergence. The algorithm is extended to the pervasive nonlinearity of the Navier-Stokes equations. A specific fluid mechanics problem class is analyzed with an even mix of theory and applications, including turbulence closure and the solution of turbulent flows.
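
    The "essence and elegance" role that heat conduction plays in the abstract can be seen in a few lines: assembling and solving the 1D steady conduction equation with linear elements. This is a textbook sketch, not code from the monograph.

```python
# Finite element solution of 1D steady heat conduction -(k u')' = q on (0,1)
# with u(0) = u(1) = 0, using linear elements on a uniform mesh.
import numpy as np

n, k, q = 10, 1.0, 1.0              # elements, conductivity, uniform source
h = 1.0 / n
K = np.zeros((n + 1, n + 1))        # global stiffness
f = np.zeros(n + 1)                 # global load
ke = (k / h) * np.array([[1, -1], [-1, 1]])   # element stiffness
fe = (q * h / 2) * np.ones(2)                 # consistent load for constant q
for e in range(n):                  # assembly loop over elements
    K[e:e + 2, e:e + 2] += ke
    f[e:e + 2] += fe
K[0, :], f[0] = 0.0, 0.0; K[0, 0] = 1.0       # Dirichlet u(0) = 0
K[-1, :], f[-1] = 0.0, 0.0; K[-1, -1] = 1.0   # Dirichlet u(1) = 0
u = np.linalg.solve(K, f)
x = np.linspace(0, 1, n + 1)
print("max error vs exact:", np.abs(u - q * x * (1 - x) / (2 * k)).max())
```

    For this problem the linear-element solution is nodally exact, so the printed error is at machine precision; the same assembly pattern carries over to the Navier-Stokes systems treated later in the book.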

  8. Element-topology-independent preconditioners for parallel finite element computations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alexander, Scott

    1992-01-01

    A family of preconditioners for the solution of finite element equations is presented, which is element-topology independent and thus applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.
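
    A sketch of the element-by-element idea behind such preconditioners: the preconditioner is applied using only element-level gathers, small local solves and scatters, with no global factorization. This is a generic additive EBE variant on an assumed 1D connectivity, for illustration only; it is not the specific preconditioner family of the paper.

```python
# Element-by-element (EBE) preconditioner sketch: apply the preconditioner by
# gathering the residual to each element, solving the small element system,
# and scatter-adding the result back. Used here inside conjugate gradients.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

n = 50                                      # 1D mesh: n elements, n+1 nodes
conn = [(e, e + 1) for e in range(n)]       # element connectivity
ke = np.array([[2.0, -1.0], [-1.0, 2.0]])   # made-up SPD element matrix

A = np.zeros((n + 1, n + 1))
for nodes in conn:                          # standard gather/scatter assembly
    A[np.ix_(nodes, nodes)] += ke

ke_inv = np.linalg.inv(ke)                  # small local inverse, reused
def ebe_apply(r):
    # No global factorization: loop over elements, local solve, scatter-add.
    z = np.zeros_like(r)
    for nodes in conn:
        z[list(nodes)] += ke_inv @ r[list(nodes)]
    return z

M = LinearOperator(A.shape, matvec=ebe_apply)
b = np.ones(n + 1)
x, info = cg(A, b, M=M)
print("converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))
```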

  9. Theoretical and computational aspects of seismic tomography

    NASA Astrophysics Data System (ADS)

    Alekseev, A. S.; Lavrentiev, M. M.; Romanov, V. G.; Romanov, M. E.

    1990-12-01

    This paper reviews aspects related to applications of seismic wave kinematics for the reconstruction of internal characteristics of an elastic medium. It presents the results of studying the inverse kinematic seismic problem and its linear analogue — problems of integral geometry, obtained in recent decades with an emphasis on the work done by Soviet scientists. Computational techniques of solving these problems are discussed. This review should be of interest to geophysicists studying the oceans, atmosphere and ionosphere as well as those studying the solid part of the Earth.

  10. Conceptual aspects of geometric quantum computation

    NASA Astrophysics Data System (ADS)

    Sjöqvist, Erik; Azimi Mousolou, Vahid; Canali, Carlo M.

    2016-07-01

    Geometric quantum computation is the idea that geometric phases can be used to implement quantum gates, i.e., the basic elements of the Boolean network that forms a quantum computer. Although originally thought to be limited to adiabatic evolution, controlled by slowly changing parameters, this form of quantum computation can as well be realized at high speed by using nonadiabatic schemes. Recent advances in quantum gate technology have allowed for experimental demonstrations of different types of geometric gates in adiabatic and nonadiabatic evolution. Here, we address some conceptual issues that arise in the realizations of geometric gates. We examine the appearance of dynamical phases in quantum evolution and point out that not all dynamical phases need to be compensated for in geometric quantum computation. We delineate the relation between Abelian and non-Abelian geometric gates and find an explicit physical example where the two types of gates coincide. We identify differences and similarities between adiabatic and nonadiabatic realizations of quantum computation based on non-Abelian geometric phases.

  11. Algebraic aspects of the computably enumerable degrees.

    PubMed Central

    Slaman, T A; Soare, R I

    1995-01-01

    A set A of nonnegative integers is computably enumerable (c.e.), also called recursively enumerable (r.e.), if there is a computable method to list its elements. The class of sets B which contain the same information as A under Turing computability is the (Turing) degree of A, and a degree is c.e. if it contains a c.e. set. The extension of embedding problem for the c.e. degrees R asks, given finite partially ordered sets P ⊆ Q with least and greatest elements, whether every embedding of P into R can be extended to an embedding of Q into R. Many of the most significant theorems giving an algebraic insight into R have asserted either extension or nonextension of embeddings. We extend and unify these results and their proofs to produce complete and complementary criteria and techniques to analyze instances of extension and nonextension. We conclude that the full extension of embedding problem is decidable. PMID:11607508

  12. Computer Security: The Human Element.

    ERIC Educational Resources Information Center

    Guynes, Carl S.; Vanacek, Michael T.

    1981-01-01

    The security and effectiveness of a computer system are dependent on the personnel involved. Improved personnel and organizational procedures can significantly reduce the potential for computer fraud. (Author/MLF)

  13. Mathematical aspects of finite element methods for incompressible viscous flows

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.

    1986-01-01

    Mathematical aspects of finite element methods are surveyed for incompressible viscous flows, concentrating on the steady primitive variable formulation. The discretization of a weak formulation of the Navier-Stokes equations are addressed, then the stability condition is considered, the satisfaction of which insures the stability of the approximation. Specific choices of finite element spaces for the velocity and pressure are then discussed. Finally, the connection between different weak formulations and a variety of boundary conditions is explored.

  14. Impact of new computing systems on finite element computations

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  15. Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Sawamiphakdi, K.

    1984-01-01

    A higher order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included. The post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not exhibit shear locking even when its aspect ratio is increased to the order of 10 to the 8th power. Two sample problems are given to illustrate the analysis capability of the shell element.

  16. Dedicated breast computed tomography: Basic aspects

    SciTech Connect

    Sarno, Antonio; Mettivier, Giovanni; Russo, Paolo

    2015-06-15

    X-ray mammography of the compressed breast is well recognized as the “gold standard” for early detection of breast cancer, but its performance is not ideal. One limitation of screening mammography is tissue superposition, particularly for dense breasts. Since 2001, several research groups in the USA and in the European Union have developed computed tomography (CT) systems with digital detector technology dedicated to x-ray imaging of the uncompressed breast (breast CT or BCT) for breast cancer screening and diagnosis. This CT technology—tracing back to initial studies in the 1970s—allows some of the limitations of mammography to be overcome, keeping the levels of radiation dose to the radiosensitive breast glandular tissue similar to those of two-view mammography for the same breast size and composition. This paper presents an evaluation of the research efforts carried out in the invention, development, and improvement of BCT using dedicated scanners with state-of-the-art technology, including initial steps toward commercialization, after more than a decade of R&D in the laboratory and/or in the clinic. The intended focus here is on the technological/engineering aspects of BCT and on outlining advantages and limitations as reported in the related literature. Prospects for future research in this field are discussed.

  17. Computational and Practical Aspects of Drug Repositioning

    PubMed Central

    Oprea, Tudor I.

    2015-01-01

    The concept of the hypothesis-driven or observational-based expansion of the therapeutic application of drugs is very seductive. This is due to a number of factors, such as lower cost of development, higher probability of success, near-term clinical potential, patient and societal benefit, and also the ability to apply the approach to rare, orphan, and underresearched diseases. Another highly attractive aspect is that the “barrier to entry” is low, at least in comparison to a full drug discovery operation. The availability of high-performance computing, and databases of various forms have also enhanced the ability to pose reasonable and testable hypotheses for drug repurposing, rescue, and repositioning. In this article we discuss several factors that are currently underdeveloped, or could benefit from clearer definition in articles presenting such work. We propose a classification scheme—drug repositioning evidence level (DREL)—for all drug repositioning projects, according to the level of scientific evidence. DREL ranges from zero, which refers to predictions that lack any experimental support, to four, which refers to drugs approved for the new indication. We also present a set of simple concepts that can allow rapid and effective filtering of hypotheses, leading to a focus on those that are most likely to lead to practical safe applications of an existing drug. Some promising repurposing leads for malaria (DREL-1) and amoebic dysentery (DREL-2) are discussed. PMID:26241209

  18. Computational and Practical Aspects of Drug Repositioning.

    PubMed

    Oprea, Tudor I; Overington, John P

    2015-01-01

    The concept of the hypothesis-driven or observational-based expansion of the therapeutic application of drugs is very seductive. This is due to a number of factors, such as lower cost of development, higher probability of success, near-term clinical potential, patient and societal benefit, and also the ability to apply the approach to rare, orphan, and underresearched diseases. Another highly attractive aspect is that the "barrier to entry" is low, at least in comparison to a full drug discovery operation. The availability of high-performance computing, and databases of various forms have also enhanced the ability to pose reasonable and testable hypotheses for drug repurposing, rescue, and repositioning. In this article we discuss several factors that are currently underdeveloped, or could benefit from clearer definition in articles presenting such work. We propose a classification scheme-drug repositioning evidence level (DREL)-for all drug repositioning projects, according to the level of scientific evidence. DREL ranges from zero, which refers to predictions that lack any experimental support, to four, which refers to drugs approved for the new indication. We also present a set of simple concepts that can allow rapid and effective filtering of hypotheses, leading to a focus on those that are most likely to lead to practical safe applications of an existing drug. Some promising repurposing leads for malaria (DREL-1) and amoebic dysentery (DREL-2) are discussed. PMID:26241209

  19. Sociocultural Aspects of Computers in Education.

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    The data reported in this paper give depth to the picture of computers in society, in work, and in schools. Prices have dropped, but computer corporations sell to schools, as they do to any other customer, to increase their profits. Computerizing is a vehicle for social stratification. Computers are not easy to use and are hard to…

  20. Computational Aspects of Heat Transfer in Structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M. (Compiler)

    1982-01-01

    Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.

  1. [Fascioliasis hepatis--computed tomographic aspect].

    PubMed

    Goebel, N; Markwalder, K; Siegenthaler, W

    1984-12-01

    In a patient with hepatic fascioliasis (already excreting eggs in the faeces), a CT scan of the liver after i.v. contrast injection showed a relatively characteristic appearance, with multiple small hypodense areas arranged partly in grape-like clusters and partly in rows leading towards the portal vein and bile duct areas. Nine months later the hypodense lesions had markedly decreased. PMID:6518725

  2. Aspects of computer vision in surgical endoscopy

    NASA Astrophysics Data System (ADS)

    Rodin, Vincent; Ayache, Alain; Berreni, N.

    1993-09-01

    This work is related to a project of medical robotics applied to surgical endoscopy, led in collaboration with Doctor Berreni of the Saint Roch nursing-home in Perpignan (France). Following Doctor Berreni's advice, two aspects of endoscopic color image processing have been singled out: (1) help with diagnosis through automatic detection of diseased areas after a learning phase; (2) 3D reconstruction of the analyzed cavity by using a zoom.

  3. Computational aspects of Gaussian beam migration

    SciTech Connect

    Hale, D.

    1992-08-01

    The computational efficiency of Gaussian beam migration depends on the solution of two problems: (1) computation of complex-valued beam times and amplitudes in Cartesian (x,z) coordinates, and (2) limiting computations to only those (x,z) coordinates within a region where beam amplitudes are significant. The first problem can be reduced to a particular instance of a class of closest-point problems in computational geometry, for which efficient solutions, such as the Delaunay triangulation, are well known. Delaunay triangulation of sampled points along a ray enables the efficient location of that point on the raypath that is closest to any point (x,z) at which beam times and amplitudes are required. Although Delaunay triangulation provides an efficient solution to this closest point problem, a simpler solution, also presented in this paper, may be sufficient and more easily extended for use in 3-D Gaussian beam migration. The second problem is easily solved by decomposing the subsurface image into a coarse grid of square cells. Within each cell, simple and efficient loops over (x,z) coordinates may be used. Because the region in which beam amplitudes are significant may be difficult to represent with simple loops over (x,z) coordinates, I use recursion to move from cell to cell, until the entire region defined by the beam has been covered. Benchmark tests of a computer program implementing these solutions suggest that the cost of Gaussian beam migration is comparable to that of migration via explicit depth extrapolation in the frequency-space domain. For the data sizes and computer programs tested here, the explicit method was faster. However, as data size was increased, the computation time for Gaussian beam migration grew more slowly than that for the explicit method.
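
    The closest-point query described above can be sketched with a k-d tree in place of the Delaunay construction (the abstract itself notes that simpler solutions may suffice). The parabolic "raypath" and the significance threshold below are synthetic.

```python
# The closest-point query at the heart of the scheme: for each image point
# (x, z), find the nearest sampled point on a raypath, then restrict beam
# evaluation to points close enough to have significant amplitude.
import numpy as np
from scipy.spatial import cKDTree

s = np.linspace(0.0, 1.0, 201)                      # arclength-like parameter
ray = np.column_stack([s, 0.5 * s * (1.0 - s)])     # synthetic raypath (x, z)
tree = cKDTree(ray)

# Query the nearest raypath sample for every point of a small image grid.
xs, zs = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 0.3, 31))
grid = np.column_stack([xs.ravel(), zs.ravel()])
dist, idx = tree.query(grid)                        # ~O(log n) per query

# Beam times/amplitudes would be evaluated only inside this region.
significant = dist.reshape(xs.shape) < 0.05
print("grid points inside the beam region:", int(significant.sum()))
```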

  4. Computational aspects of Gaussian beam migration

    SciTech Connect

    Hale, D.

    1992-01-01

    The computational efficiency of Gaussian beam migration depends on the solution of two problems: (1) computation of complex-valued beam times and amplitudes in Cartesian (x,z) coordinates, and (2) limiting computations to only those (x,z) coordinates within a region where beam amplitudes are significant. The first problem can be reduced to a particular instance of a class of closest-point problems in computational geometry, for which efficient solutions, such as the Delaunay triangulation, are well known. Delaunay triangulation of sampled points along a ray enables the efficient location of that point on the raypath that is closest to any point (x,z) at which beam times and amplitudes are required. Although Delaunay triangulation provides an efficient solution to this closest point problem, a simpler solution, also presented in this paper, may be sufficient and more easily extended for use in 3-D Gaussian beam migration. The second problem is easily solved by decomposing the subsurface image into a coarse grid of square cells. Within each cell, simple and efficient loops over (x,z) coordinates may be used. Because the region in which beam amplitudes are significant may be difficult to represent with simple loops over (x,z) coordinates, I use recursion to move from cell to cell, until the entire region defined by the beam has been covered. Benchmark tests of a computer program implementing these solutions suggest that the cost of Gaussian beam migration is comparable to that of migration via explicit depth extrapolation in the frequency-space domain. For the data sizes and computer programs tested here, the explicit method was faster. However, as data size was increased, the computation time for Gaussian beam migration grew more slowly than that for the explicit method.

  5. Central control element expands computer capability

    NASA Technical Reports Server (NTRS)

    Easton, R. A.

    1975-01-01

    Redundant processing and multiprocessing modes can be obtained from one computer by using a logic configuration. The configuration serves as a central control element which can automatically alternate between a high-capacity multiprocessing mode and a high-reliability redundant mode, using dynamic mode switching in real time.

  6. Some Theoretical Aspects for Elastic Wave Modeling in a Recently Developed Spectral Element Method

    NASA Astrophysics Data System (ADS)

    Wang, X. M.; Seriani, G.; Lin, W. J.

    2006-10-01

    A spectral element method has been recently developed for solving elastodynamic problems. The numerical solutions are obtained by using the weak formulation of the elastodynamic equation for heterogeneous media and by the Galerkin approach applied to a partition, in small subdomains, of the original physical domain under investigation. In the present work some mathematical aspects of the method and of the associated algorithm implementation are systematically investigated. Two kinds of orthogonal basis functions, constructed with Legendre and Chebyshev polynomials, and their related Gauss-Lobatto collocation points, used in reference element quadrature, are introduced. The related analytical integration formulas are obtained. The standard error estimations and expansion convergence are discussed. In order to improve the computation accuracy and efficiency, an element-by-element pre-conditioned conjugate gradient linear solver in the space domain and a staggered predictor/multi-corrector algorithm in the time integration are used for strongly heterogeneous elastic media. As a consequence, neither the global matrices nor the effective force vector is assembled. When analytical formulas are used for the element quadrature, there is no need even to form the element matrices, which further saves memory without losing much computational efficiency. The element-by-element algorithm uses an optimal tensor product scheme which makes spectral element methods much more efficient than finite-element methods from the point of view of both memory storage and computational time requirements. This work is divided into two parts. The second part, to be reported in a companion paper, will give the algorithm implementation, numerical accuracy and efficiency analyses, and a modelling comparison of the proposed spectral element method with a conventional finite-element method and a staggered pseudo-spectral method.
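
    The Gauss-Lobatto collocation points mentioned above can be computed in a few lines for the Legendre case. This sketch assumes the standard Gauss-Lobatto-Legendre construction (interior nodes at the roots of P_N', plus the endpoints, with weights 2/(N(N+1) P_N(x_i)^2)); it is not tied to the paper's implementation.

```python
# Gauss-Lobatto-Legendre (GLL) nodes and quadrature weights for order N,
# the reference-element quadrature used by Legendre spectral elements.
import numpy as np
from numpy.polynomial import legendre

N = 6                                     # polynomial order of the element
cN = np.zeros(N + 1); cN[-1] = 1.0        # coefficients of P_N in Legendre basis
interior = legendre.legroots(legendre.legder(cN))   # roots of P_N'
nodes = np.sort(np.concatenate(([-1.0], interior, [1.0])))
weights = 2.0 / (N * (N + 1) * legendre.legval(nodes, cN) ** 2)

# Sanity check: the rule integrates polynomials up to degree 2N-1 exactly.
print("nodes:", np.round(nodes, 6))
print("quadrature of x^6:", weights @ nodes**6, "exact:", 2.0 / 7.0)
```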

  7. Analytical and Computational Aspects of Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Bilevel problem formulations have received considerable attention as an approach to multidisciplinary optimization in engineering. We examine the analytical and computational properties of one such approach, collaborative optimization. The resulting system-level optimization problems suffer from inherent computational difficulties due to the bilevel nature of the method. Most notably, it is impossible to characterize and hence identify solutions of the system-level problems because the standard first-order conditions for solutions of constrained optimization problems do not hold. The analytical features of the system-level problem make it difficult to apply conventional nonlinear programming algorithms. Simple examples illustrate the analysis and the algorithmic consequences for optimization methods. We conclude with additional observations on the practical implications of the analytical and computational properties of collaborative optimization.

  8. Finite element computation with parallel VLSI

    NASA Technical Reports Server (NTRS)

    Mcgregor, J.; Salama, M.

    1983-01-01

    This paper describes a parallel processing computer consisting of a 16-bit microcomputer as a master processor which controls and coordinates the activities of 8086/8087 VLSI chip set slave processors working in parallel. The hardware is inexpensive and can be flexibly configured and programmed to perform various functions. This makes it a useful research tool for the development of, and experimentation with, parallel mathematical algorithms. Application of the hardware to computational tasks involved in the finite element analysis method is demonstrated by the generation and assembly of beam finite element stiffness matrices. A number of possible schemes for the implementation of N elements on N or n processors (N greater than n) are described, and the speedup factors of their time consumption are determined as a function of the number of available parallel processors.
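
    The flavor of the N-elements-on-n-processors analysis can be captured with simple arithmetic: with equal element costs, the parallel time is proportional to ceil(N/n) task units. This is illustrative only; the paper derives speedup factors for its specific schemes and hardware.

```python
# Idealized speedup of distributing N equal-cost element tasks over n
# processors: parallel time ~ ceil(N/n), so speedup = N / ceil(N/n).
import math

def speedup(N, n):
    return N / math.ceil(N / n)

for n in (2, 4, 8):
    print(n, "processors:", [round(speedup(N, n), 2) for N in (8, 12, 20)])
```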

  9. GOCE Satellite Orbit in a Computational Aspect

    NASA Astrophysics Data System (ADS)

    Bobojc, Andrzej; Drozyner, Andrzej

    2013-04-01

    The presented work plays an important role in research of possibility of the Gravity Field and Steady-State Ocean Circulation Explorer Mission (GOCE) satellite orbit improvement using a combination of satellite to satellite tracking high-low (SST- hl) observations and gravity gradient tensor (GGT) measurements. The orbit improvement process will be started from a computed orbit, which should be close to a reference ("true") orbit as much as possible. To realize this objective, various variants of GOCE orbit were generated by means of the Torun Orbit Processor (TOP) software package. The TOP software is based on the Cowell 8th order numerical integration method. This package computes a satellite orbit in the field of gravitational and non-gravitational forces (including the relativistic and empirical accelerations). The three sets of 1-day orbital arcs were computed using selected geopotential models and additional accelerations generated by the Moon, the Sun, the planets, the Earth and ocean tides, the relativity effects. Selected gravity field models include, among other things, the recent models from the GOCE mission and the models such as EIGEN-6S, EIGEN-5S, EIGEN-51C, ITG-GRACE2010S, EGM2008, EGM96. Each set of 1-day orbital arcs corresponds to the GOCE orbit for arbitrary chosen date. The obtained orbits were compared to the GOCE reference orbits (Precise Science Orbits of the GOCE satellite delivered by the European Space Agency) using the root mean squares (RMS) of the differences between the satellite positions in the computed orbits and in the reference ones. These RMS values are a measure of performance of selected geopotential models in terms of GOCE orbit computation. The RMS values are given for the truncated and whole geopotential models. For the three variants with the best fit to the reference orbits, the empirical acceleration models were added to the satellite motion model. It allowed for further improving the fitting of computed orbits to the

  10. Plane Smoothers for Multiblock Grids: Computational Aspects

    NASA Technical Reports Server (NTRS)

    Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane

    1999-01-01

    Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in the three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.
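
    A line smoother, the 2D analogue of the plane smoothers studied in the report, can be sketched compactly: each line update is a tridiagonal solve along the direction of strong coupling, and a sweep visits all lines in Gauss-Seidel fashion. The model operator, anisotropy strength and sweep count below are made up for illustration.

```python
# One y-line Gauss-Seidel sweep for the anisotropic model operator
#   (2+2*eps)*u[i,j] - u[i-1,j] - u[i+1,j] - eps*(u[i,j-1] + u[i,j+1]) = f[i,j]
# (unit mesh spacing, homogeneous Dirichlet boundary). Each column update is a
# tridiagonal solve; 3D plane smoothers replace it with a 2D solve per plane.
import numpy as np
from scipy.linalg import solve_banded

m, eps = 64, 100.0                     # interior grid size, anisotropy strength
u = np.zeros((m + 2, m + 2))           # includes zero boundary layers
f = np.ones((m + 2, m + 2))

ab = np.zeros((3, m))                  # banded storage for each line system
ab[0, 1:] = -eps                       # super-diagonal (coupling along y)
ab[1, :] = 2.0 + 2.0 * eps             # main diagonal
ab[2, :-1] = -eps                      # sub-diagonal

def yline_sweep(u, f):
    for i in range(1, m + 1):          # Gauss-Seidel ordering over columns
        rhs = f[i, 1:m + 1] + u[i - 1, 1:m + 1] + u[i + 1, 1:m + 1]
        u[i, 1:m + 1] = solve_banded((1, 1), ab, rhs)
    return u

for _ in range(5):
    u = yline_sweep(u, f)
print("max |u| after 5 sweeps:", np.abs(u).max())
```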

  11. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces with a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  12. Benchmarking: More Aspects of High Performance Computing

    SciTech Connect

    Rahul Ravindrudu

    2004-12-19

    …pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed from the start. The out-of-core and thread-based computation do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the algorithms are BLAS algorithms, which assume all the data to be in memory. This is the reason the out-of-core results and OpenMP thread results were presented separately and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O involved for the out-of-core part and better cache utilization for the thread-based computation.

  13. Computational Aspects of N-Mixture Models

    PubMed Central

    Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S

    2015-01-01

    The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
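
    The role of the bound K can be seen directly in a toy implementation of the Poisson N-mixture likelihood: truncating the infinite sum over abundances too early changes the likelihood, and the value stabilizes as K grows. The data and parameter values below are simulated for illustration.

```python
# Poisson N-mixture log-likelihood with a truncation bound K for the infinite
# sum over latent abundances: too small a K biases the likelihood and hence
# the abundance estimates, as the abstract warns.
import numpy as np
from scipy.stats import poisson, binom

rng = np.random.default_rng(7)
lam, p, R, T = 8.0, 0.3, 40, 3                # abundance rate, detection prob.
N_true = rng.poisson(lam, R)                  # latent abundance per site
y = rng.binomial(N_true[:, None], p, (R, T))  # replicated counts

def loglik(lam, p, y, K):
    ll = 0.0
    for counts in y:                          # one site at a time
        Ns = np.arange(counts.max(), K + 1)   # feasible abundances up to K
        site_lik = np.sum(poisson.pmf(Ns, lam)
                          * np.prod(binom.pmf(counts[:, None], Ns, p), axis=0))
        ll += np.log(site_lik)
    return ll

for K in (15, 25, 50, 200):                   # likelihood stabilizes as K grows
    print(f"K={K:4d}  log-likelihood={loglik(lam, p, y, K):.6f}")
```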

  14. Physical aspects of computing the flow of a viscous fluid

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1984-01-01

    One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.

  15. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding, Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

  16. On Undecidability Aspects of Resilient Computations and Implications to Exascale

    SciTech Connect

    Rao, Nageswara S

    2014-01-01

    Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.

  17. Vitamins and trace elements: practical aspects of supplementation.

    PubMed

    Berger, Mette M; Shenkin, Alan

    2006-09-01

    The role of micronutrients in parenteral nutrition include the following: (1) Whenever artificial nutrition is indicated, micronutrients, i.e., vitamins and trace elements, should be given from the first day of artificial nutritional support. (2) Testing blood levels of vitamins and trace elements in acutely ill patients is of very limited value. By using sensible clinical judgment, it is possible to manage patients with only a small amount of laboratory testing. (3) Patients with major burns or major trauma and those with acute renal failure who are on continuous renal replacement therapy or dialysis quickly develop acute deficits in some micronutrients, and immediate supplementation is essential. (4) Other groups at risk are cancer patients, but also pregnant women with hyperemesis and people with anorexia nervosa or other malnutrition or malabsorption states. (5) Clinicians need to treat severe deficits before they become clinical deficiencies. If a patient develops a micronutrient deficiency state while in care, then there has been a severe failure of care. (6) In the early acute phase of recovery from critical illness, where artificial nutrition is generally not indicated, there may still be a need to deliver micronutrients to specific categories of very sick patients. (7) Ideally, trace element preparations should provide a low-manganese product for all and a manganese-free product for certain patients with liver disease. (8) High losses through excretion should be minimized by infusing micronutrients slowly, over as long a period as possible. To avoid interactions, it would be ideal to infuse trace elements and vitamins separately: the trace elements over an initial 12-h period and the vitamins over the next 12-h period. (9) Multivitamin and trace element preparations suitable for most patients requiring parenteral nutrition are widely available, but individual patients may require additional supplements or smaller amounts of certain micronutrients.

  18. Computational Aspects of Data Assimilation and the ESMF

    NASA Technical Reports Server (NTRS)

    daSilva, A.

    2003-01-01

    Developing advanced data assimilation applications is a daunting scientific challenge. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.

  19. Critical Elements of Computer Literacy for Teachers.

    ERIC Educational Resources Information Center

    Overbaugh, Richard C.

    A definition of computer literacy is developed that is broad enough to apply to educators in general, but which leaves room for specificity for particular situations and content areas. The following general domains that comprise computer literacy for all educators are addressed: (1) general computer operations; (2) software, including computer…

  20. Aspects of the major element composition of Halley's dust

    NASA Astrophysics Data System (ADS)

    Jessberger, E. K.; Christoforidis, A.; Kissel, J.

    1988-04-01

    Further attempts to extract chemical information on the solid dust particles of Comet Halley from impact-ionization time-of-flight mass spectrometry are described. Results on average compositions, element groupings, CHON particles, and silicates are discussed. Halley's dust in the vicinity of the Vega-1 spacecraft is found to be a mixture of a refractory organic component and unequilibrated silicates, but detailed chemical information on individual particles is difficult to extract because of the complexity of the impact-ionization process.

  1. Element-by-element and implicit-explicit finite element formulations for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.

    1988-01-01

    Preconditioner algorithms to reduce the computational effort in FEM analyses of large-scale fluid-dynamics problems are presented. A general model problem is constructed on the basis of the convection-diffusion equation and the two-dimensional vorticity/stream-function formulation of the Navier-Stokes equations; this problem is then analyzed using element-by-element, implicit-explicit, and adaptive implicit-explicit approximation schemes. Numerical results for the two-dimensional advection and rigid-body rotation of a cosine hill, flow past a circular cylinder, and driven cavity flow are presented in extensive graphs and shown to be in good agreement with those obtained using implicit methods.

  2. Control aspects of quantum computing using pure and mixed states

    PubMed Central

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J.

    2012-01-01

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. Beyond that, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems. PMID:22946034

  3. Computers in the Library: The Human Element.

    ERIC Educational Resources Information Center

    Magrath, Lynn L.

    1982-01-01

    Discusses library staff and public reaction to the computerization of library operations at the Pikes Peak Library District in Colorado Springs. An outline of computer applications implemented since the inception of the program in 1975 is included. (EJS)

  4. Cohesive surface model for fracture based on a two-scale formulation: computational implementation aspects

    NASA Astrophysics Data System (ADS)

    Toro, S.; Sánchez, P. J.; Podestá, J. M.; Blanco, P. J.; Huespe, A. E.; Feijóo, R. A.

    2016-07-01

    The paper describes the computational aspects and numerical implementation of a two-scale cohesive surface methodology developed for analyzing fracture in heterogeneous materials with complex micro-structures. This approach can be categorized as a semi-concurrent model using the representative volume element concept. A variational multi-scale formulation of the methodology has been previously presented by the authors. Subsequently, the formulation has been generalized and improved in two aspects: (i) cohesive surfaces have been introduced at both scales of analysis, they are modeled with a strong discontinuity kinematics (new equations describing the insertion of the macro-scale strains into the micro-scale and the posterior homogenization procedure have been considered); (ii) the computational procedure and numerical implementation have been adapted for this formulation. The first point has been presented elsewhere and is summarized here. The main objective of this paper is, instead, a rather detailed presentation of the second point. Finite element techniques for modeling cohesive surfaces at both scales of analysis (FE^2 approach) are described: (i) finite elements with embedded strong discontinuities are used for the macro-scale simulation, and (ii) continuum-type finite elements with high aspect ratios, mimicking cohesive surfaces, are adopted for simulating the failure mechanisms at the micro-scale. The methodology is validated through numerical simulation of a quasi-brittle concrete fracture problem. The proposed multi-scale model is capable of unveiling the mechanisms that lead from the material degradation phenomenon at the meso-structural level to the activation and propagation of cohesive surfaces at the structural scale.

  5. Computational aspects in high intensity ultrasonic surgery planning.

    PubMed

    Pulkkinen, A; Hynynen, K

    2010-01-01

    Therapeutic ultrasound treatment planning is discussed and computational aspects regarding it are reviewed. Nonlinear ultrasound simulations were solved with a combined frequency domain Rayleigh and KZK model. Ultrasonic simulations were combined with thermal simulations and were used to compute heating of muscle tissue in vivo for four different focused ultrasound transducers. The simulations were compared with measurements and good agreement was found for large F-number transducers. However, at F# 1.9 the simulated rate of temperature rise was approximately a factor of 2 higher than the measured values. The power levels used with the F# 1 transducer were too low to show any nonlinearity. The simulations were used to investigate the importance of nonlinearities generated in the coupling water, and also the importance of including skin in the simulations. Ignoring either of these in the model would lead to larger errors. Most notably, the nonlinearities generated in the water can enhance the focal temperature by more than 100%. The simulations also demonstrated that pulsed high power sonications may provide an opportunity to significantly (up to a factor of 3) reduce the treatment time. In conclusion, nonlinear propagation can play an important role in shaping the energy distribution during a focused ultrasound treatment and it should not be ignored in planning. However, the current simulation methods are accurate only with relatively large F-numbers and better models need to be developed for sharply focused transducers. PMID:19740625
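
    The thermal half of such a planning chain is commonly the Pennes bioheat equation driven by an acoustic heating term. The sketch below assumes that standard model (the abstract does not name its thermal model), with generic textbook tissue parameters and a made-up Gaussian focal heating term standing in for the acoustic simulation's output.

```python
# Explicit finite-difference integration of the 1D Pennes bioheat equation
#   rho*c dT/dt = k T'' - wb*cb*(T - Ta) + Q,
# with Q the focal heating a nonlinear acoustic model would supply.
import numpy as np

rho, c, k = 1050.0, 3600.0, 0.5        # tissue density, heat cap., conductivity
wb, cb, Ta = 0.5, 3800.0, 37.0         # perfusion (kg/m^3/s), blood heat cap.
nx, dx, dt = 201, 1e-4, 1e-3           # 2 cm domain, 0.1 mm grid, 1 ms step

x = (np.arange(nx) - nx // 2) * dx
Q = 2e7 * np.exp(-(x / 1e-3) ** 2)     # made-up Gaussian focal heating, W/m^3
T = np.full(nx, 37.0)

alpha = k / (rho * c)
assert alpha * dt / dx**2 < 0.5        # explicit stability limit
for _ in range(3000):                  # 3 s of sonication
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    lap[[0, -1]] = 0.0                 # boundaries held at body temperature
    T = T + dt * (alpha * lap - wb * cb * (T - Ta) / (rho * c) + Q / (rho * c))
    T[[0, -1]] = 37.0
print(f"peak temperature after 3 s: {T.max():.1f} degC")
```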

  6. Higher-Order Finite Elements for Computing Thermal Radiation

    NASA Technical Reports Server (NTRS)

    Gould, Dana C.

    2004-01-01

    Two variants of the finite-element method have been developed for use in computational simulations of radiative transfers of heat among diffuse gray surfaces. Both variants involve the use of higher-order finite elements, across which temperatures and radiative quantities are assumed to vary according to certain approximations. In this and other applications, higher-order finite elements are used to increase (relative to classical finite elements, which are assumed to be isothermal) the accuracies of final numerical results without having to refine computational meshes excessively and thereby incur excessive computation times. One of the variants is termed the radiation sub-element (RSE) method, which, itself, is subject to a number of variations. This is the simplest and most straightforward approach to representation of spatially variable surface radiation. Any computer code that, heretofore, could model surface-to-surface radiation can incorporate the RSE method without major modifications. In the basic form of the RSE method, each finite element selected for use in computing radiative heat transfer is considered to be a parent element and is divided into sub-elements for the purpose of solving the surface-to-surface radiation-exchange problem. The sub-elements are then treated as classical finite elements; that is, they are assumed to be isothermal, and their view factors and absorbed heat fluxes are calculated accordingly. The heat fluxes absorbed by the sub-elements are then transferred back to the parent element to obtain a radiative heat flux that varies spatially across the parent element. Variants of the RSE method involve the use of polynomials to interpolate and/or extrapolate to approximate spatial variations of physical quantities. The other variant of the finite-element method is termed the integration method (IM). Unlike in the RSE methods, the parent finite elements are not subdivided into smaller elements, and neither isothermality nor other…

  7. Adaptive Finite-Element Computation In Fracture Mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1995-01-01

    Report discusses recent progress in use of solution-adaptive finite-element computational methods to solve two-dimensional problems in linear elastic fracture mechanics. Method also shown extensible to three-dimensional problems.

  8. Optically intraconnected computer employing dynamically reconfigurable holographic optical element

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A. (Inventor)

    1992-01-01

    An optically intraconnected computer and a reconfigurable holographic optical element employed therein. The basic computer comprises a memory for holding a sequence of instructions to be executed; logic for accessing the instructions in sequence; logic for determining, for each instruction, the function to be performed and its effective address; a plurality of individual elements on a common support substrate optimized to perform certain logical sequences employed in executing the instructions; and element selection logic, connected to the function-determining logic, for determining the class of each function and for causing each instruction to be executed by those elements which perform the associated logical sequences in a manner that optimizes instruction execution. In the optically intraconnected version, the element selection logic is adapted for transmitting and switching signals to the elements optically.

  9. Algorithms for computer detection of symmetry elements in molecular systems.

    PubMed

    Beruski, Otávio; Vidal, Luciano N

    2014-02-01

    Simple procedures for the location of proper and improper rotations and reflection planes are presented. The search is performed with a molecule divided into subsets of symmetrically equivalent atoms (SEA) which are analyzed separately as if they were a single molecule. This approach is advantageous in many aspects. For instance, in those molecules that are symmetric rotors, the number of atoms and the inertia tensor of the SEA provide a straightforward way to find proper rotations of any order. The algorithms are invariant to the molecular orientation and their computational cost is low, because the main information required to find symmetry elements consists of interatomic distances and the principal moments of the SEA. For example, our Fortran implementation, running on a single processor, took only a few seconds to locate all 120 symmetry operations of the large and highly symmetrical fullerene C720, belonging to the Ih point group. Finally, we show how the interatomic distances matrix of a slightly unsymmetrical molecule is used to symmetrize its geometry. PMID:24403016
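
    Two of the ingredients described above can be sketched compactly: grouping atoms into SEA by comparing sorted interatomic-distance lists, and computing each SEA's inertia tensor, whose degenerate principal moments flag candidate rotation axes. The tolerance and the toy square "molecule" are made up; this is a simplified reconstruction, not the authors' Fortran code.

```python
# (1) Split a molecule into subsets of symmetrically equivalent atoms (SEA)
# by comparing each atom's sorted distance list; (2) compute each SEA's
# inertia tensor, whose degenerate principal moments hint at rotation axes.
import numpy as np

# Planar square of identical atoms plus one central atom (a C4v-like toy case).
xyz = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0], [0, 0, 0.0]])
labels = ["X", "X", "X", "X", "Y"]

d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
keys = [(labels[i], tuple(np.round(np.sort(d[i]), 6))) for i in range(len(xyz))]

sea = {}
for i, key in enumerate(keys):           # atoms with identical distance
    sea.setdefault(key, []).append(i)    # multisets fall into the same SEA

for atoms in sea.values():
    r = xyz[atoms] - xyz[atoms].mean(axis=0)        # center the subset
    # Inertia tensor with unit masses: I = sum(|r|^2 * Id - r r^T).
    I = sum(np.dot(v, v) * np.eye(3) - np.outer(v, v) for v in r)
    moments = np.sort(np.linalg.eigvalsh(I))
    print(atoms, "principal moments:", np.round(moments, 6))
```

    For the four corner atoms the moments come out as (2, 2, 4): the degenerate pair marks the square as a symmetric rotor with its candidate C4 axis along the unique principal direction.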

  10. Secular perturbation theory and computation of asteroid proper elements

    NASA Technical Reports Server (NTRS)

    Milani, Andrea; Knezevic, Zoran

    1991-01-01

    A new theory for the calculation of proper elements is presented. This theory defines an explicit algorithm applicable to any chosen set of orbits and accounts for the effect of shallow resonances on secular frequencies. The proper elements are computed with an iterative algorithm and the behavior of the iteration can be used to define a quality code.
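
    The pattern described in the abstract, an iterative algorithm whose convergence behavior yields a quality code, can be sketched generically. The contraction map below is a toy stand-in for the real secular theory; the tolerance, iteration budget, and code thresholds are all assumptions.

```python
# Generic shape of the procedure: iterate a map that strips the forced part
# from the osculating element, monitor the iteration, and turn its behavior
# into a quality code (slow or failed convergence flags, e.g., resonances).
import numpy as np

def compute_proper(e_osc, step, tol=1e-10, max_iter=50):
    e, history = e_osc, []
    for _ in range(max_iter):
        e_next = step(e)
        history.append(abs(e_next - e))
        e = e_next
        if history[-1] < tol:
            break
    # Quality code: 0 = fast clean convergence, 1 = slow convergence,
    # 2 = not converged within the iteration budget.
    if history[-1] >= tol:
        return e, 2
    return e, (0 if len(history) <= 10 else 1)

toy_step = lambda e: 0.05 + 0.4 * np.sin(e)   # contractive toy "theory"
e_proper, code = compute_proper(0.15, toy_step)
print(f"proper element ~ {e_proper:.10f}, quality code {code}")
```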

  11. A computer graphics program for general finite element analyses

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Sawyer, L. M.

    1978-01-01

    Documentation for a computer graphics program for displays from general finite element analyses is presented. A general description of display options and detailed user instructions are given. Several plots made in structural, thermal and fluid finite element analyses are included to illustrate program options. Sample data files are given to illustrate use of the program.

  12. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method in two-dimensional linear elastic fracture mechanics problems are presented. The focus is on validating the application of the new methodology to fracture mechanics by computing demonstration problems and comparing the resulting stress intensity factors to analytical results.

  13. Computational aspects of steel fracturing pertinent to naval requirements.

    PubMed

    Matic, Peter; Geltmacher, Andrew; Rath, Bhakta

    2015-03-28

    Modern high strength and ductile steels are a key element of US Navy ship structural technology. The development of these alloys spurred the development of modern structural integrity analysis methods over the past 70 years. Strength and ductility provided the designers and builders of navy surface ships and submarines with the opportunity to reduce ship structural weight, increase hull stiffness, increase damage resistance, improve construction practices and reduce maintenance costs. This paper reviews how analytical and computational tools, driving simulation methods and experimental techniques, were developed to provide ongoing insights into the material, damage and fracture characteristics of these alloys. The need to understand alloy fracture mechanics provided unique motivations to measure and model performance from structural to microstructural scales. This was done while accounting for the highly nonlinear behaviours of both materials and underlying fracture processes. Theoretical methods, data acquisition strategies, computational simulation and scientific imaging were applied to increasingly smaller scales and complex materials phenomena under deformation. Knowledge gained about fracture resistance was used to meet minimum fracture initiation, crack growth and crack arrest characteristics as part of overall structural integrity considerations. PMID:25713445

  14. The finite element machine - An assessment of the impact of parallel computing on future finite element computations

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.

    1986-01-01

    The requirements of complex aerospace vehicles, combined with the age of existing structural analysis systems, enhance the need to advance technology toward a new generation of structural analysis capability. Recent and impending advances in parallel and supercomputers provide the opportunity to significantly improve these structural analysis capabilities for large-order finite element problems. Long-term research in parallel computing, associated with the NASA Finite Element Machine project, is discussed. The results show the potential of parallel computers to provide substantial increases in computation speed over sequential computers. Results are given for sample problems in the areas of eigenvalue analysis and transient response.

  15. The Impact of Instructional Elements in Computer-Based Instruction

    ERIC Educational Resources Information Center

    Martin, Florence; Klein, James D.; Sullivan, Howard

    2007-01-01

    This study investigated the effects of several elements of instruction (objectives, information, practice, examples and review) when they were combined in a systematic manner. College students enrolled in a computer literacy course used one of six different versions of a computer-based lesson delivered on the web to learn about input, processing,…

  16. Acceleration of matrix element computations for precision measurements

    SciTech Connect

    Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; Ye, Zhenyu

    2014-11-25

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.
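
    The low-discrepancy idea is easy to demonstrate in isolation. The sketch below, which is generic and not the authors' code, compares pseudorandom and scrambled Sobol' points on a smooth two-dimensional integrand, where quasi-random error typically decays nearly as O(1/N) rather than O(1/sqrt(N)):

```python
import numpy as np
from scipy.stats import qmc

f = lambda x: np.prod(np.sin(np.pi * x), axis=1)  # exact integral: (2/pi)^2
exact = (2.0 / np.pi) ** 2

n, dim = 2**12, 2
pseudo = np.random.default_rng(0).random((n, dim))         # pseudorandom points
sobol = qmc.Sobol(d=dim, scramble=True, seed=0).random(n)  # low-discrepancy points

print("pseudorandom error:", abs(f(pseudo).mean() - exact))
print("Sobol' error:      ", abs(f(sobol).mean() - exact))
```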

  17. Introducing the Practical Aspects of Computational Chemistry to Undergraduate Chemistry Students

    ERIC Educational Resources Information Center

    Pearson, Jason K.

    2007-01-01

    Various efforts are being made to introduce the different physical aspects and uses of computational chemistry to undergraduate chemistry students. A new laboratory approach that demonstrates all such aspects via experiments has been devised for this purpose.

  18. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).

  19. Finite element dynamic analysis on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lambiotte, J. J., Jr.

    1978-01-01

    Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
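
    To make the temporal discretization concrete, here is a minimal sketch of the central-difference explicit scheme for M u'' + K u = f(t); it is a generic illustration of the scheme named above, not the STAR-100 implementation:

```python
import numpy as np

def central_difference(M, K, f, u0, v0, dt, steps):
    """Explicit central-difference integration of M u'' + K u = f(t), one of
    the two temporal schemes mentioned above (the other is Newmark's implicit
    scheme). Stable only below the usual critical time step."""
    Minv = np.linalg.inv(M)                     # fine for small demo systems
    u = u0.copy()
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * (Minv @ (f(0.0) - K @ u0))
    history = [u0.copy()]
    for n in range(steps):
        a = Minv @ (f(n * dt) - K @ u)          # acceleration at t_n
        u_next = 2.0 * u - u_prev + dt**2 * a   # central-difference update
        u_prev, u = u, u_next
        history.append(u.copy())
    return np.array(history)

# Single undamped oscillator, omega = 1, so u(t) = cos(t).
M, K = np.eye(1), np.eye(1)
traj = central_difference(M, K, lambda t: np.zeros(1),
                          np.array([1.0]), np.zeros(1), 0.01, 628)
print(traj[-1])   # close to cos(6.28) ~ 1.0
```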

  20. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming linear variation of displacements and tractions within each line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.
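
    The row-wise independence that makes this method inherently parallel can be sketched in a few lines; the log kernel below is a generic stand-in, not the paper's elasticity influence coefficients, and all names are illustrative:

```python
import numpy as np
from multiprocessing import Pool

POINTS = np.random.default_rng(1).random((200, 2))  # stand-in collocation points

def influence_row(i):
    """One row of the dense influence matrix depends only on collocation
    point i, so rows can be formed concurrently (a 2-D Laplace-like log
    kernel stands in for the elasticity influence functions)."""
    d = np.linalg.norm(POINTS - POINTS[i], axis=1)
    d[i] = 1.0                    # avoid log(0) at the singular self-term
    row = -np.log(d)
    row[i] = 1.0                  # stand-in diagonal (self-influence) entry
    return row

if __name__ == "__main__":
    with Pool(4) as pool:                         # concurrent matrix formation
        A = np.array(pool.map(influence_row, range(len(POINTS))))
    x = np.linalg.solve(A, np.ones(len(POINTS)))  # the solve parallelizes too
    print(A.shape, x[:3])
```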

  1. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.

  2. Some aspects of the computer simulation of conduction heat transfer and phase change processes

    SciTech Connect

    Solomon, A. D.

    1982-04-01

    Various aspects of phase change processes in materials are discussed, including computer modeling, validation of results, and sensitivity. In addition, the possible incorporation of cognitive activities in computational heat transfer is examined.

  3. Rad-hard computer elements for space applications

    NASA Technical Reports Server (NTRS)

    Krishnan, G. S.; Longerot, Carl D.; Treece, R. Keith

    1993-01-01

    Space Hardened CMOS computer elements emulating a commercial microcontroller and microprocessor family have been designed, fabricated, qualified, and delivered for a variety of space programs including NASA's multiple launch International Solar-Terrestrial Physics (ISTP) program, Mars Observer, and government and commercial communication satellites. Design techniques and radiation performance of the 1.25 micron feature size products are described.

  4. On the effects of grid ill-conditioning in three dimensional finite element vector potential magnetostatic field computations

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1990-01-01

    The effects of finite element grid geometries and associated ill-conditioning were studied in single-medium and multi-media (air-iron) three dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single-medium applications the unconstrained magnetic vector potential curl-curl formulation in conjunction with first order finite elements produces global results which are almost totally insensitive to grid geometries. However, it was found that in multi-media (air-iron) applications first order finite element results are sensitive to grid geometries and consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by the use of second order finite elements in the field computation algorithms. Practical examples are given to demonstrate these effects.

  5. Modeling of rolling element bearing mechanics. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Greenhill, Lyn M.; Merchant, David H.

    1994-01-01

    This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code, which determines the quasistatic response to external loads or displacements of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It comprises two main programs: the Preprocessor for Bearing Analysis (PREBAN), which creates the input files for the main analysis program, and the Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. This report addresses input instructions for and features of the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  6. A computational study of nodal-based tetrahedral element behavior.

    SciTech Connect

    Gullerud, Arne S.

    2010-09-01

    This report explores the behavior of nodal-based tetrahedral elements on six sample problems, and compares their solution to that of a corresponding hexahedral mesh. The problems demonstrate that while certain aspects of the solution field for the nodal-based tetrahedrons provide good quality results, the pressure field tends to be of poor quality. Results appear to be strongly affected by the connectivity of the tetrahedral elements. Simulations that rely on the pressure field, such as those which use material models that are dependent on the pressure (e.g. equation-of-state models), can generate erroneous results. Remeshing can also be strongly affected by these issues. The nodal-based test elements as they currently stand need to be used with caution to ensure that their numerical deficiencies do not adversely affect critical values of interest.

  7. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, and theses/dissertations, and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, theses/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

  8. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.

  9. A stochastic method for computing hadronic matrix elements

    DOE PAGESBeta

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Drach, Vincent; Jansen, Karl; Hadjiyiannakou, Kyriakos; Renner, Dru B.

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions that is an alternative to the typically used sequential method and offers more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume, and we find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.
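
    The flavor of stochastic estimation, and of its noise-average error, can be illustrated with a generic Z2-noise trace estimator; this is an analogue only, not the paper's baryon 3-point-function construction:

```python
import numpy as np

def stochastic_trace(A, n_noise, rng):
    """Hutchinson-style Z2-noise estimator of Tr(A): with components
    eta_i = +-1, E[eta^T A eta] = Tr(A). Lattice stochastic methods average
    the same kind of noise vectors to estimate propagator contractions."""
    n = A.shape[0]
    estimates = []
    for _ in range(n_noise):
        eta = rng.choice([-1.0, 1.0], size=n)   # Z2 noise vector
        estimates.append(eta @ A @ eta)
    estimates = np.array(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_noise)

rng = np.random.default_rng(0)
A = rng.random((100, 100))
mean, err = stochastic_trace(A, 400, rng)
print(mean, "+/-", err, "exact:", np.trace(A))
```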

  10. Computational design aspects of a NASP nozzle/afterbody experiment

    NASA Technical Reports Server (NTRS)

    Ruffin, Stephen M.; Venkatapathy, Ethiraj; Keener, Earl R.; Nagaraj, N.

    1989-01-01

    This paper highlights the influence of computational methods on design of a wind tunnel experiment which generically models the nozzle/afterbody flow field of the proposed National Aerospace Plane. The rectangular slot nozzle plume flow field is computed using a three-dimensional, upwind, implicit Navier-Stokes solver. Freestream Mach numbers of 5.3, 7.3, and 10 are investigated. Two-dimensional parametric studies of various Mach numbers, pressure ratios, and ramp angles are used to help determine model loads and afterbody ramp angle and length. It was found that the center of pressure on the ramp occurs at nearly the same location for all ramp angles and test conditions computed. Also, to prevent air liquefaction, it is suggested that a helium-air mixture be used as the jet gas for the highest Mach number test case.

  11. Implicit extrapolation methods for multilevel finite element computations

    SciTech Connect

    Jung, M.; Ruede, U.

    1994-12-31

    The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that it differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore requires neither uniform meshes nor global regularity assumptions. In the paper the authors analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results are compared to higher-order finite element solutions.

  12. Huber's M-estimation in relative GPS positioning: computational aspects

    NASA Astrophysics Data System (ADS)

    Chang, X.-W.; Guo, Y.

    2005-08-01

    When GPS signal measurements have outliers, using least squares (LS) estimation is likely to give poor position estimates. One of the typical approaches to handle this problem is to use robust estimation techniques. We study the computational issues of Huber’s M-estimation applied to relative GPS positioning. First for code-based relative positioning, we use simulation results to show that Newton’s method usually converges faster than the iteratively reweighted least squares (IRLS) method, which is often used in geodesy for computing robust estimates of parameters. Then for code- and carrier-phase-based relative positioning, we present a recursive modified Newton method to compute Huber’s M-estimates of the positions. The structures of the model are exploited to make the method efficient, and orthogonal transformations are used to ensure numerical reliability of the method. Economical use of computer memory is also taken into account in designing the method. Simulation results show that the method is effective.
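
    For orientation, here is a minimal IRLS sketch for Huber's M-estimation of a linear model; it is generic, and does not reproduce the paper's recursive modified Newton method or the GPS model structure:

```python
import numpy as np

def huber_irls(A, b, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of x in b ~ Ax via iteratively reweighted least
    squares (IRLS): residuals beyond the threshold c*s are down-weighted by
    c*s/|r|, so outliers lose influence. Newton's method (as in the paper)
    typically converges in fewer iterations; IRLS is the common baseline."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]               # LS starting point
    for _ in range(max_iter):
        r = b - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)                                    # weighted LS solve
        x_new = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
        if np.linalg.norm(x_new - x) < tol * (1 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Outlier-contaminated line fit: the Huber estimate stays near slope 2.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
b = 2 * t + 0.01 * rng.standard_normal(50)
b[::10] += 5.0                                  # gross outliers
A = np.column_stack([t, np.ones_like(t)])
print(huber_irls(A, b))
```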

  13. Administrative and Financial Aspects of Computers in Education

    ERIC Educational Resources Information Center

    Rush, James E.

    1970-01-01

    Paper presented at the Education and Information Science Symposium, sponsored by the Ohio Chapters of the American Society for Information Science in cooperation with the Department of Computer and Information Science, The Ohio State University, June 23 and 24, 1969. (MF)

  14. Technical Aspects of Computer-Assisted Instruction in Chinese.

    ERIC Educational Resources Information Center

    Cheng, Chin-Chaun; Sherwood, Bruce

    1981-01-01

    Computer assisted instruction in Chinese is considered in relation to the design and recognition of Chinese characters, speech synthesis of the standard Chinese language, and the identification of Chinese tone. The PLATO work has shifted its orientation from provision of supplementary courseware to implementation of independent lessons and…

  15. Some Aspects of uncertainty in computational fluid dynamics results

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1991-01-01

    Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainties are discussed, and some recommendations are made for the quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
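
    The sensitivity-based approach can be sketched generically: estimate output gradients by finite differences and combine input uncertainties in quadrature. This is a first-order illustration under assumed inputs, not a full CFD uncertainty methodology:

```python
import numpy as np

def sensitivity_uncertainty(f, x, dx, h=1e-6):
    """First-order sensitivity-based uncertainty estimate: central-difference
    gradients df/dx_i combined with input uncertainties dx_i in quadrature."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return np.sqrt(np.sum((grad * np.asarray(dx)) ** 2))

# Drag-like quantity q = 0.5*rho*v^2*cd with uncertain rho, v, cd.
q = lambda p: 0.5 * p[0] * p[1] ** 2 * p[2]
print(sensitivity_uncertainty(q, [1.2, 30.0, 0.3], [0.02, 0.5, 0.01]))
```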

  16. Fast computation of the acoustic field for ultrasound elements.

    PubMed

    Güven, H Emre; Miller, Eric L; Cleveland, Robin O

    2009-09-01

    A fast method for computing the acoustic field of ultrasound transducers is presented with application to rectangular elements that are cylindrically focused. No closed-form solutions exist for this case but several numerical techniques have been described in the ultrasound imaging literature. Our motivation is the rapid calculation of imaging kernels for physics-based diagnostic imaging for which current methods are too computationally intensive. Here, the surface integral defining the acoustic field from a baffled piston is converted to a 3-D spatial convolution of the element surface and the Green's function. A 3-D version of the overlap-save method from digital signal processing is employed to obtain a fast computational algorithm based on spatial Fourier transforms. Further efficiency is gained by using a separable approximation to the Green's function through singular value decomposition and increasing the effective sampling rate by polyphase filtering. The tradeoff between accuracy and spatial sampling rate is explored to determine appropriate parameters for a specific transducer. Comparisons with standard tools such as Field II are presented, where nearly 2 orders of magnitude improvement in computation speed is observed for similar accuracy. PMID:19811993
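
    The core convolution idea can be sketched as follows: a simplified two-dimensional, single-frequency illustration using a plain FFT convolution, whereas the paper adds overlap-save blocking, an SVD-separable Green's function, and polyphase filtering; the parameter values below are assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

# The Rayleigh-type surface integral becomes a spatial convolution of the
# sampled element aperture with the free-space Green's function, which FFTs
# evaluate cheaply on a regular grid.
k = 2 * np.pi / 1.5e-3                   # wavenumber for ~1 MHz in water
dx = 0.05e-3                             # spatial sampling step (m)
x = np.arange(-256, 256) * dx
X, Y = np.meshgrid(x, x)

aperture = (np.abs(X) < 2e-3) & (np.abs(Y) < 0.5e-3)  # rectangular element
r = np.sqrt(X**2 + Y**2 + (5e-3) ** 2)                # distance to plane z = 5 mm
green = np.exp(1j * k * r) / r                        # free-space Green's function

field = fftconvolve(aperture.astype(complex), green, mode="same") * dx * dx
print(np.abs(field).max())
```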

  17. Acceleration of matrix element computations for precision measurements

    DOE PAGESBeta

    Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; Ye, Zhenyu

    2014-11-25

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.

  18. Boundary element analysis on vector and parallel computers

    NASA Technical Reports Server (NTRS)

    Kane, J. H.

    1994-01-01

    Boundary element analysis (BEA) can be characterized as a numerical technique that generally shifts the computational burden in the analysis toward numerical integration and the solution of nonsymmetric and either dense or blocked sparse systems of algebraic equations. Researchers have explored the concept that the fundamental characteristics of BEA can be exploited to generate effective implementations on vector and parallel computers. In this paper, the results of some of these investigations are discussed. The performance of overall algorithms for BEA on vector supercomputers, massively data parallel single instruction multiple data (SIMD), and relatively fine grained distributed memory multiple instruction multiple data (MIMD) computer systems is described. Some general trends and conclusions are discussed, along with indications of future developments that may prove fruitful in this regard.

  19. Compute Element and Interface Box for the Hazard Detection System

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Khanoyan, Garen; Stern, Ryan A.; Some, Raphael R.; Bailey, Erik S.; Carson, John M.; Vaughan, Geoffrey M.; Werner, Robert A.; Salomon, Phil M.; Martin, Keith E.; Spaulding, Matthew D.; Luna, Michael E.; Motaghedi, Shui H.; Trawny, Nikolas; Johnson, Andrew E.; Ivanov, Tonislav I.; Huertas, Andres; Whitaker, William D.; Goldberg, Steven B.

    2013-01-01

    The Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is building a sensor that enables a spacecraft to evaluate autonomously a potential landing area to generate a list of hazardous and safe landing sites. It will also provide navigation inputs relative to those safe sites. The Hazard Detection System Compute Element (HDS-CE) box combines a field-programmable gate array (FPGA) board for sensor integration and timing, with a multicore computer board for processing. The FPGA does system-level timing and data aggregation, and acts as a go-between, removing the real-time requirements from the processor and labeling events with a high resolution time. The processor manages the behavior of the system, controls the instruments connected to the HDS-CE, and services the "heavy lifting" computational requirements for analyzing the potential landing spots.

  20. Continuum mechanical and computational aspects of material behavior

    SciTech Connect

    Fried, Eliot; Gurtin, Morton E.

    2000-02-10

    The focus of the work is the application of continuum mechanics to materials science, specifically to the macroscopic characterization of material behavior at small length scales. The long-term goals are a continuum-mechanical framework for the study of materials that provides a basis for general theories and leads to boundary-value problems of physical relevance, and computational methods appropriate to these problems supplemented by physically meaningful regularizations to aid in their solution. Specific studies include the following: the development of a theory of polycrystalline plasticity that incorporates free energy associated with lattice mismatch between grains; the development of a theory of geometrically necessary dislocations within the context of finite-strain plasticity; the development of a gradient theory for single-crystal plasticity with geometrically necessary dislocations; simulations of dynamical fracture using a theory that allows for the kinking and branching of cracks; computation of segregation and compaction in flowing granular materials.

  1. Computational aspects of sensitivity calculations in linear transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, W. H.; Haftka, R. T.

    1991-01-01

    The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear transient structural response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.

  2. Theoretical aspects of light-element alloys under extremely high pressure

    NASA Astrophysics Data System (ADS)

    Feng, Ji

    In this Dissertation, we present theoretical studies on the geometric and electronic structure of light-element alloys under high pressure. The first three Chapters are concerned with specific compounds, namely, SiH4, CaLi2, and BexLi1-x, and associated structural and electronic phenomena arising in our computational studies. In the fourth Chapter, we attempt to develop a unified view of the relationship between the electronic and geometric structure of light-element alloys under pressure, by focusing on the states near the Fermi level in these metals.

  3. Computational aspects of the continuum quaternionic wave functions for hydrogen

    SciTech Connect

    Morais, J.

    2014-10-15

    Over the past few years considerable attention has been given to the role played by the Hydrogen Continuum Wave Functions (HCWFs) in quantum theory. The HCWFs arise via the method of separation of variables for the time-independent Schrödinger equation in spherical coordinates. The HCWFs are composed of products of a radial part involving associated Laguerre polynomials multiplied by exponential factors and an angular part that is the spherical harmonics. In the present paper we introduce the continuum wave functions for hydrogen within quaternionic analysis ((R)QHCWFs), a result which is not available in the existing literature. In particular, the underlying functions are of three real variables and take values in either the reduced or the full quaternions (identified, respectively, with R^3 and R^4). We prove that the (R)QHCWFs are orthonormal to one another. The representation of these functions in terms of the HCWFs is explicitly given, from which several recurrence formulae for fast computer implementations can be derived. A summary of fundamental properties and further computation of the hydrogen-like atom transforms of the (R)QHCWFs are also discussed. We address all the above and explore some basic facts of the arising quaternionic function theory. As an application, we provide the reader with plot simulations that demonstrate the effectiveness of our approach. (R)QHCWFs are new in the literature and have some consequences that are now under investigation.

  4. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
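
    The first two techniques are ordinary difference operators, and their truncation behavior is easy to see on a stand-in response function (illustrative only; the paper applies them to modal-basis transient responses):

```python
import numpy as np

# Forward vs. central differences for a design sensitivity dq/dp. Central
# differencing costs one extra analysis per parameter but has O(h^2) rather
# than O(h) truncation error.
def response(p):
    return np.sin(3.0 * p) * np.exp(-p)   # stand-in response quantity q(p)

p0 = 0.7
exact = (3.0 * np.cos(3.0 * p0) - np.sin(3.0 * p0)) * np.exp(-p0)

for h in (1e-1, 1e-2, 1e-3):
    fwd = (response(p0 + h) - response(p0)) / h
    ctr = (response(p0 + h) - response(p0 - h)) / (2 * h)
    print(f"h={h:.0e}  forward err={abs(fwd - exact):.2e}  "
          f"central err={abs(ctr - exact):.2e}")
```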

  5. Computational and theoretical aspects of biomolecular structure and dynamics

    SciTech Connect

    Garcia, A.E.; Berendzen, J.; Catasti, P.; Chen, X.

    1996-09-01

    This is the final report for a project that sought to evaluate and develop theoretical, and computational bases for designing, performing, and analyzing experimental studies in structural biology. Simulations of large biomolecular systems in solution, hydrophobic interactions, and quantum chemical calculations for large systems have been performed. We have developed a code that implements the Fast Multipole Algorithm (FMA) that scales linearly in the number of particles simulated in a large system. New methods have been developed for the analysis of multidimensional NMR data in order to obtain high resolution atomic structures. These methods have been applied to the study of DNA sequences in the human centromere, sequences linked to genetic diseases, and the dynamics and structure of myoglobin.

  6. Behavioral and computational aspects of language and its acquisition

    NASA Astrophysics Data System (ADS)

    Edelman, Shimon; Waterfall, Heidi

    2007-12-01

    One of the greatest challenges facing the cognitive sciences is to explain what it means to know a language, and how the knowledge of language is acquired. The dominant approach to this challenge within linguistics has been to seek an efficient characterization of the wealth of documented structural properties of language in terms of a compact generative grammar: ideally, the minimal necessary set of innate, universal, exception-less, highly abstract rules that jointly generate all and only the observed phenomena and are common to all human languages. We review developmental, behavioral, and computational evidence that seems to favor an alternative view of language, according to which linguistic structures are generated by a large, open set of constructions of varying degrees of abstraction and complexity, which embody both form and meaning and are acquired through socially situated experience in a given language community, by probabilistic learning algorithms that resemble those at work in other cognitive modalities.

  7. Computational analysis of promoter elements and chromatin features in yeast.

    PubMed

    Wyrick, John J

    2012-01-01

    Regulatory elements in promoter sequences typically function as binding sites for transcription factor proteins and thus are critical determinants of gene transcription. There is growing evidence that chromatin features, such as histone modifications or nucleosome positions, also have important roles in transcriptional regulation. Recent functional genomics and computational studies have yielded extensive datasets cataloging transcription factor binding sites (TFBS) and chromatin features, such as nucleosome positions, throughout the yeast genome. However, much of this data can be difficult to navigate or analyze efficiently. This chapter describes practical methods for the visualization, data mining, and statistical analysis of yeast promoter elements and chromatin features using two Web-accessible bioinformatics databases: ChromatinDB and Ceres. PMID:22113279

  8. Chemical aspects of pellet-cladding interaction in light water reactor fuel elements

    SciTech Connect

    Olander, D.R.

    1982-01-01

    In contrast to the extensive literature on the mechanical aspects of pellet-cladding interaction (PCI) in light water reactor fuel elements, the chemical features of this phenomenon are so poorly understood that there is still disagreement concerning the chemical agent responsible. Since the earliest work by Rosenbaum, Davies and Pon, laboratory and in-reactor experiments designed to elucidate the mechanism of PCI fuel rod failures have concentrated almost exclusively on iodine. The assumption that this is the responsible chemical agent is contained in models of PCI which have been constructed for incorporation into fuel performance codes. The evidence implicating iodine is circumstantial, being based primarily upon the volatility and significant fission yield of this element and on the microstructural similarity of the failed Zircaloy specimens exposed to iodine in laboratory stress corrosion cracking (SCC) tests to cladding failures by PCI.

  9. SYMBMAT: Symbolic computation of quantum transition matrix elements

    NASA Astrophysics Data System (ADS)

    Ciappina, M. F.; Kirchner, T.

    2012-08-01

    We have developed a set of Mathematica notebooks to compute symbolically quantum transition matrices relevant for atomic ionization processes. The utilization of a symbolic language allows us to obtain analytical expressions for the transition matrix elements required in charged-particle and laser induced ionization of atoms. Additionally, by using a few simple commands, it is possible to export these symbolic expressions to standard programming languages, such as Fortran or C, for the subsequent computation of differential cross sections or other observables. One of the main drawbacks in the calculation of transition matrices is the tedious algebraic work required when initial states other than the simple hydrogenic 1s state need to be considered. Using these notebooks the work is dramatically reduced and it is possible to generate exact expressions for a large set of bound states. We present explicit examples of atomic collisions (in First Born Approximation and Distorted Wave Theory) and laser-matter interactions (within the Dipole and Strong Field Approximations and different gauges) using both hydrogenic wavefunctions and Slater-Type Orbitals with arbitrary nlm quantum numbers as initial states. Catalogue identifier: AEMI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 71 628 No. of bytes in distributed program, including test data, etc.: 444 195 Distribution format: tar.gz Programming language: Mathematica Computer: Single machines using Linux or Windows (with cores with any clock speed, cache memory and bits in a word) Operating system: Any OS that supports Mathematica. The notebooks have been tested under Windows and Linux and with versions 6.x, 7.x and 8.x Classification: 2.6 Nature of problem

  10. Aspects of Quantum Computing with Polar Paramagnetic Molecules

    NASA Astrophysics Data System (ADS)

    Karra, Mallikarjun; Friedrich, Bretislav

    2015-05-01

    Since the original proposal by DeMille, arrays of optically trapped ultracold polar molecules have been considered among the most promising prototype platforms for the implementation of a quantum computer. The qubit of a molecular array is realized by a single dipolar molecule entangled via its dipole-dipole interaction with the rest of the array's molecules. A superimposed inhomogeneous electric field precludes the quenching of the body-fixed dipole moments by rotation, and a time-dependent external field controls the qubits to perform gate operations. Much like our previous work, in which we considered the simplest cases of a polar ¹Σ molecule and a symmetric top molecule, here we consider an X²Π₃/₂ polar molecule (exemplified by the OH radical) which, by virtue of its nonzero electronic spin and orbital angular momenta, is, in addition, paramagnetic. We demonstrate entanglement tuning by evaluating the concurrence (and the requisite frequencies needed for gate operations) between two such molecules in the presence of varying electric and magnetic fields. Finally, we discuss the conditions required for achieving qubit addressability (transition frequency difference, Δω, as compared with the concomitant Stark and Zeeman broadening) and high fidelity. International Max Planck Research School - Functional Interfaces in Physics and Chemistry.
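
    The concurrence itself is computed with the standard Wootters formula; the sketch below is a generic two-qubit example, not the OH-specific calculation of the paper:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho:
    C = max(0, l1 - l2 - l3 - l4), where the l_i are the decreasing square
    roots of the eigenvalues of rho (sy x sy) rho^* (sy x sy)."""
    sy = np.array([[0, -1j], [1j, 0]])
    syy = np.kron(sy, sy)
    R = rho @ syy @ rho.conj() @ syy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Bell state (|01> - |10>)/sqrt(2): maximally entangled, so C = 1.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(concurrence(np.outer(psi, psi.conj())))   # -> 1.0
```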

  11. Massively parallel computation of RCS with finite elements

    NASA Technical Reports Server (NTRS)

    Parker, Jay

    1993-01-01

    One of the promising combinations of finite element approaches for scattering problems uses Whitney edge elements, spherical vector wave-absorbing boundary conditions, and bi-conjugate gradient solution for the frequency-domain near field. Each of these approaches may be criticized. Low-order elements require high mesh density, but also result in fast, reliable iterative convergence. Spherical wave-absorbing boundary conditions require additional space to be meshed beyond the most minimal near-space region, but result in fully sparse, symmetric matrices which keep storage and solution times low. Iterative solution is somewhat unpredictable and unfriendly to multiple right-hand sides, yet we find it to be uniformly fast on large problems to date, given the other two approaches. Implementation of these approaches on a distributed memory, message passing machine yields huge dividends, as full scalability to the largest machines appears assured and iterative solution times are well-behaved for large problems. We present times and solutions for computed RCS for a conducting cube and composite permeability/conducting sphere on the Intel iPSC/860 with up to 16 processors solving over 200,000 unknowns. We estimate that problems of approximately 10 million unknowns, encompassing 1000 cubic wavelengths, may be attempted on a currently available 512-processor machine, but would be exceedingly tedious to prepare. The most severe bottlenecks are due to the slow rate of mesh generation on non-parallel machines and the large transfer time from such a machine to the parallel processor. One solution, in progress, is to create and then distribute a coarse mesh among the processors, followed by systematic refinement within each processor. Elimination of redundant node definitions at the mesh-partition surfaces, snap-to-surface post processing of the resulting mesh for good modelling of curved surfaces, and load-balancing redistribution of new elements after the refinement are auxiliary

  12. Incorporating Knowledge of Legal and Ethical Aspects into Computing Curricula of South African Universities

    ERIC Educational Resources Information Center

    Wayman, Ian; Kyobe, Michael

    2012-01-01

    As students in computing disciplines are introduced to modern information technologies, numerous unethical practices also escalate. With the increase in stringent legislation on the use of IT, users of technology could easily be held liable for violations of this legislation. There is, however, a lack of understanding of the social aspects of computing, and…

  13. Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.

    ERIC Educational Resources Information Center

    Deaudelin, Colette; Dussault, Marc; Brodeur, Monique

    2003-01-01

    Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…

  15. A variational multiscale finite element method for monolithic ALE computations of shock hydrodynamics using nodal elements

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Scovazzi, G.

    2016-06-01

    We present a monolithic arbitrary Lagrangian-Eulerian (ALE) finite element method for computing highly transient flows with strong shocks. We use a variational multiscale (VMS) approach to stabilize a piecewise-linear Galerkin formulation of the equations of compressible flows, and an entropy artificial viscosity to capture strong solution discontinuities. Our work demonstrates the feasibility of VMS methods for highly transient shock flows, an area of research for which the VMS literature is extremely scarce. In addition, the proposed monolithic ALE method is an alternative to the more commonly used Lagrangian+remap methods, in which, at each time step, a Lagrangian computation is followed by mesh smoothing and remap (conservative solution interpolation). Lagrangian+remap methods are the methods of choice in shock hydrodynamics computations because they provide nearly optimal mesh resolution in proximity of shock fronts. However, Lagrangian+remap methods are not well suited for imposing inflow and outflow boundary conditions. These issues offer an additional motivation for the proposed approach, in which we first perform the mesh motion, and then the flow computations using the monolithic ALE framework. The proposed method is second-order accurate and stable, as demonstrated by extensive numerical examples in two and three space dimensions.

  16. Computer-integrated finite element modeling of human middle ear.

    PubMed

    Sun, Q; Gan, R Z; Chang, K-H; Dormer, K J

    2002-10-01

    The objective of this study was to produce an improved finite element (FE) model of the human middle ear and to compare the model with human data. We began with a systematic and accurate geometric modeling technique for reconstructing the middle ear from serial sections of a freshly frozen temporal bone. A geometric model of a human middle ear was constructed in a computer-aided design (CAD) environment with particular attention to geometry and microanatomy. Using the geometric model, a working FE model of the human middle ear was created using previously published material properties of middle ear components. This working FE model was finalized by a cross-calibration technique, comparing its predicted stapes footplate displacements with laser Doppler interferometry measurements from fresh temporal bones. The final FE model was shown to be reasonable in predicting the ossicular mechanics of the human middle ear. PMID:14595544

  17. Finite element computations of resonant modes for small magnetic particles

    NASA Astrophysics Data System (ADS)

    Forestiere, C.; d'Aquino, M.; Miano, G.; Serpico, C.

    2009-04-01

    The oscillations of a chain of ferromagnetic nanoparticles around a saturated spatially uniform equilibrium are analyzed by solving the linearized Landau-Lifshitz-Gilbert (LLG) equation. The linearized LLG equation is recast in the form of a generalized eigenvalue problem for suitable self-adjoint operators connected to the micromagnetic effective field, which accounts for exchange, magnetostatic, anisotropy, and Zeeman interactions. The generalized eigenvalue problem is solved numerically by the finite element method, which allows one to treat accurately complex geometries and preserves the structural properties of the continuum problem. The natural frequencies and the spatial distribution of the mode amplitudes are computed for chains composed of several nanoparticles (sphere and ellipsoid). The effects of the interaction between the nanoparticles and the limit of validity of the point dipole approximation are discussed.
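
    The final algebraic step is a generalized eigenvalue problem, which standard dense solvers handle directly. In the sketch below, random stand-in matrices replace the assembled finite element operators:

```python
import numpy as np
from scipy.linalg import eigh

# The discretized linearized problem takes the form A x = lambda * B x with
# symmetric A and symmetric positive-definite B (the "mass" matrix). Random
# SPD stand-ins replace the assembled FEM matrices here.
rng = np.random.default_rng(0)
n = 50
A = rng.random((n, n))
A = A @ A.T                                 # stand-in symmetric "stiffness"
B = rng.random((n, n))
B = B @ B.T + n * np.eye(n)                 # stand-in SPD "mass"

lam, modes = eigh(A, B)   # generalized symmetric-definite eigenproblem
print("lowest generalized eigenvalues:", lam[:3])
```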

  18. Impact of computer advances on future finite elements computations. [for aircraft and spacecraft design

    NASA Technical Reports Server (NTRS)

    Fulton, Robert E.

    1985-01-01

    Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and parallel processing associated with the Finite Element Machine project, both sponsored by NASA, and a near term strategy for a distributed structural analysis capability based on relational data base management software and parallel computers for a future structural analysis system.

  19. Matrix element method for high performance computing platforms

    NASA Astrophysics Data System (ADS)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    Considerable effort has been devoted by the ATLAS and CMS teams to improve the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS dataset at a moderate cost. In the article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfying metric for the upcoming Run 2. The future work will consist in finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.

  20. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. Paper evaluates the computational efficiency of different computer architectural types in terms of relative cost and computing time.

  1. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    NASA Astrophysics Data System (ADS)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases. Cardiovascular diseases result from complex flow patterns and fatigue of the vessel wall and are a prevalent cause of high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid Structure Interaction (FSI) have become efficient tools in modeling the individual hemodynamics and biomechanics, as well as their interaction, in the human arteries. The computations allow non-invasive simulation of the patient-specific physical parameters of the blood flow and the vessel wall needed for an efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact and individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governed by the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models providing a physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the

  2. A computer program for calculating aerodynamic characteristics of low aspect-ratio wings with partial leading-edge separation

    NASA Technical Reports Server (NTRS)

    Mehrotra, S. C.; Lan, C. E.

    1978-01-01

    The necessary information for using a computer program to predict distributed and total aerodynamic characteristics for low aspect ratio wings with partial leading-edge separation is presented. The flow is assumed to be steady and inviscid. The wing boundary condition is formulated by the Quasi-Vortex-Lattice method. The leading-edge separated vortices are represented by discrete free vortex elements which are aligned with the local velocity vector at their midpoints to satisfy the force-free condition. The wake behind the trailing edge is also force-free. The flow tangency boundary condition is satisfied on the wing, including the leading and trailing edges. The program is restricted to delta wings with zero thickness and no camber. It is written in FORTRAN and runs on the CDC 6600 computer.

  3. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element level computations that do not lend themselves easily to long vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that in order for pipeline computers to impact the economic feasibility of large nonlinear analyses it is absolutely essential that algorithms be devised to improve the efficiency of element level computations.

  4. A finite element method for the computation of transonic flow past airfoils

    NASA Technical Reports Server (NTRS)

    Eberle, A.

    1980-01-01

    A finite element method for the computation of the transonic flow with shocks past airfoils is presented, using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics, requiring special attention to the choice of an appropriate element. A series of computed pressure distributions exhibits the usefulness of the method.

  5. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the sizes of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are respectively the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which
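
    To illustrate the sub-triangle device described above, here is a minimal sketch (a generic illustration, not the code documented in this report): each triangle is split at its centroid and integrated with a midpoint rule on the pieces, so quadrature points are always centroids and never coincide with a vertex where the collocation point may sit.

      import numpy as np

      def split_at_centroid(tri):
          # Split a triangle (3x2 array of vertices) into three
          # sub-triangles sharing its centroid.
          c = tri.mean(axis=0)
          return [np.array([tri[i], tri[(i + 1) % 3], c]) for i in range(3)]

      def integrate(f, tri, depth):
          # Midpoint rule on recursively subdivided sub-triangles; the
          # evaluation point of each piece is its centroid.
          if depth == 0:
              a, b, c = tri
              area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
              return area * f(tri.mean(axis=0))
          return sum(integrate(f, t, depth - 1) for t in split_at_centroid(tri))

      # Weakly singular kernel 1/|x - x0| with x0 at a vertex of the panel:
      tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
      x0 = tri[0]
      print(integrate(lambda x: 1.0 / np.linalg.norm(x - x0), tri, depth=7))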

  6. Java Analysis Tools for Element Production Calculations in Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Lingerfelt, E.; Hix, W.; Guidry, M.; Smith, M.

    2002-12-01

    We are developing a set of extendable, cross-platform tools and interfaces using Java and vector graphic technologies such as SVG and SWF to facilitate element production calculations in computational astrophysics. The Java technologies are customizable and portable, and can be utilized as stand-alone applications or distributed across a network. These tools, which have broad applications in general scientific visualization, are currently being used to explore and analyze a large library of nuclear reaction rates and visualize results of explosive nucleosynthesis calculations with compact, high quality vector graphics. The facilities for reading and plotting nuclear reaction rates and their components from a network or library permit the user to easily include new rates and compare and adjust current ones. Sophisticated visualization and graphical analysis tools offer the ability to view results in an interactive, scalable vector graphics format, which leads to a dramatic (ten-fold) reduction in visualization file sizes while maintaining high visual quality and interactive control. ORNL Physics Division is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  7. Segment-based vs. element-based integration for mortar methods in computational contact mechanics

    NASA Astrophysics Data System (ADS)

    Farah, Philipp; Popp, Alexander; Wall, Wolfgang A.

    2015-01-01

    Mortar finite element methods provide a very convenient and powerful discretization framework for geometrically nonlinear applications in computational contact mechanics, because they allow for a variationally consistent treatment of contact conditions (mesh tying, non-penetration, frictionless or frictional sliding) despite the fact that the underlying contact surface meshes are non-matching and possibly also geometrically non-conforming. However, one of the major issues with regard to mortar methods is the design of adequate numerical integration schemes for the resulting interface coupling terms, i.e. curve integrals for 2D contact problems and surface integrals for 3D contact problems. The way how mortar integration is performed crucially influences the accuracy of the overall numerical procedure as well as the computational efficiency of contact evaluation. Basically, two different types of mortar integration schemes, which will be termed as segment-based integration and element-based integration here, can be found predominantly in the literature. While almost the entire existing literature focuses on either of the two mentioned mortar integration schemes without questioning this choice, the intention of this paper is to provide a comprehensive and unbiased comparison. The theoretical aspects covered here include the choice of integration rule, the treatment of boundaries of the contact zone, higher-order interpolation and frictional sliding. Moreover, a new hybrid scheme is proposed, which beneficially combines the advantages of segment-based and element-based mortar integration. Several numerical examples are presented for a detailed and critical evaluation of the overall performance of the different schemes within several well-known benchmark problems of computational contact mechanics.

  8. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  9. C-arm cone-beam computed tomography in interventional oncology: technical aspects and clinical applications

    PubMed Central

    Floridi, Chiara; Radaelli, Alessandro; Abi-Jaoudeh, Nadine; Grass, Michael; Lin, Ming De; Chiaradia, Melanie; Geschwind, Jean-Francois; Kobeiter, Hicham; Squillaci, Ettore; Maleux, Geert; Giovagnoni, Andrea; Brunese, Luca; Wood, Bradford; Carrafiello, Gianpaolo; Rotondo, Antonio

    2014-01-01

    C-arm cone-beam computed tomography (CBCT) is a new imaging technology integrated in modern angiographic systems. Due to its ability to obtain cross-sectional imaging and the possibility to use dedicated planning and navigation software, it provides an informed platform for interventional oncology procedures. In this paper, we highlight the technical aspects and clinical applications of CBCT imaging and navigation in the most common loco-regional oncological treatments. PMID:25012472

  10. Finite Element Technology In Forming Simulations - Theoretical Aspects And Practical Applications Of A New Solid-Shell Element

    SciTech Connect

    Schwarze, M.; Reese, S.

    2007-05-17

    Finite element simulations of sheet metal forming processes are highly non-linear problems. The non-linearity arises not only from the kinematical relations and the material formulation; the contact between the workpiece and the forming tools also leads to an increased number of iterations within the Newton-Raphson scheme. This fact puts high demands on the robustness of finite element formulations. For this reason we study the enhanced assumed strain (EAS) concept as proposed in [1]. The goal is to improve the robustness of the solid-shell formulation in deep drawing simulations.

  11. Surveying co-located space geodesy techniques for ITRF computation: statistical aspects

    NASA Astrophysics Data System (ADS)

    Sillard, P.; Sarti, P.; Vittuari, L.

    2003-04-01

    For two years, CNR (Italy) has been involved in a complete renovation of the way co-located Space Geodesy instruments are surveyed. Local ties are one of the most problematic parts of International Terrestrial Reference Frame (ITRF) computation, since the accuracy of Space Geodesy techniques has now reached the level of a few millimeters; everybody therefore agrees that local ties are among the most critical aspects of the ITRF computation. CNR has thus started a comprehensive reflection on the way local ties should be surveyed between Space Geodesy instruments. This reflection concerns the practical ground operations, the physical definition of a Space Geodesy instrument reference point (especially for VLBI), and the consequent adjustment of the results, as well as their publication. The first two aspects are covered in another presentation; the present one focuses on the last two points (statistics and publication). As Space Geodesy has now reached the mm level, local ties must be used in ITRF computation with a full variance-covariance matrix available for each site. The talk will present the way this variance can be derived, even when the reference point is implicitly defined, as for VLBI. Numerical examples will be given of the quality that can be reached through a rigorous statistical treatment of the new approach developed by CNR. Evidence of the significant improvement this brings to ITRF-type computation will also be given.

  12. ElemeNT: a computational tool for detecting core promoter elements.

    PubMed

    Sloutskin, Anna; Danino, Yehuda M; Orenstein, Yaron; Zehavi, Yonathan; Doniger, Tirza; Shamir, Ron; Juven-Gershon, Tamar

    2015-01-01

    Core promoter elements play a pivotal role in the transcriptional output, yet they are often detected manually within sequences of interest. Here, we present 2 contributions to the detection and curation of core promoter elements within given sequences. First, the Elements Navigation Tool (ElemeNT) is a user-friendly web-based, interactive tool for prediction and display of putative core promoter elements and their biologically-relevant combinations. Second, the CORE database summarizes ElemeNT-predicted core promoter elements near CAGE and RNA-seq-defined Drosophila melanogaster transcription start sites (TSSs). ElemeNT's predictions are based on biologically-functional core promoter elements, and can be used to infer core promoter compositions. ElemeNT does not assume prior knowledge of the actual TSS position, and can therefore assist in annotation of any given sequence. These resources, freely accessible at http://lifefaculty.biu.ac.il/gershon-tamar/index.php/resources, facilitate the identification of core promoter elements as active contributors to gene expression. PMID:26226151

  13. 01010000 01001100 01000001 01011001: Play Elements in Computer Programming

    ERIC Educational Resources Information Center

    Breslin, Samantha

    2013-01-01

    This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…

  14. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
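
    This is the behavior described by Amdahl's law; a minimal sketch (illustrative, not part of the project code) makes the limit concrete, with f the parallelizable fraction of the serial runtime and p the processor count:

      def amdahl_speedup(f, p):
          # T_par = (1 - f)*T_serial + f*T_serial/p, so the speedup
          # T_serial/T_par is bounded by 1/(1 - f) no matter how large p is.
          return 1.0 / ((1.0 - f) + f / p)

      # Even with 90% of the code parallelized, 32 processors give only ~7.8x.
      print(amdahl_speedup(0.9, 32))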

  15. Improved plug valve computer-aided design of plug element

    SciTech Connect

    Wordin, J.J.

    1990-02-01

    The purpose of this document is to present derivations of equations for the design of a plug valve and to present a computer program which performs the design calculations based on the derivations. The valve is based on a plug formed from a tractrix of revolution called a pseudosphere. It is of interest to be able to calculate various parameters for the plug for design purposes. For example, the surface area, volume, and center of gravity are important to determine friction and wear of the valve. A computer program in BASIC has been written to perform the design calculations. The appendix contains a computer program listing and verifications of results using approximation methods. A sample run is included along with necessary computer commands to run the program. 1 fig.
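
    As a hedged illustration of the kind of design calculation described (not the BASIC program itself): for a full pseudosphere of waist radius a, classical results give surface area 4*pi*a^2 and volume (2/3)*pi*a^3, and a short script can confirm them by integrating the tractrix profile numerically.

      import numpy as np
      from scipy.integrate import quad

      a = 1.0  # waist radius of the pseudosphere

      # One funnel of the tractrix generator: x(t) = a*sin(t), with
      # ds = a*cos(t)/sin(t) dt and dz = a*cos(t)**2/sin(t) dt, t in (0, pi/2].
      area_half, _ = quad(lambda t: 2*np.pi * a*np.sin(t) * a*np.cos(t)/np.sin(t),
                          1e-9, np.pi/2)
      vol_half, _ = quad(lambda t: np.pi * (a*np.sin(t))**2 * a*np.cos(t)**2/np.sin(t),
                         1e-9, np.pi/2)

      print(2*area_half, 4*np.pi*a**2)       # full surface area, both ~12.566
      print(2*vol_half, (2/3)*np.pi*a**3)    # full volume, both ~2.094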

  16. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
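
    A minimal sketch of approach (a) under assumed data (a small symmetric positive definite stiffness matrix K and load vector f): minimizing the strain energy functional 1/2 u'Ku - f'u by conjugate gradients requires only matrix-vector products and no factorization, which is what makes the energy-minimization route attractive for concurrent processing.

      import numpy as np

      def cg_energy_min(K, f, tol=1e-10, max_iter=200):
          # Minimize 0.5*u@K@u - f@u; the gradient is K@u - f, so the
          # minimizer solves K u = f without ever factorizing K.
          u = np.zeros_like(f)
          r = f - K @ u          # negative gradient (residual)
          d = r.copy()
          for _ in range(max_iter):
              Kd = K @ d
              alpha = (r @ r) / (d @ Kd)
              u += alpha * d
              r_new = r - alpha * Kd
              if np.linalg.norm(r_new) < tol:
                  break
              d = r_new + ((r_new @ r_new) / (r @ r)) * d
              r = r_new
          return u

      K = np.array([[4.0, 1.0], [1.0, 3.0]])  # assumed small SPD "stiffness"
      f = np.array([1.0, 2.0])
      print(cg_energy_min(K, f))  # agrees with np.linalg.solve(K, f)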

  17. Computational Modeling for the Flow Over a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun

    1999-01-01

    The flow over a multi-element airfoil is computed using two two-equation turbulence models. The computations are performed using the INS2D Navier-Stokes code for two angles of attack. Overset grids are used for the three-element airfoil. The computed results are compared with experimental data for the surface pressure, skin friction coefficient, and velocity magnitude. The computed surface quantities generally agree well with the measurements. The computed results reveal the possible existence of a mixing-layer-like region of flow next to the suction surface of the slat for both angles of attack.

  18. Some aspects of statistical distribution of trace element concentrations in biomedical samples

    NASA Astrophysics Data System (ADS)

    Majewska, U.; Braziewicz, J.; Banaś, D.; Kubala-Kukuś, A.; Góźdź, S.; Pajek, M.; Zadrożna, M.; Jaskóła, M.; Czyżewski, T.

    1999-04-01

    Concentrations of trace elements in biomedical samples were studied using X-ray fluorescence (XRF), total reflection X-ray fluorescence (TRXRF) and particle-induced X-ray emission (PIXE) methods. The analytical methods used were compared in terms of their detection limits and applicability for studying the trace elements in large populations of biomedical samples. As a result, the XRF and TRXRF methods were selected for the trace element concentration measurements in the urine and woman full-term placenta samples. The measured trace element concentration distributions were found to be strongly asymmetric and described by the logarithmic-normal distribution. Such a distribution is expected for a random sequential process, which realistically models the level of trace elements in the studied biomedical samples. The importance and consequences of this finding are discussed, especially in the context of comparison of the concentration measurements in different populations of biomedical samples.

  19. Computational aspects of zonal algorithms for solving the compressible Navier-Stokes equations in three dimensions

    NASA Technical Reports Server (NTRS)

    Holst, T. L.; Thomas, S. D.; Kaynak, U.; Gundy, K. L.; Flores, J.; Chaderjian, N. M.

    1985-01-01

    Transonic flow fields about wing geometries are computed using an Euler/Navier-Stokes approach in which the flow field is divided into several zones. The flow field immediately adjacent to the wing surface is resolved with fine grid zones and solved using a Navier-Stokes algorithm. Flow field regions removed from the wing are resolved with less finely clustered grid zones and are solved with an Euler algorithm. Computational issues associated with this zonal approach, including data base management aspects, are discussed. Solutions are obtained that are in good agreement with experiment, including cases with significant wind tunnel wall effects. Additional cases with significant shock induced separation on the upper wing surface are also presented.

  20. Numerical algorithms for finite element computations on arrays of microprocessors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1981-01-01

    The development of a multicolored successive over relaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical Red/Black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, but still enjoy the greater rate of convergence of the SOR method. The program solves a general second order self adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem that was solved using the six color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
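
    As a hedged sketch of the underlying idea (the classical two-color special case on a finite difference grid, not the six-color finite element program itself): with a red/black ordering, all points of one color have neighbors only of the other color, so each half-sweep of SOR is a set of independent, Jacobi-style updates that can run in parallel.

      import numpy as np

      def red_black_sor(f, h, omega=1.8, sweeps=500):
          # Solve -laplace(u) = f on the unit square, u = 0 on the boundary,
          # with the 5-point stencil on an (n+2)x(n+2) grid of spacing h.
          u = np.zeros_like(f)
          n = f.shape[0] - 2
          for _ in range(sweeps):
              for color in (0, 1):  # 0 = "red" points, 1 = "black" points
                  # All updates of one color are mutually independent.
                  for i in range(1, n + 1):
                      for j in range(1, n + 1):
                          if (i + j) % 2 == color:
                              gs = 0.25 * (u[i-1, j] + u[i+1, j] +
                                           u[i, j-1] + u[i, j+1] + h*h*f[i, j])
                              u[i, j] += omega * (gs - u[i, j])
          return u

      n = 31
      h = 1.0 / (n + 1)
      u = red_black_sor(np.ones((n + 2, n + 2)), h)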

  1. Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review

    PubMed Central

    Hussain, Mohsina

    2016-01-01

    The human body requires certain essential elements in small quantities and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets with developing preference towards refined diet and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways with influence on craniofacial development and growth and maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in preservation of oral health and progression of various oral diseases. PMID:27433374

  2. Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review.

    PubMed

    Bhattacharya, Preeti Tomar; Misra, Satya Ranjan; Hussain, Mohsina

    2016-01-01

    The human body requires certain essential elements in small quantities and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets with developing preference towards refined diet and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways with influence on craniofacial development and growth and maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in preservation of oral health and progression of various oral diseases. PMID:27433374

  3. Formulation and computational aspects of plasticity and damage models with application to quasi-brittle materials

    SciTech Connect

    Chen, Z.; Schreyer, H.L.

    1995-09-01

    The response of underground structures and transportation facilities under various external loadings and environments is critical for human safety as well as environmental protection. Since quasi-brittle materials such as concrete and rock are commonly used for underground construction, the constitutive modeling of these engineering materials, including post-limit behaviors, is one of the most important aspects in safety assessment. From experimental, theoretical, and computational points of view, this report considers the constitutive modeling of quasi-brittle materials in general and concentrates on concrete in particular. Based on the internal variable theory of thermodynamics, the general formulations of plasticity and damage models are given to simulate two distinct modes of microstructural changes, inelastic flow and degradation of material strength and stiffness, that identify the phenomenological nonlinear behaviors of quasi-brittle materials. The computational aspects of plasticity and damage models are explored with respect to their effects on structural analyses. Specific constitutive models are then developed in a systematic manner according to the degree of completeness. A comprehensive literature survey is made to provide the up-to-date information on prediction of structural failures, which can serve as a reference for future research.

  4. Finite element computer model of microwave heated ceramics

    SciTech Connect

    Liqiu Zhou; Gang Liu; Jian Zhou

    1995-12-31

    In this paper, a 3-D finite element model to simulate the heating pattern during microwave sintering of ceramics in a TE₁₀ⁿ single-mode rectangular cavity is described. A series of transient temperature profiles and heating rates of the ceramic cylinder and cubic sample were calculated versus different parameters such as thermal conductivity, dielectric loss factor, microwave power level, and microwave energy distribution. These numerical solutions may provide a better understanding of thermal runaway and solutions to microwave sintering of ceramics.

  5. Computational discovery of regulatory elements in a continuous expression space

    PubMed Central

    2012-01-01

    Approaches for regulatory element discovery from gene expression data usually rely on clustering algorithms to partition the data into clusters of co-expressed genes. Gene regulatory sequences are then mined to find overrepresented motifs in each cluster. However, this ad hoc partition rarely fits the biological reality. We propose a novel method called RED2 that avoids data clustering by estimating motif densities locally around each gene. We show that RED2 detects numerous motifs not detected by clustering-based approaches, and that most of these correspond to characterized motifs. RED2 can be accessed online through a user-friendly interface. PMID:23186104

  6. Elemental: a new framework for distributed memory dense matrix computations.

    SciTech Connect

    Romero, N.; Poulson, J.; Marker, B.; Hammond, J.; Van de Geijn, R.

    2012-02-14

    Parallelizing dense matrix computations to distributed memory architectures is a well-studied subject and generally considered to be among the best understood domains of parallel computing. Two packages, developed in the mid 1990s, still enjoy regular use: ScaLAPACK and PLAPACK. With the advent of many-core architectures, which may very well take the shape of distributed memory architectures within a single processor, these packages must be revisited since the traditional MPI-based approaches will likely need to be extended. Thus, this is a good time to review lessons learned since the introduction of these two packages and to propose a simple yet effective alternative. Preliminary performance results show the new solution achieves competitive, if not superior, performance on large clusters.

  7. Finite Element Models for Computing Seismic Induced Soil Pressures on Deeply Embedded Nuclear Power Plant Structures.

    SciTech Connect

    Xu, J.; Costantino, C.; Hofmayer, C.

    2006-06-26

    The paper discusses computations of seismic induced soil pressures using finite element models for deeply embedded and/or buried stiff structures such as those appearing in the conceptual designs of structures for advanced reactors.

  8. Computer modeling of batteries from non-linear circuit elements

    NASA Technical Reports Server (NTRS)

    Waaben, S.; Federico, J.; Moskowitz, I.

    1983-01-01

    A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.

  9. Computation of Schenberg response function by using finite element modelling

    NASA Astrophysics Data System (ADS)

    Frajuca, C.; Bortoli, F. S.; Magalhaes, N. S.

    2016-05-01

    Schenberg is a resonant-mass gravitational wave detector with a central operating frequency of 3200 Hz. Transducers located on the surface of the resonating sphere, according to a half-dodecahedron distribution, are used to monitor the strain amplitude. The development of mechanical impedance matchers that act by increasing the coupling of the transducers with the sphere is a major challenge because of the high frequency and small size involved. The objective of this work is to study the Schenberg response function obtained by finite element modeling (FEM). Finally, the result is compared with that of a simplified mass-spring model to verify whether the latter is suitable for determining the detector sensitivity; as a conclusion, both models give the same results.

  10. Experience with automatic, dynamic load balancing and adaptive finite element computation

    SciTech Connect

    Wheat, S.R.; Devine, K.D.; Maccabe, A.B.

    1993-10-01

    Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
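
    A minimal sketch of the neighborhood idea in its simplest form (a generic first-order diffusion balancer, not the library described above): every processor repeatedly exchanges a fraction of its load imbalance with its neighbors, and overlapping neighborhoods propagate balance globally.

      import numpy as np

      def diffusion_balance(load, neighbors, alpha=0.25, iters=100):
          # First-order diffusion: across every edge (p, q), move a fixed
          # fraction alpha of the load difference. Total load is conserved
          # because each edge term is antisymmetric in (p, q).
          load = np.asarray(load, dtype=float).copy()
          for _ in range(iters):
              delta = np.zeros_like(load)
              for p, nbrs in enumerate(neighbors):
                  for q in nbrs:
                      delta[p] += alpha * (load[q] - load[p])
              load += delta
          return load

      # Four processors in a ring, initially imbalanced -> all end near 2.5:
      print(diffusion_balance([10, 0, 0, 0], [[1, 3], [0, 2], [1, 3], [2, 0]]))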

  11. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  12. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    ERIC Educational Resources Information Center

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  13. X-ray microanalysis of cultured keratinocytes: methodological aspects and effects of the irritant sodium lauryl sulphate on elemental composition.

    PubMed

    Grängsjö, A; Pihl-Lundin, I; Lindberg, M; Roomans, G M

    2000-09-01

    Irritant substances have been shown to induce elemental changes in human and animal epidermal cells in situ. However, skin biopsies are a complicated experimental system and artefacts can be introduced by the anaesthesia necessary to take the biopsy. We therefore attempted to set up an experimental system for X-ray microanalysis (XRMA) consisting of cultured human keratinocytes. A number of methodological aspects were studied: different cell types, washing methods and different culture periods for the keratinocytes. It was also investigated whether the keratinocytes responded to exposure to sodium lauryl sulphate (SLS) with changes in their elemental composition. The concentrations of biologically important elements such as Na, Mg, P and K were different in HaCaT cells (a spontaneously immortalized non-tumorigenic cell line derived from adult human keratinocytes) compared to natural human epidermal keratinocytes. The washing procedure and time of culture influenced the intracellular elemental content, and rinsing with distilled water was preferred for further experiments. Changes in the elemental content in the HaCaT cells compatible with a pattern of cell injury followed by repair by cell proliferation were seen after treatment with 3.33 microM and 33 microM SLS. We conclude that XRMA is a useful tool for the study of functional changes in cultured keratinocytes, even though the preparation methods have to be strictly controlled. The method can conceivably be used for predicting effects of different chemicals on human skin. PMID:10971801

  14. A computer program for anisotropic shallow-shell finite elements using symbolic integration

    NASA Technical Reports Server (NTRS)

    Andersen, C. M.; Bowen, J. T.

    1976-01-01

    A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes). The triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, the geometric stiffness, the consistent mass, and the consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using a symbolic and algebraic manipulation language.

  15. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    SciTech Connect

    2010-05-11

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
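
    One of the summation-accuracy techniques alluded to above is compensated (Kahan) summation; a minimal sketch, not taken from the talk itself:

      def kahan_sum(xs):
          # Kahan compensated summation: carry the rounding error of each
          # addition in c and feed it back into the next term.
          s, c = 0.0, 0.0
          for x in xs:
              y = x - c
              t = s + y
              c = (t - s) - y   # the part of y lost when forming t
              s = t
          return s

      xs = [1.0] + [1e-16] * 10**6
      # Naive left-to-right summation stays at 1.0 because 1.0 + 1e-16
      # rounds back to 1.0; the compensated sum recovers 1 + 1e-10.
      print(sum(xs), kahan_sum(xs))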

  16. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    None

    2011-10-06

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.

  17. Computation of vibration mode elastic-rigid and effective weight coefficients from finite-element computer program output

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1991-01-01

    Post-processing algorithms are given to compute the vibratory elastic-rigid coupling matrices and the modal contributions to the rigid-body mass matrices and to the effective modal inertias and masses. Recomputation of the elastic-rigid coupling matrices for a change in origin is also described. A computational example is included. The algorithms can all be executed by using standard finite-element program eigenvalue analysis output with no changes to existing code or source programs.
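
    A hedged sketch of the kind of post-processing described (standard modal definitions, not the paper's exact algorithms): given the mass matrix M, mass-normalized mode shapes Phi, and rigid-body modes R from finite-element eigenvalue output, the elastic-rigid coupling (participation) matrix and the effective modal masses follow directly.

      import numpy as np
      from scipy.linalg import eigh

      def modal_effective_mass(M, Phi, R):
          # Phi: (n x m) mass-normalized mode shapes (Phi.T @ M @ Phi = I).
          # R:   (n x k) rigid-body modes about the chosen origin.
          L = Phi.T @ M @ R        # elastic-rigid coupling matrix (m x k)
          m_eff = L**2             # effective mass of each mode, per direction
          m_rigid = R.T @ M @ R    # rigid-body mass matrix (k x k)
          return L, m_eff, m_rigid

      # Tiny demo: two lumped masses; one rigid translation mode.
      M = np.diag([2.0, 1.0])
      K = np.array([[3.0, -1.0], [-1.0, 1.0]])
      w2, Phi = eigh(K, M)         # generalized problem gives Phi.T@M@Phi = I
      R = np.ones((2, 1))
      L, m_eff, m_rigid = modal_effective_mass(M, Phi, R)
      # With all modes included, the effective masses sum to the total mass:
      print(m_eff.sum(), m_rigid[0, 0])   # both 3.0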

  18. Modeling of Rolling Element Bearing Mechanics: Computer Program Updates

    NASA Technical Reports Server (NTRS)

    Ryan, S. G.

    1997-01-01

    The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.

  19. Contours identification of elements in a cone beam computed tomography for investigating maxillary cysts

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoară, Adrian; Roşu, Şerban; Cărligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within. This paper deals with the collective work of specialists in medicine and in applied mathematics in computer science on the elaboration and implementation of algorithms for dental 2D imagery.

  20. Multibody system dynamics for bio-inspired locomotion: from geometric structures to computational aspects.

    PubMed

    Boyer, Frédéric; Porez, Mathieu

    2015-04-01

    This article presents a set of generic tools for multibody system dynamics devoted to the study of bio-inspired locomotion in robotics. First, archetypal examples from the field of bio-inspired robot locomotion are presented to prepare the ground for further discussion. The general problem of locomotion is then stated. In considering this problem, we progressively draw a unified geometric picture of locomotion dynamics. For that purpose, we start from the model of discrete mobile multibody systems (MMSs) that we progressively extend to the case of continuous and finally soft systems. Beyond these theoretical aspects, we address the practical problem of the efficient computation of these models by proposing a Newton-Euler-based approach to efficient locomotion dynamics with a few illustrations of creeping, swimming, and flying. PMID:25811531

  1. Aspects of the history of 66095 based on trace elements in clasts and whole rock

    SciTech Connect

    Jovanovic, S.; Reed, G.W. Jr.

    1981-01-01

    Large fractions of Cl and Br associated with separated anorthositic and basaltic clasts and matrix from rusty rock 66095 are soluble in H₂O. Up to two orders of magnitude variation in concentrations of these elements in the breccia components and varying H₂O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H₂O to acid soluble Br, i.e. surface deposits vs possibly phosphate related Br, suggests no appreciable alteration in the original distributions of this element. Weak acid leaching dissolved approx. 50% or more of the phosphorus and of the remaining Cl from most of the breccia components. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P₂O₅ ratio found in 66095 whole rock and in a number of other Apollo 16 samples. No dependence on degree of brecciation is indicated. The clasts are typical of Apollo 16 rocks. Matrix leaching results and element concentrations suggest that apatite-whitlockite is a component of KREEP.

  2. Aspects of the history of 66095 based on trace elements in clasts and whole rock

    SciTech Connect

    Jovanovic, S.; Reed, G.W. Jr.

    1981-01-01

    Halogens, P, U and Na are reported in anorthositic and basaltic clasts and matrix from rusty rock 66095. Large fractions of Cl and Br associated with the separated phases from 66095 are soluble in H₂O. Up to two orders of magnitude variation in concentrations of these elements in the breccia components and varying H₂O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H₂O- to 0.1 M HNO₃-soluble Br in the various components suggests no appreciable alteration in the original distributions of this element in the breccia forming processes. Up to 50% or more of the phosphorus and of the non-H₂O-soluble Cl was dissolved from most of the breccia components by 0.1 M HNO₃. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P₂O₅ ratio found in 66095 whole rock and in a number of other Apollo 16 samples. Evidence that phosphates are the major P-phases in the breccia is based on the 0.1 M acid solubility of Cl and P in the matrix sample and on elemental concentrations which are consistent with those of KREEP.

  3. Finite element solution techniques for large-scale problems in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1987-01-01

    Element-by-element approximate factorization, implicit-explicit and adaptive implicit-explicit approximation procedures are presented for the finite-element formulations of large-scale fluid dynamics problems. The element-by-element approximation scheme totally eliminates the need for formation, storage and inversion of large global matrices. Implicit-explicit schemes, which are approximations to implicit schemes, substantially reduce the computational burden associated with large global matrices. In the adaptive implicit-explicit scheme, the implicit elements are selected dynamically based on element level stability and accuracy considerations. This scheme provides implicit refinement where it is needed. The methods are applied to various problems governed by the convection-diffusion and incompressible Navier-Stokes equations. In all cases studied, the results obtained are indistinguishable from those obtained by the implicit formulations.
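
    A minimal sketch of the element-by-element idea under assumed data (a 1D chain of two-node elements with unit stiffness): the global matrix is never formed or factorized; its action is accumulated element by element inside a matrix-free conjugate-gradient iteration.

      import numpy as np

      n_el = 10                      # assumed: 1D bar, two-node linear elements
      ke = np.array([[1.0, -1.0],    # element "stiffness" matrix
                     [-1.0, 1.0]])

      def matvec(u):
          # Element-by-element product K @ u: gather, multiply, scatter.
          v = np.zeros_like(u)
          for e in range(n_el):
              dofs = [e, e + 1]
              v[dofs] += ke @ u[dofs]
          return v

      def cg_matrix_free(b, fixed, tol=1e-12):
          # CG with the Dirichlet dof removed by masking (u[fixed] = 0).
          u = np.zeros_like(b); mask = np.ones_like(b); mask[fixed] = 0.0
          r = (b - matvec(u)) * mask; d = r.copy()
          while np.linalg.norm(r) > tol:
              q = matvec(d) * mask
              a = (r @ r) / (d @ q)
              u += a * d; r_new = r - a * q
              d = r_new + ((r_new @ r_new) / (r @ r)) * d; r = r_new
          return u

      b = np.zeros(n_el + 1); b[-1] = 1.0   # end load, left end fixed
      print(cg_matrix_free(b, fixed=[0]))   # linear displacement field, tip = 10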

  4. Computation of scattering matrix elements of large and complex shaped absorbing particles with multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang

    2015-05-01

    Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, light scattering properties of particles with absorption differ from those without absorption. Simple shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches to model the whole surface of the particle, hence computational resource needs increase much more slowly with the particle size parameter than in volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) method for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i and different orientations is studied.

  5. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  6. Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is a computer technique of structural stress analysis and developed in engineering mechanics. FEA has developed to investigate structural behavior of human bones over the past 40 years. When the faster computers have acquired, better FEA, using 3-dimensional computed tomography (CT) has been developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA to evaluate accuracy and reliability in human bones, and clinical application studies to assess fracture risk and effects of osteoporosis medication are overviewed. PMID:26309819

  7. Mixing characteristics of injector elements in liquid rocket engines - A computational study

    NASA Technical Reports Server (NTRS)

    Lohr, Jonathan C.; Trinh, Huu P.

    1992-01-01

    A computational study has been performed to better understand the mixing characteristics of liquid rocket injector elements. Variations in injector geometry as well as differences in injector element inlet flow conditions are among the areas examined in the study. Most results involve the nonreactive mixing of gaseous fuel with gaseous oxidizer but preliminary results are included that involve the spray combustion of oxidizer droplets. The purpose of the study is to numerically predict flowfield behavior in individual injector elements to a high degree of accuracy and in doing so to determine how various injector element properties affect the flow.

  8. Numerical Aspects of Eigenvalue and Eigenfunction Computations for Chaotic Quantum Systems

    NASA Astrophysics Data System (ADS)

    Bäcker, A.

    Summary: We give an introduction to some of the numerical aspects in quantum chaos. The classical dynamics of two-dimensional area-preserving maps on the torus is illustrated using the standard map and a perturbed cat map. The quantization of area-preserving maps given by their generating function is discussed and for the computation of the eigenvalues a computer program in Python is presented. We illustrate the eigenvalue distribution for two types of perturbed cat maps, one leading to COE and the other to CUE statistics. For the eigenfunctions of quantum maps we study the distribution of the eigenvectors and compare them with the corresponding random matrix distributions. The Husimi representation allows for a direct comparison of the localization of the eigenstates in phase space with the corresponding classical structures. Examples for a perturbed cat map and the standard map with different parameters are shown. Billiard systems and the corresponding quantum billiards are another important class of systems (which are also relevant to applications, for example in mesoscopic physics). We provide a detailed exposition of the boundary integral method, which is one important method to determine the eigenvalues and eigenfunctions of the Helmholtz equation. We discuss several methods to determine the eigenvalues from the Fredholm equation and illustrate them for the stadium billiard. The occurrence of spurious solutions is discussed in detail and illustrated for the circular billiard, the stadium billiard, and the annular sector billiard. We emphasize the role of the normal derivative function to compute the normalization of eigenfunctions, momentum representations or autocorrelation functions in a very efficient and direct way. Some examples for these quantities are given and discussed.
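
    As a small illustration of the first ingredient mentioned above (classical dynamics of area-preserving maps on the torus; an illustrative sketch, not the program presented in the text): the Chirikov standard map can be iterated in a few lines, with the kicking strength K controlling the transition from invariant curves to large-scale chaos.

      import numpy as np

      def standard_map(theta, p, K, n_steps):
          # Chirikov standard map on the unit 2-torus (area-preserving),
          # in the common convention
          #   p'     = p + (K / 2*pi) * sin(2*pi*theta)   (mod 1)
          #   theta' = theta + p'                          (mod 1)
          orbit = np.empty((n_steps, 2))
          for i in range(n_steps):
              p = (p + K / (2*np.pi) * np.sin(2*np.pi*theta)) % 1.0
              theta = (theta + p) % 1.0
              orbit[i] = theta, p
          return orbit

      # Small K: mostly invariant curves; K of order 5: large-scale chaos.
      orbit = standard_map(theta=0.3, p=0.2, K=2.5, n_steps=5000)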

  9. On a 3-D singularity element for computation of combined mode stress intensities

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Kathiresan, K.

    1976-01-01

    A special three-dimensional singularity element is developed for the computation of combined mode 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three dimensional linear elastic fracture problems. The method employs a displacement-hybrid finite element model, based on a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square root singularity in strains and stresses, where the stress-intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved directly along with the unknown nodal displacements.

  10. A finite element computational method for high Reynolds number laminar flows

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook

    1987-01-01

    A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for the Navier-Stokes equations is presented. In the method, the velocity variables are interpolated using complete quadratic shape functions, and the pressure is interpolated using linear shape functions which are defined on a triangular element for the two-dimensional case and on a tetrahedral element for the three-dimensional case. The triangular element and the tetrahedral element are contained inside the complete bi- and tri-quadratic elements for velocity variables for two and three dimensional cases, respectively, so that the pressure is discontinuous across the element boundaries. Example problems considered include: a cavity flow of Reynolds numbers 400 through 10,000; a laminar backward facing step flow; and a laminar flow in a square duct of strong curvature. The computational results compared favorably with the finite difference computational results and/or experimental data available. It was found that the present method can capture the delicate pressure driven recirculation zones, that the method did not yield any spurious pressure modes, and that the method requires fewer grid points than the finite difference methods to obtain comparable computational results.

  11. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
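
    The semianalytical idea carries over to a static model problem, which is easier to show compactly: differentiate K(p)u = f(p) analytically and approximate the matrix derivatives by finite differences. The sketch below illustrates that idea under these simplifying assumptions (static case, direct solve), not the thesis' transient, reduced-basis implementation; all names are hypothetical:

      import numpy as np

      def semianalytical_sensitivity(K_of_p, f_of_p, p, dp=1e-6):
          # direct differentiation of K(p) u = f(p); the coefficient matrices are
          # differentiated by central finite differences (the "semianalytical" step)
          K = K_of_p(p)
          u = np.linalg.solve(K, f_of_p(p))
          dK = (K_of_p(p + dp) - K_of_p(p - dp)) / (2 * dp)
          df = (f_of_p(p + dp) - f_of_p(p - dp)) / (2 * dp)
          du = np.linalg.solve(K, df - dK @ u)   # pseudo-load; K could be refactorized once
          return u, du

      # two-spring chain whose stiffness depends on the design variable p
      K_of_p = lambda p: np.array([[2.0 * p, -p], [-p, p]])
      f_of_p = lambda p: np.array([0.0, 1.0])
      u, du = semianalytical_sensitivity(K_of_p, f_of_p, p=2.0)
      print(u, du)   # for this example du = -u/p, which the code reproduces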

  12. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
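
    The damped Newton update central to the paper can be sketched as follows; the optimal selection of the damping parameter described above is replaced here, for brevity, by a simple residual-reduction backtracking rule (an assumption, not the paper's rule):

      import numpy as np

      def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
          # Newton iteration with a damping factor applied to each step
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              r = F(x)
              if np.linalg.norm(r) < tol:
                  break
              dx = np.linalg.solve(J(x), -r)
              lam = 1.0
              while lam > 1e-4 and np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(r):
                  lam *= 0.5                 # back off until the residual decreases
              x = x + lam * dx
          return x

      # example: a mildly nonlinear 2x2 system with solution (1, 1)
      F = lambda x: np.array([x[0]**3 - x[1], x[0] + x[1] - 2.0])
      J = lambda x: np.array([[3.0 * x[0]**2, -1.0], [1.0, 1.0]])
      print(damped_newton(F, J, [2.0, 0.0]))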

  13. Computational local stiffness analysis of biological cell: High aspect ratio single wall carbon nanotube tip.

    PubMed

    TermehYousefi, Amin; Bagheri, Samira; Shahnazar, Sheida; Rahman, Md Habibur; Kadri, Nahrizul Adib

    2016-02-01

    Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to their robust mechanical properties, nanoscale diameter and their ability to be functionalized by chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as an AFM tip in computational analysis of biological cells. The software used was ABAQUS 6.13 CAE/CEL provided by Dassault Systèmes, a powerful finite element (FE) tool for performing the numerical analysis and visualizing the interactions between the proposed tip and the cell membrane. Finite element analysis was employed for each section, and the displacement of the nodes located in the contact area was monitored using an output database (ODB). A Mooney-Rivlin hyperelastic model of the cell allows the simulation to provide a new method for estimating the stiffness and spring constant of the cell. The stress-strain curve indicates the yield stress point, defined in terms of vertical stress and plane stress. The spring constant and local stiffness of the cell were determined, as well as the force applied by the CNT-AFM tip on the contact area of the cell. This reliable integration of the CNT-AFM tip process provides a new class of high-performance nanoprobes for single biological cell analysis. PMID:26652417

  14. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.
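
    The element-level elimination argument can be made concrete in the linear case, where eliminating interior unknowns with the nodal values held fixed is classical static condensation; the nonlinear version described above would replace the inner solve with a small Newton iteration per element. A minimal sketch with hypothetical names:

      import numpy as np

      def condense_element(K, f, boundary, interior):
          # eliminate element-interior unknowns with the nodal (boundary)
          # values held fixed; each element can be condensed independently,
          # which is what makes the scheme attractive on parallel machines
          Kbb = K[np.ix_(boundary, boundary)]
          Kbi = K[np.ix_(boundary, interior)]
          Kib = K[np.ix_(interior, boundary)]
          Kii = K[np.ix_(interior, interior)]
          S = Kbb - Kbi @ np.linalg.solve(Kii, Kib)   # condensed element matrix
          g = f[boundary] - Kbi @ np.linalg.solve(Kii, f[interior])
          return S, g

      K = np.diag([4.0, 3.0, 2.0, 5.0]) + np.full((4, 4), 0.5)
      f = np.array([1.0, 0.0, 0.0, 2.0])
      S, g = condense_element(K, f, boundary=[0, 1], interior=[2, 3])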

  15. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible-mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.
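
    For reference, the single-equation VCCT estimates underlying such computations have the following form for a node pair straddling the delamination front; this is a generic textbook sketch (uniform mesh assumed, names illustrative), not the specific post-processing routine used in the paper:

      def vcct_rates(Fx, Fy, Fz, du, dv, dw, da, b):
          # F* - nodal forces at the crack tip; d* - relative displacements of
          # the node pair one element behind the tip; da - element length at
          # the front; b - width attributed to the node pair
          area = 2.0 * da * b
          g2 = Fx * du / area   # shearing mode (mode II)
          g1 = Fy * dv / area   # opening mode (mode I)
          g3 = Fz * dw / area   # tearing mode (mode III)
          return g1, g2, g3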

  16. A New Finite Element Approach for Prediction of Aerothermal Loads: Progress in Inviscid Flow Computations

    NASA Technical Reports Server (NTRS)

    Bey, K. S.; Thornton, E. A.; Dechaumphai, P.; Ramakrishnan, R.

    1985-01-01

    Recent progress in the development of finite element methodology for the prediction of aerothermal loads is described. Two dimensional, inviscid computations are presented, but emphasis is placed on development of an approach extendable to three dimensional viscous flows. Research progress is described for: (1) utilization of a commerically available program to construct flow solution domains and display computational results, (2) development of an explicit Taylor-Galerkin solution algorithm, (3) closed form evaluation of finite element matrices, (4) vector computer programming strategies, and (5) validation of solutions. Two test problems of interest to NASA Langley aerothermal research are studied. Comparisons of finite element solutions for Mach 6 flow with other solution methods and experimental data validate fundamental capabilities of the approach for analyzing high speed inviscid compressible flows.

  17. A different aspect to use of some soft computing methods for landslide susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Akgün, Aykut

    2014-05-01

    In the landslide literature, several applications of soft computing methods such as artificial neural networks (ANN), fuzzy inference systems, and decision trees for landslide susceptibility mapping can be found. In many of these studies, the effectiveness and validation of the models used are also discussed. To carry out the analyses, more than one software package, for example a statistical package and a geographical information system (GIS), is generally used. In this study, four different soft computing techniques were applied to obtain landslide susceptibility maps using only one GIS software package. For this purpose, Multi-Layer Perceptron (MLP) back-propagation neural network, Fuzzy Adaptive Resonance Theory (ARTMAP) neural network, Self-Organizing Map (SOM) and Classification Tree Analysis (CTA) approaches were applied to the study area. The study area was selected from a part of Trabzon (North Turkey) city, which is one of the most landslide-prone areas in Turkey. Initially, five landslide conditioning parameters, namely lithology, slope gradient, slope aspect, stream power index (SPI), and topographical wetness index (TWI), were produced for the study area in GIS. Then, these parameters were analysed with the MLP, Fuzzy ARTMAP, SOM and CTA soft computing classifiers of the IDRISI Taiga GIS and remote sensing software. To accomplish the analyses, two main input groups are needed: conditioning parameters and training areas. For the training areas, the landslide inventory map, obtained from both field studies and topographical analyses, was compared with the lithological unit classes. With the help of this comparison, frequency ratio (FR) values of landslide occurrence in the study area were determined. Using the FR values, five landslide susceptibility classes were differentiated from the lowest FR to the highest FR values. After this differentiation, the training areas representing the landslide susceptibility classes were determined by using FR
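
    The frequency ratio computation described above reduces to a few lines; the sketch below uses hypothetical raster inputs and is only meant to fix the definition (share of landslide cells in a class divided by the share of all cells in that class):

      import numpy as np

      def frequency_ratio(class_map, landslide_mask):
          # FR of a class = (landslide cells in class / all landslide cells) /
          #                 (cells in class / all cells); FR > 1 is landslide-prone
          fr = {}
          for c in np.unique(class_map):
              in_class = class_map == c
              pct_slide = landslide_mask[in_class].sum() / landslide_mask.sum()
              pct_area = in_class.sum() / class_map.size
              fr[c] = pct_slide / pct_area
          return fr

      rng = np.random.default_rng(0)
      classes = rng.integers(1, 6, size=(100, 100))   # hypothetical 5-class raster
      slides = rng.random((100, 100)) < 0.02          # hypothetical inventory mask
      print(frequency_ratio(classes, slides))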

  18. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  19. Determination of an Initial Mesh Density for Finite Element Computations via Data Mining

    SciTech Connect

    Kanapady, R; Bathina, S K; Tamma, K K; Kamath, C; Kumar, V

    2001-07-23

    Numerical analysis software packages which employ a coarse first mesh or an inadequate initial mesh need to undergo cumbersome and time-consuming mesh refinement studies to obtain solutions with acceptable accuracy. Hence, it is critical for numerical methods such as finite element analysis to be able to determine a good initial mesh density for the subsequent finite element computations or as an input to a subsequent adaptive mesh generator. This paper explores the use of data mining techniques for obtaining an approximate initial finite element mesh density that avoids significant trial and error when starting finite element computations. As an illustration of proof of concept, a square plate which is simply supported at its edges and subjected to a concentrated load is employed as the test case. Although simplistic, the present study provides insight into addressing the above considerations.

  20. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.

  1. Numerical Aspects of Nonhydrostatic Implementations Applied to a Parallel Finite Element Tsunami Model

    NASA Astrophysics Data System (ADS)

    Fuchs, A.; Androsov, A.; Harig, S.; Hiller, W.; Rakowsky, N.

    2012-04-01

    Given the danger of devastating tsunamis and the unpredictability of such events, tsunami modelling as part of warning systems remains a contemporary topic. The tsunami group of the Alfred Wegener Institute developed the simulation tool TsunAWI as a contribution to the Early Warning System in Indonesia. Although the precomputed scenarios for this purpose already qualify as satisfactory deliverables, the study of further improvements continues. While TsunAWI is governed by the Shallow Water Equations, an extension of the model is based on a nonhydrostatic approach. At the arrival of a tsunami wave in coastal regions with rough bathymetry, the term containing the nonhydrostatic part of the pressure, which is neglected in the original hydrostatic model, gains in importance. Taking this term into account, a better approximation of the wave is expected. Differences between hydrostatic and nonhydrostatic model results are contrasted in the standard benchmark problem of a solitary wave runup on a plane beach, with the observation data provided by Titov and Synolakis (1995) serving as reference. The nonhydrostatic approach implies a set of equations that are similar to the Shallow Water Equations, so the variation of the code can be implemented on top. However, these additional routines raise a number of issues that must be addressed. So far, the computations of the model were purely explicit. In the nonhydrostatic version, the determination of an additional unknown and the solution of a large sparse system of linear equations are necessary. The latter constitutes the lion's share of computing time and memory requirements. Since the corresponding matrix is symmetric only in structure and not in values, an iterative Krylov subspace method is used, in particular the restarted Generalized Minimal Residual algorithm GMRES(m). With regard to optimization, we present a comparison of several combinations of sequential and parallel preconditioning techniques with respect to the number of iterations and setup
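
    In Python, the building blocks for such a solver strategy are readily available; the sketch below applies restarted GMRES(m) with an incomplete-LU preconditioner to a stand-in unsymmetric sparse system (the actual TsunAWI matrix and preconditioners are, of course, different):

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import gmres, spilu, LinearOperator

      # a small unsymmetric (convection-diffusion-like) system standing in for
      # the nonhydrostatic pressure equation
      n = 1000
      A = diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format='csc')
      b = np.ones(n)

      ilu = spilu(A)                           # incomplete-LU factorization
      M = LinearOperator(A.shape, ilu.solve)   # used as preconditioner
      x, info = gmres(A, b, restart=30, M=M)   # GMRES(m) with restart m = 30
      print(info)                              # 0 signals successful convergence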

  2. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    NASA Technical Reports Server (NTRS)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account: a computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to be several small workstations networked together, as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  3. ParCYCLIC: finite element modelling of earthquake liquefaction response on parallel computers

    NASA Astrophysics Data System (ADS)

    Peng, Jun; Lu, Jinchi; Law, Kincho H.; Elgamal, Ahmed

    2004-10-01

    This paper presents the computational procedures and solution strategy employed in ParCYCLIC, a parallel non-linear finite element program developed based on an existing serial code CYCLIC for the analysis of cyclic seismically-induced liquefaction problems. In ParCYCLIC, finite elements are employed within an incremental plasticity, coupled solid-fluid formulation. A constitutive model developed for simulating liquefaction-induced deformations is a main component of this analysis framework. The elements of the computational strategy, designed for distributed-memory message-passing parallel computer systems, include: (a) an automatic domain decomposer to partition the finite element mesh; (b) nodal ordering strategies to minimize storage space for the matrix coefficients; (c) an efficient scheme for the allocation of sparse matrix coefficients among the processors; and (d) a parallel sparse direct solver. Application of ParCYCLIC to simulate 3-D geotechnical experimental models is demonstrated. The computational results show excellent parallel performance and scalability of ParCYCLIC on parallel computers with a large number of processors.

  4. Learning the Lexical Aspects of a Second Language at Different Proficiencies: A Neural Computational Study

    ERIC Educational Resources Information Center

    Cuppini, Cristiano; Magosso, Elisa; Ursino, Mauro

    2013-01-01

    We present an original model designed to study how a second language (L2) is acquired in bilinguals at different proficiencies starting from an existing L1. The model assumes that the conceptual and lexical aspects of languages are stored separately: conceptual aspects in distinct topologically organized Feature Areas, and lexical aspects in a…

  5. Computing forces on interface elements exerted by dislocations in an elastically anisotropic crystalline material

    NASA Astrophysics Data System (ADS)

    Liu, B.; Arsenlis, A.; Aubry, S.

    2016-06-01

    Driven by the growing interest in numerical simulations of dislocation–interface interactions in general crystalline materials with elastic anisotropy, we develop algorithms for the integration of interface tractions needed to couple dislocation dynamics with a finite element or boundary element solver. The dislocation stress fields in elastically anisotropic media are made analytically accessible through the spherical harmonics expansion of the derivative of Green’s function, and analytical expressions for the forces on interface elements are derived by analytically integrating the spherical harmonics series recursively. Compared with numerical integration by Gaussian quadrature, the newly developed analytical algorithm for interface traction integration is highly beneficial in terms of both computation precision and speed.

  6. Self-Consistent Large-Scale Magnetosphere-Ionosphere Coupling: Computational Aspects and Experiments

    NASA Technical Reports Server (NTRS)

    Newman, Timothy S.

    2003-01-01

    Both external and internal phenomena impact the terrestrial magnetosphere. For example, solar wind and particle precipitation affect the distribution of hot plasma in the magnetosphere. Numerous models exist to describe different aspects of magnetosphere characteristics. For example, Tsyganenko has developed a series of models (e.g., [TSYG89]) that describe the magnetic field, and Stern [STER75] and Volland [VOLL73] have developed analytical models that describe the convection electric field. Over the past several years, NASA colleague Khazanov, working with Fok and others, has developed a large-scale coupled model that tracks particle flow to determine hot ion and electron phase space densities in the magnetosphere. This model utilizes external data such as solar wind densities and velocities and geomagnetic indices (e.g., Kp) to drive computational processes that evaluate magnetic field, electric field, and plasma sheet models at any time point. These models are coupled such that energetic ion and electron fluxes are produced, with those fluxes capable of interacting with the electric field model. A diagrammatic representation of the coupled model is shown.

  7. 3D parallel computations of turbofan noise propagation using a spectral element method

    NASA Astrophysics Data System (ADS)

    Taghaddosi, Farzad

    2006-12-01

    A three-dimensional code has been developed for the simulation of tone noise generated by turbofan engine inlets using computational aeroacoustics. The governing equations are the linearized Euler equations, which are further simplified to a set of equations in terms of the acoustic potential, using the irrotational flow assumption, and subsequently solved in the frequency domain. Due to the special nature of acoustic wave propagation, the spatial discretization is performed using a spectral element method, where a tensor product of nth-degree polynomials based on Chebyshev orthogonal functions is used to approximate variations within hexahedral elements. Non-reflecting boundary conditions are imposed at the far field using a damping-layer concept. This is done by augmenting the continuity equation with an additional term, without modifying the governing equations as in PML methods. Solution of the linear system of equations for the acoustic problem is based on the Schur complement method, which is a nonoverlapping domain decomposition technique. The Schur matrix is first solved using a matrix-free iterative method, whose convergence is accelerated with a novel local preconditioner. The solution in the entire domain is then obtained by finding solutions in smaller subdomains. The 3D code also contains a mean flow solver based on the full potential equation in order to take into account the effects of flow variations around the nacelle on the scattering of the radiated sound field. All aspects of the numerical simulations, including building and assembling the coefficient matrices, implementation of the Schur complement method, and solution of the system of equations for both the acoustic and mean flow problems, are performed on multiprocessors in parallel using the resources of the CLUMEQ Supercomputer Center. A large number of test cases are presented, ranging in size from 100 000 to 2 000 000 unknowns, for which, depending on the size of the problem, between 8 and 48 CPUs are
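
    The tensor-product construction of the basis can be sketched in a few lines; the version below evaluates raw Chebyshev polynomials on the reference hexahedron and is only an illustration of the tensor-product idea, not the code's actual basis:

      import numpy as np
      from numpy.polynomial.chebyshev import chebvander

      def hex_basis(xi, eta, zeta, n):
          # values of the (n+1)^3 tensor-product Chebyshev basis functions at a
          # point (xi, eta, zeta) of the reference hexahedron [-1, 1]^3
          tx = chebvander(np.atleast_1d(xi), n)[0]
          ty = chebvander(np.atleast_1d(eta), n)[0]
          tz = chebvander(np.atleast_1d(zeta), n)[0]
          return np.einsum('i,j,k->ijk', tx, ty, tz)

      vals = hex_basis(0.3, -0.1, 0.7, n=6)   # 7 x 7 x 7 basis values
      print(vals.shape)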

  8. A FORTRAN computer code for calculating flows in multiple-blade-element cascades

    NASA Technical Reports Server (NTRS)

    Mcfarland, E. R.

    1985-01-01

    A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.

  9. A Computational and Experimental Study of Nonlinear Aspects of Induced Drag

    NASA Technical Reports Server (NTRS)

    Smith, Stephen C.

    1996-01-01

    Despite the 80-year history of classical wing theory, considerable research has recently been directed toward planform and wake effects on induced drag. Nonlinear interactions between the trailing wake and the wing offer the possibility of reducing drag. The nonlinear effect of compressibility on induced drag characteristics may also influence wing design. This thesis deals with the prediction of these nonlinear aspects of induced drag and ways to exploit them. A potential benefit of only a few percent of the drag represents a large fuel savings for the world's commercial transport fleet. Computational methods must be applied carefully to obtain accurate induced drag predictions. Trefftz-plane drag integration is far more reliable than surface pressure integration, but is very sensitive to the accuracy of the force-free wake model. The practical use of Trefftz plane drag integration was extended to transonic flow with the Tranair full-potential code. The induced drag characteristics of a typical transport wing were studied with Tranair, a full-potential method, and A502, a high-order linear panel method to investigate changes in lift distribution and span efficiency due to compressibility. Modeling the force-free wake is a nonlinear problem, even when the flow governing equation is linear. A novel method was developed for computing the force-free wake shape. This hybrid wake-relaxation scheme couples the well-behaved nature of the discrete vortex wake with viscous-core modeling and the high-accuracy velocity prediction of the high-order panel method. The hybrid scheme produced converged wake shapes that allowed accurate Trefftz-plane integration. An unusual split-tip wing concept was studied for exploiting nonlinear wake interaction to reduced induced drag. This design exhibits significant nonlinear interactions between the wing and wake that produced a 12% reduction in induced drag compared to an equivalent elliptical wing at a lift coefficient of 0.7. The

  10. Interactive computer graphic surface modeling of three-dimensional solid domains for boundary element analysis

    NASA Technical Reports Server (NTRS)

    Perucchio, R.; Ingraffea, A. R.

    1984-01-01

    The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.

  11. COYOTE: a finite-element computer program for nonlinear heat-conduction problems

    SciTech Connect

    Gartling, D.K.

    1982-10-01

    COYOTE is a finite element computer program designed for the solution of two-dimensional, nonlinear heat conduction problems. The theoretical and mathematical basis used to develop the code is described. Program capabilities and complete user instructions are presented. Several example problems are described in detail to demonstrate the use of the program.

  12. Automatic data generation scheme for finite-element method /FEDGE/ - Computer program

    NASA Technical Reports Server (NTRS)

    Akyuz, F.

    1970-01-01

    Algorithm provides for automatic input data preparation for the analysis of continuous domains in the fields of structural analysis, heat transfer, and fluid mechanics. The computer program utilizes the natural coordinate systems concept and the finite element method for data generation.

  13. Spectral element computation of high-frequency leaky modes in three-dimensional solid waveguides

    NASA Astrophysics Data System (ADS)

    Treyssède, F.

    2016-06-01

    A numerical method is proposed to compute high-frequency low-leakage modes in structural waveguides surrounded by infinite solid media. In order to model arbitrary shape structures, a waveguide formulation is used, which consists of applying to the elastodynamic equilibrium equations a space Fourier transform along the waveguide axis and then a discretization method to the cross-section coordinates. However several numerical issues must be faced related to the unbounded nature of the cross-section, the number of degrees of freedom required to achieve an acceptable error in the high-frequency regime as well as the number of modes to compute. In this paper, these issues are circumvented by applying perfectly matched layers (PML) along the cross-section directions, a high-order spectral element method for the discretization of the cross-section, and an eigensolver shift suited for the computation of low-leakage modes. First, computations are performed for an embedded cylindrical bar, for which literature results are available. The proposed PML waveguide formulation yields good agreement with literature results, even in the case of weak impedance contrast. Its performance with high-order spectral elements is assessed in terms of convergence and accuracy and compared to traditional low-order finite elements. Then, computations are performed for an embedded square bar. Dispersion curves exhibit strong similarities with cylinders. These results show that the properties of low-leakage modes observed in cylindrical bars can also occur in other types of geometry.

  14. Computational Modeling For The Transitional Flow Over A Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun; Rumsey, Chris L. (Technical Monitor)

    2000-01-01

    The transitional flow over a multi-element airfoil in a landing configuration is computed using a two-equation transition model. The transition model is predictive in the sense that the transition onset is a result of the calculation and no prior knowledge of the transition location is required. The computations were performed using the INS2D Navier-Stokes code. Overset grids are used for the three-element airfoil. The airfoil operating conditions are varied over a range of angles of attack and for two different Reynolds numbers of 5 million and 9 million. The computed results are compared with experimental data for the surface pressure, skin friction, transition onset location, and velocity magnitude. In general, the comparison shows good agreement with the experimental data.

  15. STARS: An integrated general-purpose finite element structural, aeroelastic, and aeroservoelastic analysis computer program

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.

    1991-01-01

    The details of an integrated general-purpose finite element structural analysis computer program which is also capable of solving complex multidisciplinary problems is presented. Thus, the SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines are also available pertaining to a number of terminals.

  16. Experimental and Computational Investigation of Lift-Enhancing Tabs on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil has been conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. An NACA 63(2)-215 ModB airfoil with a 30% chord Fowler flap was tested in the NASA Ames 7- by 10-Foot Wind Tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. A combination of tabs located at the main element and flap trailing edges increased the airfoil lift coefficient by 11% relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of 0 deg, and C(sub l,max) was increased by 3%. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predicted all of the trends observed in the experimental data quite well. In addition, a simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the airfoil or flap trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.
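
    The kernel of such a point-vortex model is the induced velocity field, sketched below; the paper's sensitivity relationships, which place the vortex at the trailing edge and differentiate the resulting flow, are not reproduced here:

      import numpy as np

      def point_vortex_velocity(gamma, xv, pts):
          # velocity induced at pts by a 2-D point vortex of circulation gamma
          # at xv; the tangential speed decays as gamma / (2 pi r)
          d = pts - xv
          r2 = np.sum(d**2, axis=-1, keepdims=True)
          return gamma / (2.0 * np.pi) * np.stack([-d[..., 1], d[..., 0]], axis=-1) / r2

      pts = np.array([[1.0, 0.0], [0.0, 2.0]])
      print(point_vortex_velocity(1.0, np.array([0.0, 0.0]), pts))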

  17. STARS: A general-purpose finite element computer program for analysis of engineering structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  18. Applications of Parallel Computation in Micro-Mechanics and Finite Element Method

    NASA Technical Reports Server (NTRS)

    Tan, Hui-Qian

    1996-01-01

    This project discusses the application of parallel computation to material analyses. Briefly speaking, we analyze a material by element computations; we call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure. The detailed structure is given later in this paper. It is obvious that the problem is "well-structured", so a SIMD machine is a natural choice. In this paper we look into the potential of SIMD machines in dealing with finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar is discussed, together with a brief review of the parallel programming language MPL. In section 3, some general parallel algorithms that might be useful to the project are proposed, and, in combination with these algorithms, some features of MPL are discussed in more detail. In section 4, the computational structure of the cell/subcell model is given and the idea behind the design of the parallel algorithm for the model is demonstrated. Finally, in section 5, a summary is given.

  19. COYOTE II - a finite element computer program for nonlinear heat conduction problems. Part I - theoretical background

    SciTech Connect

    Gartling, D.K.; Hogan, R.E.

    1994-10-01

    The theoretical and numerical background for the finite element computer program, COYOTE II, is presented in detail. COYOTE II is designed for the multi-dimensional analysis of nonlinear heat conduction problems and other types of diffusion problems. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in COYOTE II are also outlined. Instructions for use of the code are documented in SAND94-1179; examples of problems analyzed with the code are provided in SAND94-1180.

  20. Level set discrete element method for three-dimensional computations with triaxial case study

    NASA Astrophysics Data System (ADS)

    Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.

    2016-06-01

    In this paper, we outline the level set discrete element method (LS-DEM) which is a discrete element method variant able to simulate systems of particles with arbitrary shape using level set functions as a geometric basis. This unique formulation allows seamless interfacing with level set-based characterization methods as well as computational ease in contact calculations. We then apply LS-DEM to simulate two virtual triaxial specimens generated from XRCT images of experiments and demonstrate LS-DEM's ability to quantitatively capture and predict stress-strain and volume-strain behavior observed in the experiments.
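
    The geometric core of LS-DEM, checking the surface nodes of one particle against the level set function of another, can be sketched as follows; in the actual method the level set is stored on a grid and interpolated, whereas an analytic sphere is used here for brevity:

      import numpy as np

      def node_contacts(phi, grad_phi, surface_nodes):
          # a node of one particle penetrates another particle wherever that
          # particle's level set (signed distance) value is negative
          hits = []
          for x in surface_nodes:
              d = phi(x)
              if d < 0.0:
                  n = grad_phi(x)
                  n = n / np.linalg.norm(n)    # contact normal from the gradient
                  hits.append((x, -d, n))      # point, penetration depth, normal
          return hits

      # analytic level set of a unit sphere centred at the origin
      phi = lambda x: np.linalg.norm(x) - 1.0
      grad_phi = lambda x: x / np.linalg.norm(x)
      nodes = np.array([[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
      print(node_contacts(phi, grad_phi, nodes))   # only the first node is in contact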

  1. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper applies extensions of a hybrid transfinite element computational approach to the accurate prediction of thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating is demonstrated for the well-known Danilovskaya problem of thermal stress waves in elastic solids. A unique feature of the proposed formulations lies in the hybrid nature of the unified formulations and in the development of special-purpose transfinite elements in conjunction with classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and the superior capability of the approach to capture the thermal stress waves induced by boundary heating.

  2. NACHOS 2: A finite element computer program for incompressible flow problems. Part 1: Theoretical background

    NASA Astrophysics Data System (ADS)

    Gartling, D. K.

    1987-04-01

    The theoretical and numerical background for the finite element computer program, NACHOS 2, is presented in detail. The NACHOS 2 code is designed for the two-dimensional analysis of viscous incompressible fluid flows, including the effects of heat transfer and/or other transport processes. A general description of the boundary value problems treated by the program is presented. The finite element formulations and the associated numerical methods used in the NACHOS 2 code are also outlined. Instructions for use of the program are documented in SAND-86-1817; examples of problems analyzed by the code are provided in SAND-86-1818.

  3. MAPVAR - A Computer Program to Transfer Solution Data Between Finite Element Meshes

    SciTech Connect

    Wellman, G.W.

    1999-03-01

    MAPVAR, as was the case with its precursor programs, MERLIN and MERLIN II, is designed to transfer solution results from one finite element mesh to another. MAPVAR draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options.
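
    The general idea of transferring a nodal solution between meshes can be illustrated with off-the-shelf interpolation; this is only a conceptual stand-in (scattered linear interpolation with nearest-neighbour fallback), not MAPVAR's actual algorithm:

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(0)
      donor_nodes = rng.random((500, 2))                        # donor mesh nodes
      donor_u = np.sin(donor_nodes[:, 0]) * donor_nodes[:, 1]   # nodal solution
      target_nodes = rng.random((200, 2))                       # receiving mesh nodes

      # linear interpolation inside the donor point cloud, nearest outside it
      u_target = griddata(donor_nodes, donor_u, target_nodes, method='linear')
      holes = np.isnan(u_target)
      u_target[holes] = griddata(donor_nodes, donor_u, target_nodes[holes],
                                 method='nearest')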

  4. Report of a Workshop on the Pedagogical Aspects of Computational Thinking

    ERIC Educational Resources Information Center

    National Academies Press, 2011

    2011-01-01

    In 2008, the Computer and Information Science and Engineering Directorate of the National Science Foundation asked the National Research Council (NRC) to conduct two workshops to explore the nature of computational thinking and its cognitive and educational implications. The first workshop focused on the scope and nature of computational thinking…

  5. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.

  6. Finite element analysis and computer graphics visualization of flow around pitching and plunging airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.

    1973-01-01

    A general computational method for analyzing unsteady flow around pitching and plunging airfoils was developed. The finite element method was applied in developing an efficient numerical procedure for the solution of equations describing the flow around airfoils. The numerical results were employed in conjunction with computer graphics techniques to produce visualization of the flow. The investigation involved mathematical model studies of flow in two phases: (1) analysis of a potential flow formulation and (2) analysis of an incompressible, unsteady, viscous flow from Navier-Stokes equations.

  7. Program design by a multidisciplinary team. [for structural finite element analysis on STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Voigt, S.

    1975-01-01

    The use of software engineering aids in the design of a structural finite-element analysis computer program for the STAR-100 computer is described. Nested functional diagrams to aid in communication among design team members were used, and a standardized specification format to describe modules designed by various members was adopted. This is a report of current work in which use of the functional diagrams provided continuity and helped resolve some of the problems arising in this long-running part-time project.

  8. A partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computer

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computer.

  9. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computer.

  10. Computing element evolution towards Exascale and its impact on legacy simulation codes

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impact of those evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate on why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes.

  11. Efficient computation of Hamiltonian matrix elements between non-orthogonal Slater determinants

    NASA Astrophysics Data System (ADS)

    Utsuno, Yutaka; Shimizu, Noritaka; Otsuka, Takaharu; Abe, Takashi

    2013-01-01

    We present an efficient numerical method for computing Hamiltonian matrix elements between non-orthogonal Slater determinants, focusing on the most time-consuming component of the calculation that involves a sparse array. In the usual case where many matrix elements should be calculated, this computation can be transformed into a multiplication of dense matrices. It is demonstrated that the present method based on the matrix-matrix multiplication attains ~80% of the theoretical peak performance measured on systems equipped with modern microprocessors, a factor of 5-10 better than the normal method using indirectly indexed arrays to treat a sparse array. The reason for such different performances is discussed from the viewpoint of memory access.
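
    The transformation at the heart of the paper, trading indirect indexing over a sparse array for dense matrix-matrix multiplication, can be illustrated on a toy contraction (the actual shell-model matrix elements are more involved):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      C = rng.standard_normal((n, n))
      D = rng.standard_normal((n, n))
      H = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.05)  # sparse array

      # indirect indexing over the nonzeros (slow: irregular memory access)
      rows, cols = np.nonzero(H)
      M_loop = np.zeros((n, n))
      for i, j in zip(rows, cols):
          M_loop += H[i, j] * np.outer(C[:, i], D[j, :])

      # the same contraction recast as dense matrix products (BLAS level 3)
      M_gemm = C @ H @ D
      print(np.allclose(M_loop, M_gemm))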

  12. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  13. An accurate quadrature technique for the contact boundary in 3D finite element computations

    NASA Astrophysics Data System (ADS)

    Duong, Thang X.; Sauer, Roger A.

    2015-01-01

    This paper presents a new numerical integration technique for 3D contact finite element implementations, focusing on a remedy for the inaccurate integration due to discontinuities at the boundary of contact surfaces. The method is based on the adaptive refinement of the integration domain along the boundary of the contact surface, and is accordingly denoted RBQ for refined boundary quadrature. It can be used for common element types of any order, e.g. Lagrange, NURBS, or T-Spline elements. In terms of both computational speed and accuracy, RBQ exhibits great advantages over a naive increase of the number of quadrature points. Also, the RBQ method is shown to remain accurate for large deformations. Furthermore, since the sharp boundary of the contact surface is determined, it can be used for various purposes like the accurate post-processing of the contact pressure. Several examples are presented to illustrate the new technique.

  14. Poisson Green's function method for increased computational efficiency in numerical calculations of Coulomb coupling elements

    NASA Astrophysics Data System (ADS)

    Zimmermann, Anke; Kuhn, Sandra; Richter, Marten

    2016-01-01

    Often, the calculation of Coulomb coupling elements for quantum dynamical treatments, e.g., in cluster or correlation expansion schemes, requires the evaluation of a six-dimensional spatial integral. Therefore, it represents a significant limiting factor in quantum mechanical calculations. If the size or the complexity of the investigated system increases, many coupling elements need to be determined. The resulting computational constraints require an efficient method for the fast numerical calculation of the Coulomb coupling. We present a computational method that reduces the numerical complexity by decreasing the number of spatial integrals for arbitrary geometries. We use a Green's function formulation of the Coulomb coupling and introduce a generalized scalar potential as the solution of a generalized Poisson equation with a generalized charge density as the inhomogeneity. This enables a fast calculation of Coulomb coupling elements and, additionally, a straightforward inclusion of boundary conditions and arbitrarily spatially dependent dielectrics through the Coulomb Green's function. Particularly if many coupling elements are included, the presented method, which is not restricted to specific symmetries of the model, is a promising approach for increasing the efficiency of numerical calculations of the Coulomb interaction. To demonstrate the wide range of applications, we calculate internanostructure couplings, such as the Förster coupling, and illustrate the inclusion of symmetry considerations in the method for the Coulomb coupling between bound quantum dot states and unbound continuum states.
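
    The idea can be shown in its simplest setting: solving a (here periodic, uniform-dielectric) Poisson equation for the potential of one charge density and integrating it against the other replaces the six-dimensional double integral by one three-dimensional solve plus one three-dimensional integral. A minimal sketch in Gaussian units, with both densities assumed neutral:

      import numpy as np

      def coulomb_coupling(rho1, rho2, L):
          # solve lap(phi) = -4 pi rho1 with periodic boundaries via FFT,
          # then take the 3-D overlap integral with rho2
          n = rho1.shape[0]
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
          kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
          k2 = kx**2 + ky**2 + kz**2
          k2[0, 0, 0] = 1.0                        # avoid dividing the zero mode
          phi_hat = 4.0 * np.pi * np.fft.fftn(rho1) / k2
          phi_hat[0, 0, 0] = 0.0                   # neutral densities assumed
          phi = np.fft.ifftn(phi_hat).real
          return np.sum(rho2 * phi) * (L / n)**3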

  15. Computational micromechanical analysis of the representative volume element of bituminous composite materials

    NASA Astrophysics Data System (ADS)

    Ozer, Hasan; Ghauch, Ziad G.; Dhasmana, Heena; Al-Qadi, Imad L.

    2016-03-01

    Micromechanical computational modeling is used in this study to determine the smallest domain, or Representative Volume Element (RVE), that can be used to characterize the effective properties of composite materials such as Asphalt Concrete (AC). Computational Finite Element (FE) micromechanical modeling was coupled with digital image analysis of surface scans of AC specimens. Three mixtures with varying Nominal Maximum Aggregate Size (NMAS) of 4.75 mm, 12.5 mm, and 25 mm, were prepared for digital image analysis and computational micromechanical modeling. The effects of window size and phase modulus mismatch on the apparent viscoelastic response of the composite were numerically examined. A good agreement was observed in the RVE size predictions based on micromechanical computational modeling and image analysis. Micromechanical results indicated that a degradation in the matrix stiffness increases the corresponding RVE size. Statistical homogeneity was observed for window sizes equal to two to three times the NMAS. A model was presented for relating the degree of statistical homogeneity associated with each window size for materials with varying inclusion dimensions.

  16. Computational micromechanical analysis of the representative volume element of bituminous composite materials

    NASA Astrophysics Data System (ADS)

    Ozer, Hasan; Ghauch, Ziad G.; Dhasmana, Heena; Al-Qadi, Imad L.

    2016-08-01

    Micromechanical computational modeling is used in this study to determine the smallest domain, or Representative Volume Element (RVE), that can be used to characterize the effective properties of composite materials such as Asphalt Concrete (AC). Computational Finite Element (FE) micromechanical modeling was coupled with digital image analysis of surface scans of AC specimens. Three mixtures with varying Nominal Maximum Aggregate Size (NMAS) of 4.75 mm, 12.5 mm, and 25 mm, were prepared for digital image analysis and computational micromechanical modeling. The effects of window size and phase modulus mismatch on the apparent viscoelastic response of the composite were numerically examined. A good agreement was observed in the RVE size predictions based on micromechanical computational modeling and image analysis. Micromechanical results indicated that a degradation in the matrix stiffness increases the corresponding RVE size. Statistical homogeneity was observed for window sizes equal to two to three times the NMAS. A model was presented for relating the degree of statistical homogeneity associated with each window size for materials with varying inclusion dimensions.

  17. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2008-01-01

    Real-time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate, and able to handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real-time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual core PC. Similarly, we conducted a simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low-order finite elements. PMID:19152791
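
    The combination named in the last sentence can be sketched for a single degree of freedom; with explicit central differences and a lumped (diagonal) mass matrix no equation system is solved per step, which is where the speed comes from (the actual Total Lagrangian internal force computation is not reproduced here):

      import numpy as np

      def explicit_dynamics(f_int, m_lumped, f_ext, u0, v0, dt, steps):
          # central-difference explicit time integration with a lumped mass
          # matrix: each step costs one internal force evaluation, no solve
          u = u0.copy()
          u_prev = u0 - dt * v0
          for _ in range(steps):
              a = (f_ext - f_int(u)) / m_lumped
              u, u_prev = 2.0 * u - u_prev + dt * dt * a, u
          return u

      # single degree of freedom with a cubic (geometrically nonlinear) spring
      f_int = lambda u: 10.0 * u + 100.0 * u**3
      u = explicit_dynamics(f_int, m_lumped=1.0, f_ext=np.array([1.0]),
                            u0=np.array([0.0]), v0=np.array([0.0]),
                            dt=1e-3, steps=20000)
      print(u)   # oscillates about the static equilibrium, as expected undamped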

  18. Experimental and computational investigation of lift-enhancing tabs on a multi-element airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale

    1996-01-01

    An experimental and computational investigation of the effect of lift enhancing tabs on a two-element airfoil was conducted. The objective of the study was to develop an understanding of the flow physics associated with lift enhancing tabs on a multi-element airfoil. A NACA 63(sub 2)-215 ModB airfoil with a 30 percent chord Fowler flap was tested in the NASA Ames 7 by 10 foot wind tunnel. Lift enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predict all of the trends in the experimental data quite well. When the flow over the flap upper surface is attached, tabs mounted at the main element trailing edge (cove tabs) produce very little change in lift. At high flap deflections, however, the flow over the flap is separated and cove tabs produce large increases in lift and corresponding reductions in drag by eliminating the separated flow. Cove tabs permit high flap deflection angles to be achieved and reduce the sensitivity of the airfoil lift to the size of the flap gap. Tabs attached to the flap trailing edge (flap tabs) are effective at increasing lift without significantly increasing drag. A combination of a cove tab and a flap tab increased the airfoil lift coefficient by 11 percent relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of zero degrees, and the maximum lift coefficient was increased by more than 3 percent. A simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift enhancing tabs work. The tabs were modeled by a point vortex at the trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift enhancing tabs on a multi-element airfoil. Results of the modeling
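
    The potential-flow tab model lends itself to a one-line summary. The relations below are a plausible reconstruction from standard thin-airfoil arguments, with Gamma the circulation of the tab vortex, not the paper's exact derivation:

```latex
% Swirl velocity induced at distance r by a point vortex of circulation \Gamma,
% and the lift increment it contributes via the Kutta-Joukowski theorem:
u_\theta(r) = \frac{\Gamma}{2\pi r}, \qquad
\Delta L' = \rho\, U_\infty\, \Delta\Gamma .
```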

  19. Fiber pushout test: A three-dimensional finite element computational simulation

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Chamis, Christos C.

    1990-01-01

    A fiber pushthrough process was computationally simulated using a three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict the fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from the pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between the fiber and the matrix and mechanical interlocking that develops due to shrinkage of the composite because of phase change during processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on the interfacial bond strength and to interpret interfacial bond quality.
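
    For context, the average interfacial shear strength referred to above is conventionally the pushout load normalized by the embedded interface area; the additive split into the two components named in the abstract is shown schematically below (standard definitions, with P the pushout load, d the fiber diameter, and L the embedded length; this is not notation taken from the report itself):

```latex
\bar{\tau} \;=\; \frac{P}{\pi d L},
\qquad
\bar{\tau} \;=\; \tau_{\mathrm{friction}} + \tau_{\mathrm{adhesion+interlock}} .
```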

  20. Fiber pushout test - A three-dimensional finite element computational simulation

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Chamis, Christos C.

    1991-01-01

    A fiber pushthrough process was computationally simulated using a three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict the fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from the pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between the fiber and the matrix and mechanical interlocking that develops due to shrinkage of the composite because of phase change during processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on the interfacial bond strength and to interpret interfacial bond quality.

  1. Computation of consistent boundary quantities in finite element thermal-fluid solutions

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.

    1982-01-01

    The consistent boundary quantity method for computing derived quantities from finite element nodal variable solutions is investigated. The method calculates consistent, continuous boundary surface quantities such as heat fluxes, flow velocities, and surface tractions from nodal variables such as temperatures, velocity potentials, and displacements. Consistent and lumped coefficient matrix solutions for such problems are compared. The consistent approach may produce more accurate boundary quantities, but spurious oscillations may be produced in the vicinity of discontinuities. The uncoupled computations of the lumped approach provide greater flexibility in dealing with discontinuities and provide increased computational efficiency. The consistent boundary quantity approach can be applied to solution boundaries other than those with Dirichlet boundary conditions, and provides more accurate results than the customary method of differentiation of interpolation polynomials.
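
    For a steady heat-conduction problem, the consistent boundary-quantity idea can be stated compactly. The form below is the standard weighted-residual statement (with conductivity k, source Q, and finite element temperature T_h), given as a sketch rather than the paper's notation: the nodal boundary fluxes q are defined so that the Galerkin residual vanishes at boundary nodes, which couples them through a boundary mass matrix; lumping that matrix decouples the computations, as discussed above.

```latex
% For each boundary node i, the consistent nodal flux q satisfies
\int_{\Gamma} N_i\, q \; d\Gamma
  \;=\; \int_{\Omega} \nabla N_i \cdot k\, \nabla T_h \; d\Omega
  \;-\; \int_{\Omega} N_i\, Q \; d\Omega ,
\qquad\text{i.e.}\qquad
M_{\Gamma}\, \mathbf{q} \;=\; \mathbf{R}(T_h).
```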

  2. Computational Analysis of Enhanced Magnetic Bioseparation in Microfluidic Systems with Flow-Invasive Magnetic Elements

    PubMed Central

    Khashan, S. A.; Alazzam, A.; Furlani, E. P.

    2014-01-01

    A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer. PMID:24931437
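
    The dominant transport mechanisms mentioned in the abstract are commonly modeled with the following pair of forces (textbook forms for a small, linearly magnetizable particle in a viscous carrier flow; the paper's specific model parameters are not reproduced here):

```latex
% Magnetophoretic force on a particle of volume V_p and effective
% susceptibility \chi_e, and the opposing Stokes drag on a particle
% of radius R_p moving relative to the fluid:
\mathbf{F}_m = \mu_0\, V_p\, \chi_e\, (\mathbf{H}\cdot\nabla)\,\mathbf{H},
\qquad
\mathbf{F}_d = 6\pi \eta\, R_p\, (\mathbf{u}_f - \mathbf{u}_p).
```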

  3. Computations of Disturbance Amplification Behind Isolated Roughness Elements and Comparison with Measurements

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Li, Fei; Bynum, Michael; Kegerise, Michael; King, Rudolph

    2015-01-01

    Computations are performed to study laminar-turbulent transition due to isolated roughness elements in boundary layers at Mach 3.5 and 5.95, with an emphasis on flow configurations for which experimental measurements from low disturbance wind tunnels are available. The Mach 3.5 case corresponds to a roughness element with a right-triangle planform whose hypotenuse is inclined at 45 degrees with respect to the oncoming stream, presenting an obstacle with spanwise asymmetry. The Mach 5.95 case corresponds to a circular roughness element along the nozzle wall of the Purdue BAM6QT wind tunnel facility. In both cases, the mean flow distortion due to the roughness element is characterized by long-lived streamwise streaks in the roughness wake, which can support instability modes that did not exist in the absence of the roughness element. The linear amplification characteristics of the wake flow are examined towards the eventual goal of developing linear growth correlations for the onset of transition.

  4. Parallel Computing of Multi-scale Finite Element Sheet Forming Analyses Based on Crystallographic Homogenization Method

    SciTech Connect

    Kuramae, Hiroyuki; Okada, Kenji; Uetsuji, Yasutomo; Nakamachi, Eiji; Tam, Nguyen Ngoc; Nakamura, Yasunori

    2005-08-05

    Since multi-scale finite element analysis (FEA) requires large computation times, development of parallel computing techniques for multi-scale analysis is inevitable. A parallel elastic/crystalline viscoplastic FEA code based on a crystallographic homogenization method has been developed using a PC cluster. The homogenization scheme is introduced to compute macro-continuum plastic deformations and material properties by considering a polycrystal texture. Since the dynamic explicit method is applied, the analysis using micro crystal structures computes the homogenized stresses in parallel based on domain partitioning of the macro-continuum without solving simultaneous linear equations. The micro-structure is defined by crystal orientations measured with Scanning Electron Microscopy (SEM) and Electron Backscatter Diffraction (EBSD). In order to improve the parallel performance of the elastoplasticity analysis, which dynamically and partially increases computational costs during the analysis, a dynamic workload balancing technique is introduced. The technique, an automatic task distribution method, is realized by adapting the macro-continuum subdomain size to maintain computational load balance among cluster nodes. The analysis code is applied to estimate the formability of polycrystalline sheet metal.

  5. Methodological aspects of using IBM and macintosh PC'S for computational experiments in the physics practicum

    NASA Astrophysics Data System (ADS)

    Starodubtsev, V. A.; Malyutin, V. M.; Chernov, I. P.

    1996-07-01

    This article considers attempts to develop and use, in the teaching process, computer-laboratory work performed by students in terminal-based classes. We describe the methodological features of the LABPK1 and LABPK2 programs, which are intended for use on local networks using 386/286 IBM PC compatibles or Macintosh computers.

  6. Analytical calculation of the lower bound on timing resolution for PET scintillation detectors comprising high-aspect-ratio crystal elements.

    PubMed

    Cates, Joshua W; Vinke, Ruud; Levin, Craig S

    2015-07-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector's timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work, is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3 × 3 × 20 mm(3) LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162 ± 1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559
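
    The bound being specialized here is the standard Cramér-Rao inequality from estimation theory; for N detected scintillation photons whose arrival-time density p(t|theta), including the optical-transport PDF discussed above, depends on the interaction time theta, it reads (general form, not the paper's final expression):

```latex
\operatorname{var}\!\left(\hat{\theta}\right) \;\ge\;
\left[\, N \int \left( \frac{\partial \ln p(t\,|\,\theta)}{\partial \theta} \right)^{\!2}
p(t\,|\,\theta)\, dt \,\right]^{-1} .
```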

  7. Analytical Calculation of the Lower Bound on Timing Resolution for PET Scintillation Detectors Comprising High-Aspect-Ratio Crystal Elements

    PubMed Central

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-01-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work, is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm3 LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559

  8. On-Board Computing Subsystem for MIRAX: Architectural and Interface Aspects

    SciTech Connect

    Santiago, Valdivino

    2006-06-09

    This paper presents proposals for the architecture and interfaces among the different types of processing units of the MIRAX on-board computing subsystem. The MIRAX satellite payload is composed of dedicated computers, two Hard X-Ray cameras and one Soft X-Ray camera (the WFC flight spare unit from the BeppoSAX satellite). The architectures for the on-board computing subsystem take into account hardware- or software-based solutions for event preprocessing for the CdZnTe detectors. Hardware and software interface approaches are shown, and requirements for on-board memory storage and telemetry are also addressed.

  9. MP Salsa: a finite element computer program for reacting flow problems. Part 1--theoretical development

    SciTech Connect

    Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.

    1996-05-01

    The theoretical background for the finite element computer program MPSalsa is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve multiple coupled Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method, and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
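
    The solver strategy named at the end (implicit time steps, an inexact Newton method, preconditioned Krylov linear solves) can be sketched on a toy system. The fragment below uses SciPy's GMRES with a residual-dependent forcing tolerance; the test problem and all names are illustrative, and this is not MPSalsa or Aztec code.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def F(x):
    """Toy nonlinear residual with a root at (1, 1)."""
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] + x[1]**2 - 2.0])

def J(x):
    """Analytic Jacobian of F."""
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

x = np.array([3.0, 0.5])
for k in range(20):
    Fx = F(x)
    norm = np.linalg.norm(Fx)
    print(f"iter {k}: ||F|| = {norm:.2e}")
    if norm < 1e-10:
        break
    # Inexact Newton: solve J dx = -F only loosely; the forcing term eta
    # tightens as the residual shrinks (Eisenstat-Walker flavor).
    eta = min(0.5, norm)
    dx, _ = gmres(J(x), -Fx, rtol=eta, atol=0.0)  # SciPy >= 1.12; older: tol=
    x = x + dx
```

    The point of the inexactness is that early Newton iterations, where the linearization is poor anyway, do not pay for unnecessarily tight linear solves.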

  10. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 2

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    The Control/Structures Integration Program, a survey of available software for control of flexible structures, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software are discussed.

  11. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  12. Computing the Average Square: An Agent-Based Introduction to Aspects of Current Psychometric Practice

    ERIC Educational Resources Information Center

    Stroup, Walter M.; Hills, Thomas; Carmona, Guadalupe

    2011-01-01

    This paper summarizes an approach to helping future educators to engage with key issues related to the application of measurement-related statistics to learning and teaching, especially in the contexts of science, mathematics, technology and engineering (STEM) education. The approach we outline has two major elements. First, students are asked to…

  13. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, numerical results are given for the 2D driven cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Goertler-like vortices are observed for Re = 1,000.
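
    The matrix-free, Jacobi-preconditioned conjugate gradient loop referred to above needs only two ingredients: a routine that applies the operator to a vector and the operator's diagonal. A minimal sketch on a 1D Laplacian stands in for the LSFEM system (illustrative, not the paper's code):

```python
import numpy as np

n = 200
h = 1.0 / (n + 1)

def apply_A(v):
    """Matrix-free application of the 1D Laplacian stencil (-1, 2, -1)/h^2."""
    Av = 2.0 * v
    Av[:-1] -= v[1:]
    Av[1:] -= v[:-1]
    return Av / h**2

diag_A = np.full(n, 2.0 / h**2)   # Jacobi preconditioner is just this diagonal

b = np.ones(n)
x = np.zeros(n)
r = b - apply_A(x)
z = r / diag_A
p = z.copy()
rz = r @ z

for it in range(1000):
    Ap = apply_A(p)               # the only place the operator is needed
    alpha = rz / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
    z = r / diag_A                # preconditioning: z = D^{-1} r
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new

print(f"converged in {it} iterations, residual {np.linalg.norm(r):.2e}")
```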

  14. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  15. Quantitative Computed Tomography Protocols Affect Material Mapping and Quantitative Computed Tomography-Based Finite-Element Analysis Predicted Stiffness.

    PubMed

    Giambini, Hugo; Dragomir-Daescu, Dan; Nassr, Ahmad; Yaszemski, Michael J; Zhao, Chunfeng

    2016-09-01

    Quantitative computed tomography-based finite-element analysis (QCT/FEA) has become increasingly popular in an attempt to understand and possibly reduce vertebral fracture risk. It is known that scanning acquisition settings affect the Hounsfield units (HU) of the CT voxels. Material property assignments in QCT/FEA, relating HU to Young's modulus, are performed by applying empirical equations. The purpose of this study was to evaluate the effect of QCT scanning protocols on predicted stiffness values from finite-element models. One fresh frozen cadaveric torso and a QCT calibration phantom were scanned six times, varying voltage and current, and the scans were reconstructed to obtain a total of 12 sets of images. Five vertebrae from the torso were experimentally tested to obtain stiffness values. QCT/FEA models of the five vertebrae were developed for the 12 image data sets, resulting in a total of 60 models. Predicted stiffness was compared to the experimental values. The highest percent difference in stiffness was approximately 480% (80 kVp, 110 mAs, U70), while the lowest was approximately 1% (80 kVp, 110 mAs, U30). There was a clear distinction between reconstruction kernels in predicted outcomes, whereas voltage did not present a clear influence on results. The potential of QCT/FEA as an improvement to conventional fracture risk prediction tools is well established. However, it is important to establish research protocols that can lead to results that can be translated to the clinical setting. PMID:27428281
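
    The material-assignment step whose settings the study varies typically looks like the sketch below: a phantom-calibrated linear map from HU to density followed by an empirical density-modulus power law. The constants here are placeholders for illustration, not the calibration or literature values used in the paper.

```python
import numpy as np

# Phantom calibration: HU -> apparent density (g/cm^3). The intercept a and
# slope b come from regressing known phantom densities against measured HU;
# the values below are illustrative placeholders only.
a, b = 0.0, 0.0008

def hu_to_density(hu):
    return a + b * hu

def density_to_modulus(rho, c=6.95e3, d=1.49):
    """Empirical power law E = c * rho^d (MPa); constants illustrative."""
    return c * rho**d

hu_voxels = np.array([150.0, 400.0, 900.0])   # sample voxel values
E = density_to_modulus(hu_to_density(hu_voxels))
print(E)   # Young's moduli (MPa) assigned to the corresponding elements
```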

  16. Computer modeling of single-cell and multicell thermionic fuel elements

    SciTech Connect

    Dickinson, J.W.; Klein, A.C.

    1996-05-01

    Modeling efforts are undertaken to perform coupled thermal-hydraulic and thermionic analysis for both single-cell and multicell thermionic fuel elements (TFE). The analysis--and the resulting MCTFE computer code (multicell thermionic fuel element)--is a steady-state finite volume model specifically designed to analyze cylindrical TFEs. It employs an iterative successive overrelaxation solution technique to solve for the temperatures throughout the TFE and a coupled thermionic routine to determine the total TFE performance. The calculated results include temperature distributions in all regions of the TFE, axial interelectrode voltages and current densities, and total TFE electrical output parameters including power, current, and voltage. MCTFE-generated results are compared with experimental data from the single-cell Topaz-II-type TFE and with multicell data from the General Atomics 3H5 TFE to benchmark the accuracy of the code methods.
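
    The successive overrelaxation kernel at the heart of such a finite volume temperature solver is compact enough to show directly. This generic sketch solves a diagonally dominant system and is not the MCTFE implementation:

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive overrelaxation for A x = b (A diagonally dominant)."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        for i in range(n):
            # Gauss-Seidel sweep using the freshest available values,
            # blended with the old value through the relaxation factor omega.
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            break
    return x, it

# Toy 1D conduction matrix (tridiagonal, diagonally dominant).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x, iters = sor(A, np.ones(n))
print(f"SOR converged in {iters} iterations")
```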

  17. SAGUARO: A finite-element computer program for partially saturated porous flow problems

    NASA Astrophysics Data System (ADS)

    Easton, R. R.; Gartling, D. K.; Larson, D. E.

    1983-11-01

    SAGUARO is a finite element computer program designed to calculate two-dimensional flow of mass and energy through porous media. The media may be saturated or partially saturated. SAGUARO solves the parabolic, time-dependent mass transport equation, which accounts for the presence of partially saturated zones through the use of highly non-linear material characteristic curves. The energy equation accounts for the possibility of partially saturated regions by adjusting the thermal capacitances and thermal conductivities according to the volume fraction of water present in the local pores. Program capabilities, user instructions, and a sample problem are presented in this manual.

  18. Some aspects of optimal human-computer symbiosis in multisensor geospatial data fusion

    NASA Astrophysics Data System (ADS)

    Levin, E.; Sergeyev, A.

    The vast amount of geospatial data now available provides additional opportunities for increasing targeting accuracy through geospatial data fusion. One of the most obvious operations is the determination of targets' 3D shapes and geospatial positions from overlapping 2D imagery and sensor modeling. 3D models allow the extraction of information about targets that cannot be measured directly from single, non-fused images. This paper describes an ongoing research effort at Michigan Tech that attempts to combine the advantages of human analysts and automated computer processing into an efficient human-computer symbiosis for geospatial data fusion. Specifically, the capabilities provided by integrating novel human-computer interaction methods, such as eye tracking and EEG, into geospatial targeting interfaces were explored. The research performed and its results are described in detail.

  19. Estimation of the physico-chemical parameters of materials based on rare earth elements with the application of computational model

    NASA Astrophysics Data System (ADS)

    Mamaev, K.; Obkhodsky, A.; Popov, A.

    2016-01-01

    The computational model, technique, and basic operating principles of a program complex for quantum-chemical calculations of the physico-chemical parameters of materials based on rare earth elements are discussed. The computing system is scalable and includes both CPU and GPU computational resources. Job control and operation, together with Globus Toolkit 5 software, make it possible to join users' computers into a unified peer-to-peer data processing system. CUDA software is used to integrate graphics processors into the calculation system.

  20. Symbolic algorithms for the computation of Moshinsky brackets and nuclear matrix elements

    NASA Astrophysics Data System (ADS)

    Ursescu, D.; Tomaselli, M.; Kuehl, T.; Fritzsche, S.

    2005-12-01

    To facilitate the use of the extended nuclear shell model (NSM), a FERMI module for calculating some of its basic quantities in the framework of MAPLE is provided. The Moshinsky brackets, the matrix elements for several central and non-central interactions between nuclear two-particle states, as well as their expansion in terms of Talmi integrals, are easily given within a symbolic formulation. All of these quantities are available for interactive work. Program summary: Title of program: Fermi. Catalogue identifier: ADVO. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVO. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: None. Computer for which the program is designed and others on which it has been tested: All computers with a licence for the computer algebra package MAPLE [Maple is a registered trademark of Waterloo Maple Inc., produced by the MapleSoft division of Waterloo Maple Inc.]. Installations: GSI-Darmstadt; University of Kassel (Germany). Operating systems or monitors under which the program has been tested: Windows XP, Linux 2.4. Programming language used: MAPLE 8 and 9.5 from the MapleSoft division of Waterloo Maple Inc. Memory required to execute with typical data: 30 MB. No. of lines in distributed program including test data etc.: 5742. No. of bytes in distributed program including test data etc.: 288 939. Distribution format: tar.gz. Nature of the physical problem: In order to perform calculations within the nuclear shell model (NSM), quick and reliable access to the nuclear matrix elements is required. These matrix elements, which arise from various types of forces among the nucleons, can be calculated using Moshinsky's transformation brackets between relative and center-of-mass coordinates [T.A. Brody, M. Moshinsky, Tables of Transformation Brackets, Monografias del Instituto de Fisica, Universidad Nacional Autonoma de Mexico, 1960] and by the proper use of the nuclear states in different coupling notations

  1. Computing interaural differences through finite element modeling of idealized human heads

    PubMed Central

    Cai, Tingli; Rakerd, Brad; Hartmann, William M.

    2015-01-01

    Acoustical interaural differences were computed for a succession of idealized shapes approximating the human head-related anatomy: sphere, ellipsoid, and ellipsoid with neck and torso. Calculations were done as a function of frequency (100–2500 Hz) and for source azimuths from 10 to 90 degrees using finite element models. The computations were compared to free-field measurements made with a manikin. Compared to a spherical head, the ellipsoid produced greater large-scale variation with frequency in both interaural time differences and interaural level differences, resulting in better agreement with the measurements. Adding a torso, represented either as a large plate or as a rectangular box below the neck, further improved the agreement by adding smaller-scale frequency variation. The comparisons permitted conjectures about the relationship between details of interaural differences and gross features of the human anatomy, such as the height of the head, and length of the neck. PMID:26428792
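
    As background for the spherical baseline, a classical closed form exists: Woodworth's frequency-independent approximation for a rigid sphere of radius a, sound speed c, and source azimuth theta. The FE computations in the paper go beyond this, but it fixes the scale of the effect:

```latex
\mathrm{ITD}(\theta) \;=\; \frac{a}{c}\left(\theta + \sin\theta\right),
\qquad 0 \le \theta \le \frac{\pi}{2}.
```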

  2. Parallel Computations of Natural Convection Flow in a Tall Cavity Using an Explicit Finite Element Method

    SciTech Connect

    Dunn, T.A.; McCallen, R.C.

    2000-10-17

    The Galerkin Finite Element Method was used to predict a natural convection flow in an enclosed cavity. The problem considered was a differentially heated, tall (8:1), rectangular cavity with a Rayleigh number of 3.4 x 10^5 and Prandtl number of 0.71. The incompressible Navier-Stokes equations were solved using a Boussinesq approximation for the buoyancy force. The algorithm was developed for efficient use on massively parallel computer systems. Emphasis was on time-accurate simulations. It was found that the average temperature and velocity values can be captured with a relatively coarse grid, while the oscillation amplitude and period appear to be grid sensitive and require a refined computation.
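
    The Boussinesq treatment mentioned above keeps density variations only in the gravity term of the momentum equation; in standard notation (textbook form, with the Rayleigh and Prandtl numbers defined from the temperature difference Delta T and the cavity height H):

```latex
% Momentum equation with Boussinesq buoyancy (density variations retained
% only in the body-force term), plus the governing dimensionless groups:
\rho_0\left(\frac{\partial \mathbf{u}}{\partial t}
  + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu \nabla^2 \mathbf{u}
  - \rho_0\, \beta\,(T - T_0)\,\mathbf{g},
\qquad
Ra = \frac{g\,\beta\,\Delta T\, H^{3}}{\nu\,\alpha}, \quad
Pr = \frac{\nu}{\alpha}.
```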

  3. SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects

    NASA Technical Reports Server (NTRS)

    Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M

    1998-01-01

    SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of the existing soft-computing software by supporting comprehensive multidisciplinary functionalities from management tools to engineering systems. Furthermore, the built-in features help the user process/analyze information more efficiently by a friendly yet powerful interface, and will allow the user to specify user-specific processing modules, hence adding to the standard configuration of the software environment.

  4. Wing-Body Aeroelasticity Using Finite-Difference Fluid/Finite-Element Structural Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.; Kutler, Paul (Technical Monitor)

    1994-01-01

    In recent years significant advances have been made in both hardware and software for parallel computers. Parallel computers have now become viable tools in computational mechanics. Many application codes developed on conventional computers have been modified to benefit from parallel computers, and significant speedups in some areas have been achieved by parallel computations. For single-discipline use of both fluid dynamics and structural dynamics, computations have been made on wing-body configurations using parallel computers. However, only a limited amount of work has been completed in combining these two disciplines for multidisciplinary applications. The prime reason is the increased level of complication associated with a multidisciplinary approach. In this work, procedures to compute aeroelasticity on parallel computers using direct coupling of fluid and structural equations are investigated for wing-body configurations. The parallel computer selected for the computations is an Intel iPSC/860, a distributed-memory, multiple-instruction, multiple-data (MIMD) computer with 128 processors. In this study, the computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. The fluid and structural domains are modeled using finite-difference and finite-element approaches, respectively. Results from the parallel computer will be compared with those from conventional computers using a single processor. This study will provide an efficient computational tool for the aeroelastic analysis of wing-body structures on MIMD-type parallel computers.

  5. Theoretical and Computational Aspects of the Magnetic Confinement of Particles and Plasmas

    NASA Astrophysics Data System (ADS)

    Mehanian, Courosh

    1987-09-01

    This thesis covers various aspects of the magnetic confinement of particles and plasmas. It is composed of two separate problems which deal with two extreme limits of temperature. In the first problem, the setting is a device that is a candidate for a fusion reactor and thus represents a collection of ionized atoms at a very high temperature. The second problem concerns the magnetic confinement of a neutral hydrogen gas at a temperature low enough that a Bose-Einstein condensation occurs. The tilt stabilization of a spheromak by an energetic particle ring is analyzed. A comprehensive survey is made of numerically generated, hybrid equilibria which describe spheromak plasmas with an energetic ion ring component. Unlike the analytic treatments, neither the ion ring toroidal current nor the inverse aspect ratio are required to be small. The tilt stability of the plasma is determined by calculating the torque due to the magnetic interaction with the ion-ring, assumed fixed. The tilt stability of the ring is determined by calculating the betatron frequencies of the ring particles. Bicycle-tire rings, since they flatten the separatrix axially, provide the most stabilization of the plasma per unit ion ring current. On the other hand, axially elongated, toilet-paper-tube rings are themselves the most stable. These opposing trends indicate that the configuration with optimal stability is achieved near an ion ring aspect ratio of unity and for roughly equal plasma and fast particle currents. The confinement of an atomic hydrogen gas in the trap formed by a time-varying magnetic field is investigated. The trap uses the interaction of the magnetic field with the magnetic moments of the atoms, which are kept aligned by a strong uniform field. The effect of collisions is included via a Monte Carlo algorithm and it is found that the atoms can be confined when the frequency and the current of the coils producing the time-varying field are appropriately chosen.

  6. Tying Theory To Practice: Cognitive Aspects of Computer Interaction in the Design Process.

    ERIC Educational Resources Information Center

    Mikovec, Amy E.; Dake, Dennis M.

    The new medium of computer-aided design requires changes to the creative problem-solving methodologies typically employed in the development of new visual designs. Most theoretical models of creative problem-solving suggest a linear progression from preparation and incubation to some type of evaluative study of the "inspiration." These models give…

  7. A comparison of turbulence models in computing multi-element airfoil flows

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Menter, Florian; Durbin, Paul A.; Mansour, Nagi N.

    1994-01-01

    Four different turbulence models are used to compute the flow over a three-element airfoil configuration. These models are the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, a two-equation k-omega model, and a new one-equation Durbin-Mansour model. The flow is computed using the INS2D two-dimensional incompressible Navier-Stokes solver. An overset Chimera grid approach is utilized. Grid resolution tests are presented, and manual solution-adaptation of the grid was performed. The performance of each of the models is evaluated for test cases involving different angles-of-attack, Reynolds numbers, and flap riggings. The resulting surface pressure coefficients, skin friction, velocity profiles, and lift, drag, and moment coefficients are compared with experimental data. The models produce very similar results in most cases. Excellent agreement between computational and experimental surface pressures was observed, but only moderately good agreement was seen in the velocity profile data. In general, the difference between the predictions of the different models was less than the difference between the computational and experimental data.

  8. 2nd International Symposium on Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering (REES-2015)

    NASA Astrophysics Data System (ADS)

    Tavadyan, Levon, Prof; Sachkov, Viktor, Prof; Godymchuk, Anna, Dr.; Bogdan, Anna

    2016-01-01

    The 2nd International Symposium «Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering» (REES2015) was jointly organized by Tomsk State University (Russia), the National Academy of Science (Armenia), Shenyang Polytechnic University (China), the Moscow Institute of Physics and Engineering (Russia), the Siberian Physical-technical Institute (Russia), and Tomsk Polytechnic University (Russia) on September 7-15, 2015, in Belokuriha, Russia. The Symposium provided high-quality presentations and gathered engineers, scientists, academicians, and young researchers working in the field of rare and rare earth elements mining, modification, separation, elaboration and application, in order to facilitate the aggregation and sharing of interests and results for better collaboration and activity visibility. The goal of REES2015 was to bring researchers and practitioners together to share the latest knowledge on rare and rare earth elements technologies. The Symposium was aimed at presenting new trends in rare and rare earth elements mining, research and separation and recent achievements in advanced materials elaboration and developments for different purposes, as well as strengthening the already existing contacts between manufacturers, highly-qualified specialists and young scientists. The topics of REES2015 were: (1) Problems of extraction and separation of rare and rare earth elements; (2) Methods and approaches to the separation and isolation of rare and rare earth elements with ultra-high purity; (3) Industrial technologies of production and separation of rare and rare earth elements; (4) Economic aspects in technology of rare and rare earth elements; and (5) Rare and rare earth based materials (application in metallurgy, catalysis, medicine, optoelectronics, etc.). We want to thank the Organizing Committee, the Universities and Sponsors supporting the Symposium, and everyone who contributed to the organization of the event and to

  9. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
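
    The reason adjoints make this cheap for Ax = b models is that a single extra solve with the transpose of A yields the sensitivity of a scalar output to every model parameter at once. A generic sketch with a finite-difference check (illustrative, not the report's formulation):

```python
import numpy as np

# Model: A(p) x = b, scalar output J = c^T x. One adjoint solve with A^T
# gives dJ/dp for all parameters p without re-solving the forward model.
n = 5
rng = np.random.default_rng(1)
A = 4.0 * np.eye(n) + 0.1 * rng.random((n, n))   # well-conditioned test matrix
b = rng.random(n)
c = rng.random(n)

x = np.linalg.solve(A, b)        # forward solve
lam = np.linalg.solve(A.T, c)    # single adjoint solve

# Sensitivity to a perturbation of one matrix entry A[i, j]:
# from dJ/dp = lam^T (db/dp - (dA/dp) x), this reduces to -lam_i * x_j.
i, j = 2, 3
dJ_dAij = -lam[i] * x[j]

# Finite-difference check of the adjoint sensitivity.
eps = 1e-7
A2 = A.copy()
A2[i, j] += eps
fd = (c @ np.linalg.solve(A2, b) - c @ x) / eps
print(dJ_dAij, fd)   # the two values should agree closely
```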

  10. A linear-scaling spectral-element method for computing electrostatic potentials.

    PubMed

    Watson, Mark A; Hirao, Kimihiko

    2008-11-14

    A new linear-scaling method is presented for the fast numerical evaluation of the electronic Coulomb potential. Our approach uses a simple real-space partitioning of the system into cubic cells and a spectral-element representation of the density in a tensorial basis of high-order Chebyshev polynomials. Electrostatic interactions between non-neighboring cells are described using the fast multipole method. The remaining near-field interactions are computed in the tensorial basis as a sum of differential contributions by exploiting the numerical low-rank separability of the Coulomb operator. The method is applicable to arbitrary charge densities, avoids the Poisson equation, and does not involve the solution of any systems of linear equations. Above all, an adaptive resolution of the Chebyshev basis in each cell facilitates the accurate and efficient treatment of molecular systems. We demonstrate the performance of our implementation for quantum chemistry with benchmark calculations on the noble gas atoms, long-chain alkanes, and diamond fragments. We conclude that the spectral-element method can be a competitive tool for the accurate computation of electrostatic potentials in large-scale molecular systems. PMID:19045386
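
    The numerical low-rank separability exploited for the near field rests on a standard integral identity: discretizing the Gaussian integral below with quadrature points t_k and weights w_k turns the Coulomb kernel into a short sum of terms that factorize along x, y, and z. The identity is textbook material; the paper's particular quadrature is not reproduced here.

```latex
\frac{1}{r} \;=\; \frac{2}{\sqrt{\pi}} \int_{0}^{\infty} e^{-r^{2} t^{2}}\, dt
\;\approx\; \sum_{k} w_k\, e^{-t_k^{2} x^{2}}\, e^{-t_k^{2} y^{2}}\, e^{-t_k^{2} z^{2}},
\qquad r^{2} = x^{2} + y^{2} + z^{2}.
```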

  11. Computational aspects of crack growth in sandwich plates from reinforced concrete and foam

    NASA Astrophysics Data System (ADS)

    Papakaliatakis, G.; Panoskaltsis, V. P.; Liontas, A.

    2012-12-01

    In this work we study the initiation and propagation of cracks in sandwich plates made from reinforced concrete in the boundaries and from a foam polymeric material in the core. A nonlinear finite element approach is followed. Concrete is modeled as an elastoplastic material with its tensile behavior and damage taken into account. Foam is modeled as a crushable, isotropic compressible material. We analyze slabs with a pre-existing macro crack at the position of the maximum bending moment and we study the macrocrack propagation, as well as the condition under which we have crack arrest.

  12. Computational aspects of hot-wire identification of thermal conductivity and diffusivity under high temperature

    NASA Astrophysics Data System (ADS)

    Vala, Jiří; Jarošová, Petra

    2016-07-01

    Development of advanced materials resistant to high temperature, needed namely for the design of heat storage for low-energy and passive buildings, requires simple, inexpensive and reliable methods for identifying their temperature-sensitive thermal conductivity and diffusivity, covering both a well-designed experimental setup and the implementation of robust and efficient computational algorithms. Special geometrical configurations offer the possibility of quasi-analytical evaluation of the temperature development for direct problems, whereas inverse problems of simultaneous evaluation of thermal conductivity and diffusivity must be handled carefully, using least-squares (minimum variance) arguments. This paper demonstrates a proper mathematical and computational approach to such a model problem, exploiting the radial symmetry of hot-wire measurements, including its numerical implementation.
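
    The quasi-analytical evaluation mentioned above rests on the classical long-time solution for a line heat source, which is what makes the simultaneous identification tractable (standard transient hot-wire result, with lambda the conductivity, a the diffusivity, and gamma Euler's constant; not the paper's notation):

```latex
% Long-time temperature rise at radius r from a line source of power q
% per unit length:
\Delta T(r,t) \;=\; \frac{q}{4\pi\lambda}
  \left[\ln\frac{4\, a\, t}{r^{2}} - \gamma\right],
\qquad \gamma \approx 0.5772 .
% \lambda follows from the slope of \Delta T versus \ln t;
% a follows from the intercept.
```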

  13. CAVASS: a computer-assisted visualization and analysis software system - image processing aspects

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad

    2007-03-01

    The development of the concepts within 3DVIEWNIX and of the software system 3DVIEWNIX itself dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images came out from our group, 3DVIEWNIX released in 1993, being the most recent, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and their highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interface to other systems such as CAD/CAM software, ITK, and statistical packages; (6) easy to use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions based on assessing performance by utilizing a regular (6 MB), large (241 MB), and a super (873 MB) 3D image data set are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is easy and ready to use in applications since it provides an easy to use GUI. The users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less

  14. Delta: An object-oriented finite element code architecture for massively parallel computers

    SciTech Connect

    Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.

    1996-02-01

    Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different Engineering Science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.

  15. Adaptive finite element simulation of flow and transport applications on parallel computers

    NASA Astrophysics Data System (ADS)

    Kirk, Benjamin Shelton

    The subject of this work is the adaptive finite element simulation of problems arising in flow and transport applications on parallel computers. Of particular interest are new contributions to adaptive mesh refinement (AMR) in this parallel high-performance context, including novel work on data structures, treatment of constraints in a parallel setting, generality and extensibility via object-oriented programming, and the design/implementation of a flexible software framework. This technology and software capability then enables more robust, reliable treatment of multiscale--multiphysics problems and specific studies of fine scale interaction such as those in biological chemotaxis (Chapter 4) and high-speed shock physics for compressible flows (Chapter 5). The work begins by presenting an overview of key concepts and data structures employed in AMR simulations. Of particular interest is how these concepts are applied in the physics-independent software framework which is developed here and is the basis for all the numerical simulations performed in this work. This open-source software framework has been adopted by a number of researchers in the U.S. and abroad for use in a wide range of applications. The dynamic nature of adaptive simulations pose particular issues for efficient implementation on distributed-memory parallel architectures. Communication cost, computational load balance, and memory requirements must all be considered when developing adaptive software for this class of machines. Specific extensions to the adaptive data structures to enable implementation on parallel computers is therefore considered in detail. The libMesh framework for performing adaptive finite element simulations on parallel computers is developed to provide a concrete implementation of the above ideas. This physics-independent framework is applied to two distinct flow and transport applications classes in the subsequent application studies to illustrate the flexibility of the

  16. An improved method for the automatic mapping of computed tomography numbers onto finite element models.

    PubMed

    Taddei, Fulvia; Pancanti, Alberto; Viceconti, Marco

    2004-01-01

    The assignment of bone tissue material properties is a fundamental step in the generation of subject-specific finite element models from computed tomography data. The aim of the present work is to investigate the influence of the material mapping algorithm on the results predicted by the finite element analysis. Two models, a coarse and a refined one, of a human ilium, femur and tibia were generated from CT data and used for the tests. In addition, a convergence analysis was carried out for the femur model, using six refinement levels, to verify whether the inclusion of the material properties would significantly alter the convergence behaviour of the mesh. The results showed that the choice of the mapping algorithm influences the material distribution. However, this did not always propagate into the finite element results. The difference in the maximum von Mises stress always remained lower than 10%, apart from one case in which it reached 13%. However, the global behaviour of the meshes showed more marked differences between the two algorithms: in the finer meshes of the two long bones, 20-30% of the bone volume showed differences in the predicted von Mises stresses greater than 10%. The convergence behaviour of the model was not worsened by the introduction of inhomogeneous material properties. The software was made available in the public domain. PMID:14644599

  17. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_theta, based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. Discussion of its rationale reveals that the velocity angle error estimator is a curvature error estimator whose value reflects the accuracy of the streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and that it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction differ. Through benchmarking computed variables against the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator performs better than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_theta achieves the best match with the true theta field and is asymptotic to the true theta variation field, with the promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element and node classes can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different
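
    From the description above, one plausible schematic form of the indicator is the elementwise magnitude of the velocity-angle gradient; the following is a reconstruction from the abstract, not the thesis' exact definition:

```latex
\theta \;=\; \tan^{-1}\!\left(\frac{v}{u}\right),
\qquad
e_\theta\big|_{K} \;\sim\; \int_{K} \left\| \nabla \theta \right\| \, d\Omega .
```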

  18. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  19. Methodological aspects of in vitro assessment of bio-accessible risk element pool in urban particulate matter.

    PubMed

    Sysalová, Jiřina; Száková, Jiřina; Tremlová, Jana; Kašparovská, Kateřina; Kotlík, Bohumil; Tlustoš, Pavel; Svoboda, Petr

    2014-11-01

    In vitro tests simulating the release of elements from inhaled urban particulate matter (PM) into artificial lung fluids (Gamble's and Hatch's solutions) and into simulated gastric and pancreatic solutions were applied to estimate the bio-accessibility of hazardous elements (As, Cd, Cr, Hg, Mn, Ni, Pb and Zn) in this material. Inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS) were employed for element determination in the extracted solutions. The effects of the extraction agent used, the extraction time, the sample-to-extractant ratio, the sample particle size and/or individual element properties were evaluated. Individual elements behaved differently in Hatch's solution than in the simulated gastric and pancreatic solutions. For Hatch's solution, a decreasing sample-to-extractant ratio in the PM size fraction <0.063 mm resulted in increasing leached contents of all investigated elements. As already shown for other operationally defined extraction procedures, the extractable element portions are affected not only by element mobility in the particulate matter itself but also by the sample preparation procedure. Results of simulated in vitro tests can be applied for a reasonable estimation of bio-accessible element portions in particulate matter as an alternative method which, consequently, can initiate further examinations, including potential in vivo assessments. PMID:25123460

  20. Precise Boundary Element Computation of Protein Transport Properties: Diffusion Tensors, Specific Volume, and Hydration

    PubMed Central

    Aragon, Sergio; Hahn, David K.

    2006-01-01

    A precise boundary element method for the computation of hydrodynamic properties has been applied to the study of a large suite of 41 soluble proteins ranging from 6.5 to 377 kDa in molecular mass. A hydrodynamic model consisting of a rigid protein excluded volume, obtained from crystallographic coordinates, surrounded by a uniform hydration thickness has been found to yield properties in excellent agreement with experiment. The hydration thickness was determined to be δ = 1.1 ± 0.1 Å. Using this value, standard deviations from experimental measurements are 2% for the specific volume, 2% for the translational diffusion coefficient, and 6% for the rotational diffusion coefficient. These deviations are comparable to experimental errors in these properties. The precision of the boundary element method allows the unified description of all of these properties with a single hydration parameter, which has thus far not been achieved with other methods. An approximate method for computing transport properties with a statistical precision of 1% or better (compared to 0.1–0.2% for the full computation) is also presented. We have also estimated the total amount of hydration water, with a typical −9% deviation from experiment in the case of monomeric proteins. Both the water of hydration and the more precise translational diffusion data hint that some multimeric proteins may not have the same solution structure as in the crystal, because the deviations are systematic and larger than in the monomeric case. The data for monomeric proteins, on the other hand, conclusively show that there is no difference in protein structure going from the crystal into solution. PMID:16714342
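
    As a toy illustration of how a uniform hydration shell enters a transport property, the sketch below evaluates the Stokes-Einstein translational diffusion coefficient for a sphere padded by the hydration thickness. The actual method integrates over the true molecular surface with a boundary element method; the sphere, its radius, and the solvent viscosity here are stand-in assumptions.

```python
import math

# Toy illustration of how a hydration shell enters a transport property:
# Stokes-Einstein diffusion of a sphere padded by the hydration thickness.
# The sphere and the constants below are stand-ins, not the paper's model.

K_B = 1.380649e-23   # Boltzmann constant [J/K]
ETA_WATER = 8.9e-4   # water viscosity at ~25 C [Pa*s] (assumed)

def translational_diffusion(radius_A, hydration_A=1.1, T=298.15):
    """Stokes-Einstein D_t [m^2/s] for a hydrated sphere."""
    r = (radius_A + hydration_A) * 1e-10  # angstrom -> meter
    return K_B * T / (6 * math.pi * ETA_WATER * r)

# A ~20 A anhydrous radius: the 1.1 A shell lowers D_t by roughly 5%.
print(translational_diffusion(20.0))       # hydrated
print(translational_diffusion(20.0, 0.0))  # bare sphere
```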

  1. Some computational aspects of the hals (harmonic analysis of x-ray line shape) method

    SciTech Connect

    Moshkina, T.I.; Nakhmanson, M.S.

    1986-02-01

    This paper discusses the problem of separating the analytical line from the background and of approximating the background component. One of the constituent parts of the procedural-mathematical software package for x-ray investigations of polycrystalline substances, as applied to the DRON-3, DRON-2 and ADP-1 diffractometers, is the SSF program system, which is designed for determining the substructure parameters of materials. The SSF system is tailored not only to Unified Series (ES) computers but also to the M-6000 and SM-1 minicomputers.
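
    A minimal sketch of one common approach to the line/background separation problem the paper addresses: fit a low-order polynomial to the tails of the scan window, where only background is assumed present, and subtract it. The window fraction, polynomial degree, and toy profile below are illustrative assumptions, not the HALS procedure.

```python
import numpy as np

# Minimal sketch of line/background separation: fit a low-order polynomial
# to the tails of the scan window (assumed background-only) and subtract.

def subtract_background(two_theta, intensity, tail_fraction=0.15, degree=1):
    """Fit the background on both window tails; return the net profile."""
    two_theta = np.asarray(two_theta, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    n = len(two_theta)
    k = max(2, int(tail_fraction * n))
    tails = np.r_[0:k, n - k:n]  # indices of the leading and trailing tails
    coeffs = np.polyfit(two_theta[tails], intensity[tails], degree)
    return intensity - np.polyval(coeffs, two_theta)

tt = np.linspace(42.0, 46.0, 200)  # toy scan window [deg 2-theta]
profile = 100 * np.exp(-((tt - 44.0) / 0.3) ** 2) + 5.0 + 0.8 * tt
net_line = subtract_background(tt, profile)  # Gaussian line, background gone
```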

  2. Intelligent computer-aided diagnosis system for breast MRI combining kinetic and morphological aspects

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; Meyer-Bäse, Anke; Lange, Oliver

    2008-04-01

    An intelligent medical system based on a radial basis function neural network is applied to the automatic classification of suspicious lesions in breast MRI and compared with two standard mammographic reading methods. Such systems represent an important component of future sophisticated computer-aided diagnosis systems and enable the extraction of spatial and temporal features from dynamic MRI data stemming from patients with confirmed lesion diagnoses. Intelligent medical systems combining both kinetics and lesion morphology are expected to have substantial implications for healthcare policy by contributing to the diagnosis of indeterminate breast lesions by non-invasive imaging.
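
    A minimal sketch of a generic radial basis function classifier of the kind such systems build on: Gaussian units centered on prototype feature vectors with a linear readout fit by least squares. The features, labels, and parameters below are synthetic stand-ins, not the authors' network or its MRI features.

```python
import numpy as np

# Generic radial basis function classifier sketch (not the authors' system):
# Gaussian units centered on prototype feature vectors, linear readout fit
# by least squares. All data below is synthetic.

rng = np.random.default_rng(0)

def rbf_design(X, centers, width):
    """Gaussian design matrix: one column per RBF center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

X = rng.normal(size=(100, 6))                        # 100 lesions, 6 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)      # synthetic labels
centers = X[rng.choice(len(X), 10, replace=False)]   # 10 prototype centers
H = rbf_design(X, centers, width=1.5)
w, *_ = np.linalg.lstsq(H, y, rcond=None)            # linear readout weights
pred = rbf_design(X, centers, 1.5) @ w > 0.5         # in-sample predictions
```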

  3. Computer-Aided Drug Design (CADD): Methodological Aspects and Practical Applications in Cancer Research

    NASA Astrophysics Data System (ADS)

    Gianti, Eleonora

    Computer-Aided Drug Design (CADD) has deservedly gained increasing popularity in modern drug discovery (Schneider, G.; Fechner, U. 2005), whether applied to academic basic research or the pharmaceutical industry pipeline. In this work, after reviewing theoretical advancements in CADD, we integrated novel and state-of-the-art methods to assist in the design of small-molecule inhibitors of current cancer drug targets, specifically: Androgen Receptor (AR), a nuclear hormone receptor required for carcinogenesis of Prostate Cancer (PCa); Signal Transducer and Activator of Transcription 5 (STAT5), implicated in PCa progression; and Epstein-Barr Nuclear Antigen-1 (EBNA1), essential to the Epstein-Barr Virus (EBV) during latent infections. Androgen Receptor. With the aim of generating binding-mode hypotheses for a class (Handratta, V.D. et al. 2005) of dual AR/CYP17 inhibitors (CYP17 is a key enzyme in androgen biosynthesis and therefore implicated in PCa development), we successfully implemented a receptor-based computational strategy built on flexible receptor docking (Gianti, E.; Zauhar, R.J. 2012). Then, with the ultimate goal of identifying novel AR binders, we performed Virtual Screening (VS) by Fragment-Based Shape Signatures, an improved version of the original method developed in our laboratory (Zauhar, R.J. et al. 2003), and we used the results to fully assess the high-level performance of this innovative tool in computational chemistry. STAT5. The SRC Homology 2 (SH2) domain of STAT5 is responsible for phospho-peptide recognition and activation. As a keystone of Structure-Based Drug Design (SBDD), we characterized the key residues responsible for binding. We also generated a model of the STAT5 receptor bound to a phospho-peptide ligand, which was validated by docking publicly known STAT5 inhibitors. Then, we performed Shape Signatures- and docking-based VS of the ZINC database (zinc.docking.org), followed by Molecular Mechanics Generalized Born Surface Area (MMGBSA

  4. Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Houck, J. A.

    1976-01-01

    A study was conducted to determine the effects of degrading a rotating-blade-element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied: reduction of the number of blades, reduction of the number of blade segments, and increase of the integration interval, which has the corresponding effect of increasing the blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single-blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations concerning model degradation are made that should serve as a guide for future users of this mathematical model; in order of minimum impact on model validity, they are: (1) reduction of the number of blade segments; (2) reduction of the number of blades; and (3) increase of the integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.
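
    The third degradation method follows from the kinematics of time stepping: each integration interval advances the blade azimuth by dpsi = Omega * dt, so a coarser interval directly coarsens the azimuthal resolution. A minimal sketch, with an assumed rotor speed and frame intervals rather than the report's values:

```python
import math

# Each integration interval advances the blade azimuth by dpsi = Omega * dt,
# so coarsening dt coarsens azimuthal resolution. The rotor speed and frame
# intervals below are assumed values, not the report's.

def azimuthal_advance_deg(rotor_rpm, dt):
    omega = rotor_rpm * 2.0 * math.pi / 60.0  # rotor speed [rad/s]
    return math.degrees(omega * dt)

for dt in (1 / 60, 1 / 30, 1 / 15):  # candidate real-time steps [s]
    print(f"dt = {dt:.4f} s -> advance {azimuthal_advance_deg(258, dt):.1f} deg")
```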

  5. Computational analysis of noise reduction devices in axial fans with stabilized finite element formulations

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Sheard, A. G.; Tezduyar, T. E.

    2012-12-01

    The paper illustrates how a computational fluid mechanics technique, based on stabilized finite element formulations, can be used in the analysis of noise reduction devices in axial fans. Among the noise control alternatives, the study focuses on the use of end-plates fitted at the blade tips to control the leakage flow and the related aeroacoustic sources. The end-plate shape is configured to govern the momentum transfer to the swirling flow at the blade tip; this flow control mechanism has been found to benefit the fan aeroacoustics. The complex physics of the swirling flow at the tip, developing under the influence of the end-plate, is governed by the rolling up of the jet-like leakage flow. The RANS modelling used in the computations is based on the streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin methods, supplemented with the DRDJ stabilization. Judicious determination of the stabilization parameters involved is also part of our computational technique and is described for each component of the stabilized formulation. We describe the flow physics underlying the design of the noise control device and illustrate its aerodynamic performance. We then investigate the numerical performance of the formulation by analysing the inner workings of the stabilization operators and their interaction with the turbulence model.
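
    For readers unfamiliar with the stabilization parameters mentioned above, the sketch below evaluates one commonly cited element-level definition of the SUPG/PSPG parameter tau, combining transient, advective, and diffusive limits. This is a representative form from the stabilized finite element literature, not necessarily the exact definition used in the paper.

```python
# One commonly cited element-level definition of the SUPG/PSPG
# stabilization parameter tau, combining transient, advective, and
# diffusive limits; representative of the literature, not necessarily
# the paper's exact definition.

def tau_supg(u_norm, h, nu, dt):
    """tau = ((2/dt)^2 + (2|u|/h)^2 + (4 nu / h^2)^2)^(-1/2)."""
    return ((2.0 / dt) ** 2
            + (2.0 * u_norm / h) ** 2
            + (4.0 * nu / h ** 2) ** 2) ** -0.5

# Assumed element values: 10 m/s velocity, 1 cm element, air viscosity.
print(tau_supg(u_norm=10.0, h=0.01, nu=1.5e-5, dt=1e-3))
```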

  6. MPSalsa a finite element computer program for reacting flow problems. Part 2 - user's guide

    SciTech Connect

    Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.

    1996-09-01

    This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed-memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, the energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and its associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

  7. FEMAX finite-element package for computing three-dimensional time-domain electromagnetic fields in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Mur, G.

    An efficient and accurate finite-element package is described for computing transient as well as time-harmonic three-dimensional electromagnetic fields in inhomogeneous media. For the expansion of the field in an inhomogeneous configuration, edge elements are used along the interfaces between media with different properties, to allow for the continuity conditions of the field across these interfaces; nodal elements are used in the remaining homogeneous subdomains. Within the domain of computation, the package decides locally which type of element to use to obtain the user-specified accuracy in modeling the field. In this way optimum results are obtained both with regard to computational efficiency and with regard to the desired accuracy. The electromagnetic compatibility relations are implemented to avoid spurious solutions.

  8. Use of SNP-arrays for ChIP assays: computational aspects.

    PubMed

    Muro, Enrique M; McCann, Jennifer A; Rudnicki, Michael A; Andrade-Navarro, Miguel A

    2009-01-01

    The simultaneous genotyping of thousands of single nucleotide polymorphisms (SNPs) in a genome using SNP-Arrays is a very important tool that is revolutionizing genetics and molecular biology. We expanded the utility of this technique by using it following chromatin immunoprecipitation (ChIP) to assess the multiple genomic locations protected by a protein complex recognized by an antibody. The power of this technique is illustrated through an analysis of the changes in histone H4 acetylation, a marker of open chromatin and transcriptionally active genomic regions, which occur during differentiation of human myoblasts into myotubes. The findings have been validated by the observation of a significant correlation between the detected histone modifications and the expression of the nearby genes, as measured by DNA expression microarrays. This chapter focuses on the computational analysis of the data. PMID:19588091

  9. Three-Dimensional Effects in Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.

    2003-01-01

    In an effort to discover the causes for disagreement between previous two-dimensional (2-D) computations and nominally 2-D experiment for flow over the three-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side-wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using three-dimensional (3-D) structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects on the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict similar trends to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum lift conditions.

  10. A study of equation solvers for linear and non-linear finite element analysis on parallel processing computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Kamat, Manohar P.

    1992-01-01

    Concurrent computing environments provide the means to achieve very high performance for finite element analysis of systems, provided the algorithms take advantage of multiple processors. The authors have examined several algorithms for both linear and nonlinear finite element analysis. The performance of these algorithms on an Alliant FX/80 parallel supercomputer has been studied. For linear analysis with a single load case, the optimal solution algorithm is strongly problem-dependent. For multiple load cases, or for nonlinear analysis via a modified Newton-Raphson method, decomposition algorithms are shown to have a decided advantage over element-by-element preconditioned conjugate gradient algorithms.
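
    A minimal sketch of the preconditioned conjugate gradient family referred to above, using a simple Jacobi (diagonal) preconditioner in place of an element-by-element one. The test matrix is an assumed small SPD system, not a finite element problem from the study.

```python
import numpy as np

# Minimal preconditioned conjugate gradient sketch with a Jacobi (diagonal)
# preconditioner standing in for the element-by-element preconditioners
# compared in the study. The test system is an assumed small SPD matrix.

def pcg(A, b, tol=1e-8, max_iter=500):
    M_inv = 1.0 / np.diag(A)  # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # small SPD test system
b = np.array([1.0, 2.0])
x = pcg(A, b)  # ~ [0.0909, 0.6364]
```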