Science.gov

Sample records for element computational aspects

  1. On current aspects of finite element computational fluid mechanics for turbulent flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1982-01-01

    A set of nonlinear partial differential equations suitable for the description of a class of turbulent three-dimensional flow fields in select geometries is identified. On the basis of the concept of enforcing a penalty constraint to ensure accurate accounting of ordering effects, a finite element numerical solution algorithm is established for the equation set, and the theoretical aspects of accuracy, convergence and stability are identified and quantified. Hypermatrix constructions are used to reduce the computational aspects of the theory to practice. The robustness of the algorithm, and of its computer program embodiment, has been verified for pertinent flow configurations.

  2. Adaptive finite elements with high aspect ratio for the computation of coalescence using a phase-field model

    NASA Astrophysics Data System (ADS)

    Burman, E.; Jacot, A.; Picasso, M.

    2004-03-01

    A multiphase-field model for the description of coalescence in a binary alloy is solved numerically using adaptive finite elements with high aspect ratio. The unknowns of the multiphase-field model are the three phase fields (solid phase 1, solid phase 2, and liquid phase), a Lagrange multiplier and the concentration field. An Euler implicit scheme is used for time discretization, together with continuous, piecewise linear finite elements. At each time step, a linear system corresponding to the three phases plus the Lagrange multiplier has to be solved. Then, the linear system pertaining to concentration is solved. An adaptive finite element algorithm is proposed. In order to reduce the number of mesh vertices, the generated meshes contain elements with high aspect ratio. The refinement and coarsening criteria are based on an error indicator which has already been justified theoretically for simpler problems. Numerical results on two test cases show the efficiency of the method.

  3. Computational aspects of seismology

    NASA Astrophysics Data System (ADS)

    Koper, Keith David

    Recent increases in computer speed and memory have opened the door to new analytical techniques in seismology. This dissertation focuses on the application of two such techniques: finite difference simulation of wave propagation in complex media, and genetic algorithm (GA) based searching for solutions to inverse problems. The first two chapters detail the use of a 3D finite difference algorithm in modeling the P- and S-wave velocity structure of the Tonga subduction zone. The large memory capacity of modern computers permits the use of a fine spatial grid, allowing for the accurate comparison of subtly varying velocity models. I compare the theoretical traveltimes with local data that were recorded by two temporary deployments of broadband land stations and ocean-bottom seismometers. The primary results from these studies are: (1) it is not possible to distinguish between equilibrium and metastable models of subduction with travel time data, and (2) the same mechanism accounts for the fast, slab velocity anomaly and the slow, backarc velocity anomaly under the Lau spreading center---both are consistent with temperature perturbations, indicating that the role of partial melt is insignificant. The third and fourth chapters concern the application of GAs to two kinds of seismological inverse problems. The relatively fast speed of present-day CPUs allows global search methods, such as GAs, to be feasible on realistic problems. In the third chapter I compare the performance of a GA-based search with those of a series of more traditional, local descent methods on the problem of inverting PKP travel times for radial, P-wave models of the Earth's core and lowermost mantle. Even though both the model parametrization and dataset are heavily smoothed, there exist significant complexities in the error landscape (due to nonlinearities in the forward calculation) that render the GA method superior. In the fourth chapter I present a variant of a traditional GA, known as a

  4. Computational aspects of multibody dynamics

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1989-01-01

    Computational aspects that impact the requirements for a next-generation software system for flexible multibody dynamics simulation are addressed, including: criteria for selecting candidate formulations, pairing of formulations with appropriate solution procedures, the need for concurrent algorithms to utilize advances in computer hardware, and provisions for allowing open-ended yet modular analysis modules.

  5. Computational aspects of dispersive computational continua for elastic heterogeneous media

    NASA Astrophysics Data System (ADS)

    Fafalis, Dimitrios; Fish, Jacob

    2015-12-01

    The present manuscript focuses on computational aspects of the dispersive computational continua (C^2) formulation previously introduced by the authors. The dispersive C^2 formulation is a multiscale approach that has been shown to produce strikingly accurate dispersion curves. However, this seemingly theoretical advantage may be inconsequential due to the tremendous computational cost involved. Unlike classical dispersive methods pioneered more than half a century ago, where the unit cell is quasi-static and provides effective mechanical and dispersive properties to the coarse-scale problem, the dispersive C^2 formulation gives rise to transient problems at all scales and for all microphases involved. An efficient block time-integration scheme is proposed that takes advantage of the fact that the transient unit cell problems are not coupled to each other, but rather to the single coarse-scale finite element they are positioned in. We show that the computational cost of the method is comparable to that of the classical dispersive methods for short load durations.
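
    The block time-integration idea lends itself to a simple sketch: because each transient unit-cell problem couples only to its host coarse-scale element, all cells can be advanced independently within a coarse time step. The Python fragment below is a hedged illustration of that structure only; the names, the lumped-mass central-difference update, and the coupling operator B are illustrative assumptions, not the authors' scheme.

        import numpy as np

        def advance_unit_cells(cells, coarse_strain, dt):
            """One coarse step: advance every unit-cell problem independently.
            cells: list of dicts with diagonal mass 'M', stiffness 'K',
            coupling operator 'B', displacement 'u', velocity 'v' (numpy arrays).
            coarse_strain[e] is the strain of the host coarse element of cell e."""
            for e, cell in enumerate(cells):
                f = cell["B"] @ coarse_strain[e]             # drive from host element only
                a = (f - cell["K"] @ cell["u"]) / cell["M"]  # lumped-mass acceleration
                cell["v"] += dt * a                          # central-difference update
                cell["u"] += dt * cell["v"]
            return cells

    Since the loop body touches no shared state across cells, the cells can be distributed across processors with communication needed only at coarse-step boundaries, which is where the claimed cost advantage comes from.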

  6. Elements of Computer Careers.

    ERIC Educational Resources Information Center

    Edwards, Judith B.; And Others

    This textbook is intended to provide students with an awareness of the possible alternatives in the computer field and with the background information necessary for them to evaluate those alternatives intelligently. Problem solving and simulated work experiences are emphasized as students are familiarized with the functions and limitations of…

  7. Finite element computational fluid mechanics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1983-01-01

    Finite element analysis as applied to the broad spectrum of computational fluid mechanics is surveyed. The finite element solution methodology is derived, developed, and applied directly to the differential equation systems governing classes of problems in fluid mechanics. The heat conduction equation is used to reveal the essence and elegance of finite element theory, including higher-order accuracy and convergence. The algorithm is then extended to the pervasive nonlinearity of the Navier-Stokes equations. A specific fluid mechanics problem class is analyzed with an even mix of theory and applications, including turbulence closure and the solution of turbulent flows.
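
    The heat conduction equation is indeed the standard vehicle for exposing the method. As a hedged textbook illustration (not Baker's algorithm), the sketch below assembles linear two-node elements for steady 1D conduction -k u'' = q on (0,1) with u(0) = u(1) = 0; for this problem the nodal values it returns are exact.

        import numpy as np

        def solve_heat_1d(n_el=8, k=1.0, q=1.0):
            """Steady 1D heat conduction with linear finite elements."""
            n, h = n_el + 1, 1.0 / n_el
            K, f = np.zeros((n, n)), np.zeros(n)
            ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
            fe = 0.5 * q * h * np.ones(2)                        # element load
            for e in range(n_el):                                # assembly loop
                dofs = [e, e + 1]
                K[np.ix_(dofs, dofs)] += ke
                f[dofs] += fe
            u = np.zeros(n)
            inner = np.arange(1, n - 1)                          # Dirichlet BCs
            u[inner] = np.linalg.solve(K[np.ix_(inner, inner)], f[inner])
            return u   # matches the exact solution q*x*(1-x)/(2*k) at the nodes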

  8. Element-topology-independent preconditioners for parallel finite element computations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alexander, Scott

    1992-01-01

    A family of preconditioners for the solution of finite element equations is presented which is element-topology independent and thus applicable to element-order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.

  9. Theoretical and computational aspects of seismic tomography

    NASA Astrophysics Data System (ADS)

    Alekseev, A. S.; Lavrentiev, M. M.; Romanov, V. G.; Romanov, M. E.

    1990-12-01

    This paper reviews aspects related to applications of seismic wave kinematics for the reconstruction of internal characteristics of an elastic medium. It presents the results of studying the inverse kinematic seismic problem and its linear analogue — problems of integral geometry, obtained in recent decades with an emphasis on the work done by Soviet scientists. Computational techniques of solving these problems are discussed. This review should be of interest to geophysicists studying the oceans, atmosphere and ionosphere as well as those studying the solid part of the Earth.

  10. Conceptual aspects of geometric quantum computation

    NASA Astrophysics Data System (ADS)

    Sjöqvist, Erik; Azimi Mousolou, Vahid; Canali, Carlo M.

    2016-07-01

    Geometric quantum computation is the idea that geometric phases can be used to implement quantum gates, i.e., the basic elements of the Boolean network that forms a quantum computer. Although originally thought to be limited to adiabatic evolution, controlled by slowly changing parameters, this form of quantum computation can also be realized at high speed by using nonadiabatic schemes. Recent advances in quantum gate technology have allowed for experimental demonstrations of different types of geometric gates in adiabatic and nonadiabatic evolution. Here, we address some conceptual issues that arise in the realizations of geometric gates. We examine the appearance of dynamical phases in quantum evolution and point out that not all dynamical phases need to be compensated for in geometric quantum computation. We delineate the relation between Abelian and non-Abelian geometric gates and find an explicit physical example where the two types of gates coincide. We identify differences and similarities between adiabatic and nonadiabatic realizations of quantum computation based on non-Abelian geometric phases.

  11. Algebraic aspects of the computably enumerable degrees.

    PubMed Central

    Slaman, T A; Soare, R I

    1995-01-01

    A set A of nonnegative integers is computably enumerable (c.e.), also called recursively enumerable (r.e.), if there is a computable method to list its elements. The class of sets B which contain the same information as A under Turing computability is the (Turing) degree of A, and a degree is c.e. if it contains a c.e. set. The extension of embedding problem for the c.e. degrees R asks, given finite partially ordered sets P ⊆ Q with least and greatest elements, whether every embedding of P into R can be extended to an embedding of Q into R. Many of the most significant theorems giving an algebraic insight into R have asserted either extension or nonextension of embeddings. We extend and unify these results and their proofs to produce complete and complementary criteria and techniques to analyze instances of extension and nonextension. We conclude that the full extension of embedding problem is decidable. PMID:11607508

  12. Computer Security: The Human Element.

    ERIC Educational Resources Information Center

    Guynes, Carl S.; Vanacek, Michael T.

    1981-01-01

    The security and effectiveness of a computer system are dependent on the personnel involved. Improved personnel and organizational procedures can significantly reduce the potential for computer fraud. (Author/MLF)

  13. Mathematical aspects of finite element methods for incompressible viscous flows

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.

    1986-01-01

    Mathematical aspects of finite element methods for incompressible viscous flows are surveyed, concentrating on the steady primitive-variable formulation. The discretization of a weak formulation of the Navier-Stokes equations is addressed; then the stability condition is considered, the satisfaction of which ensures the stability of the approximation. Specific choices of finite element spaces for the velocity and pressure are then discussed. Finally, the connection between different weak formulations and a variety of boundary conditions is explored.
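
    The stability condition referred to is the discrete inf-sup (Ladyzhenskaya-Babuska-Brezzi) condition coupling the velocity space V_h and the pressure space Q_h; stated in LaTeX for definiteness:

        \inf_{q_h \in Q_h \setminus \{0\}} \;
        \sup_{\mathbf{v}_h \in V_h \setminus \{0\}}
        \frac{\int_\Omega q_h \, \nabla \cdot \mathbf{v}_h \, dx}
             {\|\mathbf{v}_h\|_{H^1(\Omega)} \, \|q_h\|_{L^2(\Omega)}}
        \;\ge\; \beta \;>\; 0,

    with beta independent of the mesh size. Velocity/pressure pairs that violate it (for example, equal-order piecewise-linear elements for both fields) admit spurious pressure modes, which is why the choice of spaces discussed in the abstract matters.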

  14. Impact of new computing systems on finite element computations

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  15. Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Sawamiphakdi, K.

    1984-01-01

    A higher-order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included. The post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not exhibit shear locking even when its aspect ratio is increased to the order of 10 to the 8th power. Two sample problems are given to illustrate the analysis capability of the shell element.

  16. Dedicated breast computed tomography: Basic aspects

    SciTech Connect

    Sarno, Antonio; Mettivier, Giovanni Russo, Paolo

    2015-06-15

    X-ray mammography of the compressed breast is well recognized as the “gold standard” for early detection of breast cancer, but its performance is not ideal. One limitation of screening mammography is tissue superposition, particularly for dense breasts. Since 2001, several research groups in the USA and in the European Union have developed computed tomography (CT) systems with digital detector technology dedicated to x-ray imaging of the uncompressed breast (breast CT or BCT) for breast cancer screening and diagnosis. This CT technology—tracing back to initial studies in the 1970s—allows some of the limitations of mammography to be overcome, keeping the levels of radiation dose to the radiosensitive breast glandular tissue similar to those of two-view mammography for the same breast size and composition. This paper presents an evaluation of the research efforts carried out in the invention, development, and improvement of BCT with dedicated scanners with state-of-the-art technology, including initial steps toward commercialization, after more than a decade of R&D in the laboratory and/or in the clinic. The intended focus here is on the technological/engineering aspects of BCT and on outlining advantages and limitations as reported in the related literature. Prospects for future research in this field are discussed.

  17. Computational and Practical Aspects of Drug Repositioning.

    PubMed

    Oprea, Tudor I; Overington, John P

    2015-01-01

    The concept of the hypothesis-driven or observational-based expansion of the therapeutic application of drugs is very seductive. This is due to a number of factors, such as lower cost of development, higher probability of success, near-term clinical potential, patient and societal benefit, and also the ability to apply the approach to rare, orphan, and underresearched diseases. Another highly attractive aspect is that the "barrier to entry" is low, at least in comparison to a full drug discovery operation. The availability of high-performance computing and of databases of various forms has also enhanced the ability to pose reasonable and testable hypotheses for drug repurposing, rescue, and repositioning. In this article we discuss several factors that are currently underdeveloped, or could benefit from clearer definition in articles presenting such work. We propose a classification scheme, the drug repositioning evidence level (DREL), for all drug repositioning projects, according to the level of scientific evidence. DREL ranges from zero, which refers to predictions that lack any experimental support, to four, which refers to drugs approved for the new indication. We also present a set of simple concepts that can allow rapid and effective filtering of hypotheses, leading to a focus on those that are most likely to lead to practical safe applications of an existing drug. Some promising repurposing leads for malaria (DREL-1) and amoebic dysentery (DREL-2) are discussed. PMID:26241209
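
    The DREL scale orders hypotheses by evidential support, so the filtering step the authors describe amounts to thresholding on it. A minimal sketch, assuming a simple record layout; only levels 0 and 4 are defined verbatim in the abstract, so the intermediate labels below are illustrative placeholders.

        from enum import IntEnum

        class DREL(IntEnum):
            PREDICTION_ONLY = 0          # no experimental support (per the abstract)
            IN_VITRO = 1                 # illustrative intermediate label
            IN_VIVO = 2                  # illustrative intermediate label
            HUMAN_DATA = 3               # illustrative intermediate label
            APPROVED_NEW_INDICATION = 4  # approved for the new indication

        def filter_hypotheses(hypotheses, min_level=DREL.IN_VITRO):
            # keep repositioning hypotheses with at least some experimental support
            return [h for h in hypotheses if h["drel"] >= min_level]

        leads = [{"drug": "drugA", "drel": DREL.PREDICTION_ONLY},
                 {"drug": "drugB", "drel": DREL.IN_VIVO}]
        print(filter_hypotheses(leads))   # only drugB survives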

  18. Computational Aspects of Heat Transfer in Structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M. (Compiler)

    1982-01-01

    Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.

  19. Sociocultural Aspects of Computers in Education.

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    The data reported in this paper give depth to the picture of computers in society, in work, and in schools. Prices have dropped, but computer corporations sell to schools, as they do to any other customer, to increase profits for themselves. Computerizing is a vehicle for social stratification. Computers are not easy to use and are hard to…

  1. Aspects of computer vision in surgical endoscopy

    NASA Astrophysics Data System (ADS)

    Rodin, Vincent; Ayache, Alain; Berreni, N.

    1993-09-01

    This work is related to a project of medical robotics applied to surgical endoscopy, carried out in collaboration with Doctor Berreni of the Saint Roch clinic in Perpignan, France. Following Doctor Berreni's advice, two aspects of endoscopic color image processing have been singled out: (1) support for diagnosis through the automatic detection of diseased areas after a learning phase, and (2) 3D reconstruction of the examined cavity by using a zoom.

  2. [Fascioliasis hepatis--computed tomographic aspect].

    PubMed

    Goebel, N; Markwalder, K; Siegenthaler, W

    1984-12-01

    In a patient with hepatic fascioliasis (already excreting eggs in the faeces), a CT scan of the liver after i.v. contrast injection showed a relatively characteristic appearance: multiple small hypodense areas, partly in grape-bunch-like clusters and partly in a street-like arrangement leading toward the portal vein and bile duct areas. Nine months later the hypodense lesions had markedly decreased. PMID:6518725

  3. Computational aspects of Gaussian beam migration

    SciTech Connect

    Hale, D.

    1992-08-01

    The computational efficiency of Gaussian beam migration depends on the solution of two problems: (1) computation of complex-valued beam times and amplitudes in Cartesian (x,z) coordinates, and (2) limiting computations to only those (x,z) coordinates within a region where beam amplitudes are significant. The first problem can be reduced to a particular instance of a class of closest-point problems in computational geometry, for which efficient solutions, such as the Delaunay triangulation, are well known. Delaunay triangulation of sampled points along a ray enables the efficient location of that point on the raypath that is closest to any point (x,z) at which beam times and amplitudes are required. Although Delaunay triangulation provides an efficient solution to this closest-point problem, a simpler solution, also presented in this paper, may be sufficient and more easily extended for use in 3-D Gaussian beam migration. The second problem is easily solved by decomposing the subsurface image into a coarse grid of square cells. Within each cell, simple and efficient loops over (x,z) coordinates may be used. Because the region in which beam amplitudes are significant may be difficult to represent with simple loops over (x,z) coordinates, I use recursion to move from cell to cell, until the entire region defined by the beam has been covered. Benchmark tests of a computer program implementing these solutions suggest that the cost of Gaussian beam migration is comparable to that of migration via explicit depth extrapolation in the frequency-space domain. For the data sizes and computer programs tested here, the explicit method was faster. However, as data size was increased, the computation time for Gaussian beam migration grew more slowly than that for the explicit method.
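
    For the first problem, the "simpler solution" amounts to a fast nearest-neighbour query against the sampled raypath. A hedged stand-in (not Hale's construction): a KD-tree over the ray samples answers the same closest-point query that the Delaunay triangulation supports, and generalizes directly to 3-D.

        import numpy as np
        from scipy.spatial import cKDTree

        # Points sampled along one ray (straight here purely for illustration).
        s = np.linspace(0.0, 1.0, 201)
        ray_xz = np.column_stack([2000.0 * s, 1500.0 * s])   # (x, z) in metres
        tree = cKDTree(ray_xz)

        # Image points where complex beam times/amplitudes are required.
        image_xz = np.array([[500.0, 400.0], [1200.0, 900.0]])
        dist, idx = tree.query(image_xz)   # nearest raypath sample per point
        # beam time and amplitude would then be extrapolated from ray_xz[idx]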

  4. Central control element expands computer capability

    NASA Technical Reports Server (NTRS)

    Easton, R. A.

    1975-01-01

    Redundant-processing and multiprocessing modes can be obtained from one computer by using a logic configuration. The configuration serves as a central control element that can automatically alternate between a high-capacity multiprocessing mode and a high-reliability redundant mode, using dynamic mode switching in real time.

  5. Analytical and Computational Aspects of Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Bilevel problem formulations have received considerable attention as an approach to multidisciplinary optimization in engineering. We examine the analytical and computational properties of one such approach, collaborative optimization. The resulting system-level optimization problems suffer from inherent computational difficulties due to the bilevel nature of the method. Most notably, it is impossible to characterize and hence identify solutions of the system-level problems because the standard first-order conditions for solutions of constrained optimization problems do not hold. The analytical features of the system-level problem make it difficult to apply conventional nonlinear programming algorithms. Simple examples illustrate the analysis and the algorithmic consequences for optimization methods. We conclude with additional observations on the practical implications of the analytical and computational properties of collaborative optimization.

  6. Some Theoretical Aspects for Elastic Wave Modeling in a Recently Developed Spectral Element Method

    NASA Astrophysics Data System (ADS)

    Wang, X. M.; Seriani, G.; Lin, W. J.

    2006-10-01

    A spectral element method has been recently developed for solving elastodynamic problems. The numerical solutions are obtained by using the weak formulation of the elastodynamic equation for heterogeneous media and by the Galerkin approach applied to a partition, in small subdomains, of the original physical domain under investigation. In the present work some mathematical aspects of the method and of the associated algorithm implementation are systematically investigated. Two kinds of orthogonal basis functions, constructed with Legendre and Chebyshev polynomials, and their related Gauss-Lobatto collocation points, used in reference element quadrature, are introduced. The related analytical integration formulas are obtained. The standard error estimates and expansion convergence are discussed. In order to improve the computational accuracy and efficiency, an element-by-element preconditioned conjugate gradient linear solver in the space domain and a staggered predictor/multi-corrector algorithm in the time integration are used for strongly heterogeneous elastic media. As a consequence, neither the global matrices nor the effective force vector is assembled. When analytical formulas are used for the element quadrature, there is no need even to form the element matrices, which further saves memory without losing much computational efficiency. The element-by-element algorithm uses an optimal tensor product scheme which makes spectral element methods much more efficient than finite-element methods from the point of view of both memory storage and computational time requirements. This work is divided into two parts. The second part, reported separately, will give the algorithm implementation, numerical accuracy and efficiency analyses, and a comparison of the proposed spectral element method with a conventional finite-element method and a staggered pseudo-spectral method.

  7. Finite element computation with parallel VLSI

    NASA Technical Reports Server (NTRS)

    Mcgregor, J.; Salama, M.

    1983-01-01

    This paper describes a parallel processing computer consisting of a 16-bit microcomputer as a master processor which controls and coordinates the activities of 8086/8087 VLSI chip set slave processors working in parallel. The hardware is inexpensive and can be flexibly configured and programmed to perform various functions. This makes it a useful research tool for the development of, and experimentation with, parallel mathematical algorithms. Application of the hardware to computational tasks involved in the finite element analysis method is demonstrated by the generation and assembly of beam finite element stiffness matrices. A number of possible schemes for the implementation of N elements on N or n processors (N greater than n) are described, and the speedup factors of their time consumption are determined as a function of the number of available parallel processors.
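
    The demonstration task, generating and assembling beam element stiffness matrices, is compact enough to sketch. The serial Python below is a stand-in for the paper's N-elements-on-n-processors schemes (where each element matrix would be generated by a slave processor) and uses the standard 4x4 Euler-Bernoulli bending stiffness.

        import numpy as np

        def beam_ke(EI, L):
            """4x4 Euler-Bernoulli element stiffness; dofs (w1, th1, w2, th2)."""
            return (EI / L**3) * np.array([
                [ 12.0,    6*L,  -12.0,    6*L],
                [  6*L, 4*L**2,   -6*L, 2*L**2],
                [-12.0,   -6*L,   12.0,   -6*L],
                [  6*L, 2*L**2,   -6*L, 4*L**2]])

        def assemble_beam(n_el, EI=1.0, length=1.0):
            """Assemble the global stiffness from n_el equal elements; in the
            paper's setting the per-element beam_ke calls run in parallel."""
            n_dof = 2 * (n_el + 1)
            K = np.zeros((n_dof, n_dof))
            Le = length / n_el
            for e in range(n_el):
                dofs = np.arange(2 * e, 2 * e + 4)   # the element's global dofs
                K[np.ix_(dofs, dofs)] += beam_ke(EI, Le)
            return K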

  8. GOCE Satellite Orbit in a Computational Aspect

    NASA Astrophysics Data System (ADS)

    Bobojc, Andrzej; Drozyner, Andrzej

    2013-04-01

    The presented work plays an important role in research on the possibility of improving the orbit of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) satellite using a combination of high-low satellite-to-satellite tracking (SST-hl) observations and gravity gradient tensor (GGT) measurements. The orbit improvement process will be started from a computed orbit, which should be as close as possible to a reference ("true") orbit. To realize this objective, various variants of the GOCE orbit were generated by means of the Torun Orbit Processor (TOP) software package. The TOP software is based on the Cowell 8th-order numerical integration method. This package computes a satellite orbit in the field of gravitational and non-gravitational forces (including the relativistic and empirical accelerations). Three sets of 1-day orbital arcs were computed using selected geopotential models and additional accelerations generated by the Moon, the Sun, the planets, the Earth and ocean tides, and relativistic effects. Selected gravity field models include, among other things, the recent models from the GOCE mission and models such as EIGEN-6S, EIGEN-5S, EIGEN-51C, ITG-GRACE2010S, EGM2008 and EGM96. Each set of 1-day orbital arcs corresponds to the GOCE orbit for an arbitrarily chosen date. The obtained orbits were compared to the GOCE reference orbits (Precise Science Orbits of the GOCE satellite delivered by the European Space Agency) using the root mean square (RMS) of the differences between the satellite positions in the computed orbits and in the reference ones. These RMS values are a measure of the performance of selected geopotential models in terms of GOCE orbit computation. The RMS values are given for the truncated and whole geopotential models. For the three variants with the best fit to the reference orbits, empirical acceleration models were added to the satellite motion model. It allowed for further improving the fitting of computed orbits to the
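
    The quoted fit measure is simple to make concrete. A minimal sketch, assuming both orbits are given as epoch-wise Cartesian positions; the abstract does not specify whether the RMS is taken per component or over the 3-D position difference, so the 3-D distance variant is shown.

        import numpy as np

        def orbit_rms(computed_xyz, reference_xyz):
            """RMS of 3-D position differences; arrays shaped (n_epochs, 3)."""
            d = np.asarray(computed_xyz) - np.asarray(reference_xyz)
            return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))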

  9. Plane Smoothers for Multiblock Grids: Computational Aspects

    NASA Technical Reports Server (NTRS)

    Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane

    1999-01-01

    Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.
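
    In two dimensions the building block is line relaxation: all unknowns along a grid line are updated simultaneously by a tridiagonal solve, which is what makes the smoother robust to anisotropy. A hedged 2-D illustration of one such sweep follows; the report's 3-D plane smoother stacks solves of this kind within each plane, and the names and discretization below are illustrative.

        import numpy as np
        from scipy.linalg import solve_banded

        def xline_gs_sweep(u, f, eps, h):
            """One x-line Gauss-Seidel sweep for -eps*u_xx - u_yy = f on a uniform
            grid, with Dirichlet zeros on the boundary ring of u (shape ny-by-nx).
            Strong coupling in y would call for y-line relaxation instead; hence
            alternating directions."""
            ny, nx = u.shape
            n = nx - 2
            ab = np.zeros((3, n))                 # banded rows: super, diag, sub
            ab[0, 1:] = -eps / h**2
            ab[1, :] = 2.0 * (eps + 1.0) / h**2
            ab[2, :-1] = -eps / h**2
            for j in range(1, ny - 1):            # sweep the interior lines in order
                rhs = f[j, 1:-1] + (u[j - 1, 1:-1] + u[j + 1, 1:-1]) / h**2
                u[j, 1:-1] = solve_banded((1, 1), ab, rhs)
            return u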

  10. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  11. Benchmarking: More Aspects of High Performance Computing

    SciTech Connect

    Rahul Ravindrudu

    2004-12-19

    pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the algorithms are BLAS routines which assume all the data to be in memory. This is the reason the out-of-core results and the OpenMP thread results were presented separately, and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O involved for the out-of-core part and better cache utilization for the thread-based computation.

  12. Computational Aspects of N-Mixture Models

    PubMed Central

    Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S

    2015-01-01

    The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
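
    The role of the bound K is easy to see in code. Below is a minimal sketch of the truncated N-mixture site likelihood under Poisson mixing (the standard Royle-type formula; variable names are illustrative): the latent abundance N is summed from the largest observed count up to K, so a K set too low silently truncates probability mass and biases abundance estimates downward.

        import numpy as np
        from scipy.stats import poisson, binom

        def site_loglik(counts, lam, p, K):
            """log of sum_{N=max(y)}^{K} Pois(N; lam) * prod_t Bin(y_t; N, p)."""
            counts = np.asarray(counts)
            Ns = np.arange(counts.max(), K + 1)
            terms = poisson.pmf(Ns, lam)
            for y in counts:
                terms = terms * binom.pmf(y, Ns, p)
            return float(np.log(terms.sum()))

        # crude automatic choice in the paper's spirit: grow K until stable
        print(site_loglik([3, 5, 2], lam=8.0, p=0.4, K=50))
        print(site_loglik([3, 5, 2], lam=8.0, p=0.4, K=200))   # should agree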

  13. Physical aspects of computing the flow of a viscous fluid

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1984-01-01

    One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.

  14. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding; Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

  15. On Undecidability Aspects of Resilient Computations and Implications to Exascale

    SciTech Connect

    Rao, Nageswara S

    2014-01-01

    Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.
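
    The obstruction the paper leans on can be shown in miniature: bounded simulation semi-decides halting in one direction only. The sketch below is illustrative (not from the paper) and makes the asymmetry explicit; it is exactly the gap that reductions from loop detection and the halting problem exploit.

        def runs_to_completion(step, state, budget=10**6):
            """Simulate a computation given as step(state) -> next state or None.
            Returning True is conclusive: it halted within the budget. Returning
            False proves nothing; a fault that drives the computation into an
            unbounded orbit can never be ruled out by any finite budget, and no
            general algorithm can close that gap."""
            for _ in range(budget):
                state = step(state)
                if state is None:        # computation reached a halting state
                    return True
            return False                 # unknown: looping, or merely slow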

  16. Vitamins and trace elements: practical aspects of supplementation.

    PubMed

    Berger, Mette M; Shenkin, Alan

    2006-09-01

    The role of micronutrients in parenteral nutrition include the following: (1) Whenever artificial nutrition is indicated, micronutrients, i.e., vitamins and trace elements, should be given from the first day of artificial nutritional support. (2) Testing blood levels of vitamins and trace elements in acutely ill patients is of very limited value. By using sensible clinical judgment, it is possible to manage patients with only a small amount of laboratory testing. (3) Patients with major burns or major trauma and those with acute renal failure who are on continuous renal replacement therapy or dialysis quickly develop acute deficits in some micronutrients, and immediate supplementation is essential. (4) Other groups at risk are cancer patients, but also pregnant women with hyperemesis and people with anorexia nervosa or other malnutrition or malabsorption states. (5) Clinicians need to treat severe deficits before they become clinical deficiencies. If a patient develops a micronutrient deficiency state while in care, then there has been a severe failure of care. (6) In the early acute phase of recovery from critical illness, where artificial nutrition is generally not indicated, there may still be a need to deliver micronutrients to specific categories of very sick patients. (7) Ideally, trace element preparations should provide a low-manganese product for all and a manganese-free product for certain patients with liver disease. (8) High losses through excretion should be minimized by infusing micronutrients slowly, over as long a period as possible. To avoid interactions, it would be ideal to infuse trace elements and vitamins separately: the trace elements over an initial 12-h period and the vitamins over the next 12-h period. (9) Multivitamin and trace element preparations suitable for most patients requiring parenteral nutrition are widely available, but individual patients may require additional supplements or smaller amounts of certain micronutrients

  17. Computational Aspects of Data Assimilation and the ESMF

    NASA Technical Reports Server (NTRS)

    daSilva, A.

    2003-01-01

    The scientific challenge of developing advanced data assimilation applications is a daunting task. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.

  18. Critical Elements of Computer Literacy for Teachers.

    ERIC Educational Resources Information Center

    Overbaugh, Richard C.

    A definition of computer literacy is developed that is broad enough to apply to educators in general, but which leaves room for specificity for particular situations and content areas. The following general domains that comprise computer literacy for all educators are addressed: (1) general computer operations; (2) software, including computer…

  19. Element-by-element and implicit-explicit finite element formulations for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.

    1988-01-01

    Preconditioner algorithms to reduce the computational effort in FEM analyses of large-scale fluid-dynamics problems are presented. A general model problem is constructed on the basis of the convection-diffusion equation and the two-dimensional vorticity/stream-function formulation of the Navier-Stokes equations; this problem is then analyzed using element-by-element, implicit-explicit, and adaptive implicit-explicit approximation schemes. Numerical results for the two-dimensional advection and rigid-body rotation of a cosine hill, flow past a circular cylinder, and driven cavity flow are presented in extensive graphs and shown to be in good agreement with those obtained using implicit methods.
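
    The common kernel behind element-by-element (EBE) schemes is that the global matrix is never assembled: a matrix-vector product is accumulated from element contributions, which is also the structure an EBE preconditioner manipulates. A minimal sketch under that assumption, with illustrative names and dense element matrices:

        import numpy as np

        def ebe_matvec(element_mats, connectivity, x):
            """y = A @ x accumulated element by element; A itself never exists.
            element_mats[e] is the dense matrix of element e, and connectivity[e]
            lists the global dof indices it touches."""
            y = np.zeros_like(x)
            for ke, dofs in zip(element_mats, connectivity):
                y[dofs] += ke @ x[dofs]
            return y

    Wrapped in, for example, scipy.sparse.linalg.LinearOperator, this product drives a matrix-free conjugate-gradient or GMRES solve; an implicit-explicit splitting simply routes some elements through an explicit update instead.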

  1. Aspects of the major element composition of Halley's dust

    NASA Astrophysics Data System (ADS)

    Jessberger, E. K.; Christoforidis, A.; Kissel, J.

    1988-04-01

    Further attempts to extract chemical information on the solid dust particles of Comet Halley from impact-ionization time-of-flight mass spectrometry are described. Results on average compositions, element groupings, CHON particles, and silicates are discussed. Halley's dust in the vicinity of the Vega-1 spacecraft is found to be a mixture of a refractory organic component and unequilibrated silicates, but detailed chemical information on individual particles is difficult to extract because of the complexity of the impact-ionization process.

  2. Control aspects of quantum computing using pure and mixed states

    PubMed Central

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J.

    2012-01-01

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. Beyond that, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems. PMID:22946034

  3. Computers in the Library: The Human Element.

    ERIC Educational Resources Information Center

    Magrath, Lynn L.

    1982-01-01

    Discusses library staff and public reaction to the computerization of library operations at the Pikes Peak Library District in Colorado Springs. An outline of computer applications implemented since the inception of the program in 1975 is included. (EJS)

  4. Cohesive surface model for fracture based on a two-scale formulation: computational implementation aspects

    NASA Astrophysics Data System (ADS)

    Toro, S.; Sánchez, P. J.; Podestá, J. M.; Blanco, P. J.; Huespe, A. E.; Feijóo, R. A.

    2016-07-01

    The paper describes the computational aspects and numerical implementation of a two-scale cohesive surface methodology developed for analyzing fracture in heterogeneous materials with complex micro-structures. This approach can be categorized as a semi-concurrent model using the representative volume element concept. A variational multi-scale formulation of the methodology has been previously presented by the authors. Subsequently, the formulation has been generalized and improved in two aspects: (i) cohesive surfaces have been introduced at both scales of analysis, modeled with strong-discontinuity kinematics (new equations describing the insertion of the macro-scale strains into the micro-scale, and the subsequent homogenization procedure, have been considered); (ii) the computational procedure and numerical implementation have been adapted for this formulation. The first point has been presented elsewhere and is summarized here. Instead, the main objective of this paper is to address a rather detailed presentation of the second point. Finite element techniques for modeling cohesive surfaces at both scales of analysis (FE^2 approach) are described: (i) finite elements with embedded strong discontinuities are used for the macro-scale simulation, and (ii) continuum-type finite elements with high aspect ratios, mimicking cohesive surfaces, are adopted for simulating the failure mechanisms at the micro-scale. The methodology is validated through numerical simulation of a quasi-brittle concrete fracture problem. The proposed multi-scale model is capable of unveiling the mechanisms that lead from the material degradation phenomenon at the meso-structural level to the activation and propagation of cohesive surfaces at the structural scale.

  5. Computational aspects in high intensity ultrasonic surgery planning.

    PubMed

    Pulkkinen, A; Hynynen, K

    2010-01-01

    Therapeutic ultrasound treatment planning is discussed and computational aspects regarding it are reviewed. Nonlinear ultrasound simulations were solved with a combined frequency-domain Rayleigh and KZK model. Ultrasonic simulations were combined with thermal simulations and were used to compute heating of muscle tissue in vivo for four different focused ultrasound transducers. The simulations were compared with measurements and good agreement was found for large F-number transducers. However, at F# 1.9 the simulated rate of temperature rise was approximately a factor of 2 higher than the measured ones. The power levels used with the F# 1 transducer were too low to show any nonlinearity. The simulations were used to investigate the importance of nonlinearities generated in the coupling water, and also the importance of including skin in the simulations. Ignoring either of these in the model would lead to larger errors. Most notably, the nonlinearities generated in the water can enhance the focal temperature by more than 100%. The simulations also demonstrated that pulsed high-power sonications may provide an opportunity to significantly (up to a factor of 3) reduce the treatment time. In conclusion, nonlinear propagation can play an important role in shaping the energy distribution during a focused ultrasound treatment and it should not be ignored in planning. However, the current simulation methods are accurate only with relatively large F-numbers and better models need to be developed for sharply focused transducers. PMID:19740625

  6. Higher-Order Finite Elements for Computing Thermal Radiation

    NASA Technical Reports Server (NTRS)

    Gould, Dana C.

    2004-01-01

    Two variants of the finite-element method have been developed for use in computational simulations of radiative transfers of heat among diffuse gray surfaces. Both variants involve the use of higher-order finite elements, across which temperatures and radiative quantities are assumed to vary according to certain approximations. In this and other applications, higher-order finite elements are used to increase (relative to classical finite elements, which are assumed to be isothermal) the accuracies of final numerical results without having to refine computational meshes excessively and thereby incur excessive computation times. One of the variants is termed the radiation sub-element (RSE) method, which, itself, is subject to a number of variations. This is the simplest and most straightforward approach to representation of spatially variable surface radiation. Any computer code that, heretofore, could model surface-to-surface radiation can incorporate the RSE method without major modifications. In the basic form of the RSE method, each finite element selected for use in computing radiative heat transfer is considered to be a parent element and is divided into sub-elements for the purpose of solving the surface-to-surface radiation-exchange problem. The sub-elements are then treated as classical finite elements; that is, they are assumed to be isothermal, and their view factors and absorbed heat fluxes are calculated accordingly. The heat fluxes absorbed by the sub-elements are then transferred back to the parent element to obtain a radiative heat flux that varies spatially across the parent element. Variants of the RSE method involve the use of polynomials to interpolate and/or extrapolate to approximate spatial variations of physical quantities. The other variant of the finite-element method is termed the integration method (IM). Unlike in the RSE methods, the parent finite elements are not subdivided into smaller elements, and neither isothermality nor other

  7. Adaptive Finite-Element Computation In Fracture Mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1995-01-01

    The report discusses recent progress in the use of solution-adaptive finite-element computational methods to solve two-dimensional problems in linear elastic fracture mechanics. The method is also shown to be extensible to three-dimensional problems.

  8. Optically intraconnected computer employing dynamically reconfigurable holographic optical element

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A. (Inventor)

    1992-01-01

    An optically intraconnected computer and a reconfigurable holographic optical element employed therein. The basic computer comprises a memory for holding a sequence of instructions to be executed; logic for accessing the instructions in sequence; logic for determining, for each instruction, the function to be performed and the effective address thereof; a plurality of individual elements on a common support substrate optimized to perform certain logical sequences employed in executing the instructions; and element selection logic, connected to the function-determining logic, for determining the class of each function and for causing each instruction to be executed by those elements whose associated logical sequences affect the instruction execution in an optimum manner. In the optically intraconnected version, the element selection logic is adapted for transmitting and switching signals to the elements optically.

  9. Algorithms for computer detection of symmetry elements in molecular systems.

    PubMed

    Beruski, Otávio; Vidal, Luciano N

    2014-02-01

    Simple procedures for the location of proper and improper rotations and reflection planes are presented. The search is performed with a molecule divided into subsets of symmetrically equivalent atoms (SEA), which are analyzed separately as if they were a single molecule. This approach is advantageous in many aspects. For instance, in those molecules that are symmetric rotors, the number of atoms and the inertia tensor of the SEA provide a straightforward way to find proper rotations of any order. The algorithms are invariant to the molecular orientation and their computational cost is low, because the main information required to find symmetry elements is interatomic distances and the principal moments of the SEA. For example, our Fortran implementation, running on a single processor, took only a few seconds to locate all 120 symmetry operations of the large and highly symmetrical fullerene C720, belonging to the Ih point group. Finally, we show how the interatomic distance matrix of a slightly unsymmetrical molecule is used to symmetrize its geometry. PMID:24403016
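
    The two ingredients named in the abstract, interatomic distances and SEA principal moments, are easy to prototype. A hedged sketch follows; the grouping fingerprint and tolerance handling are illustrative choices, not the authors' algorithm (their implementation is Fortran).

        import numpy as np

        def equivalent_atom_sets(coords, symbols, tol=1e-3):
            """Group atoms into SEA by element plus sorted-distance fingerprint."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            groups = {}
            for i, (sym, row) in enumerate(zip(symbols, d)):
                key = (sym, tuple((np.sort(row) / tol).round().astype(int)))
                groups.setdefault(key, []).append(i)
            return list(groups.values())

        def principal_moments(coords, masses):
            """Principal inertia moments of one SEA; their degeneracy pattern
            (together with the SEA's atom count) screens for candidate C_n axes."""
            r = coords - np.average(coords, axis=0, weights=masses)
            S = np.einsum("k,ki,kj->ij", masses, r, r)
            inertia = np.trace(S) * np.eye(3) - S
            return np.linalg.eigvalsh(inertia)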

  10. A computer graphics program for general finite element analyses

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Sawyer, L. M.

    1978-01-01

    Documentation for a computer graphics program for displays from general finite element analyses is presented. A general description of display options and detailed user instructions are given. Several plots made in structural, thermal and fluid finite element analyses are included to illustrate program options. Sample data files are given to illustrate use of the program.

  11. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method on two-dimensional linear elastic fracture mechanics problems are presented. The focus is on the basic issue of validating the application of the adaptive finite element methodology to fracture mechanics by computing demonstration problems and comparing the resulting stress intensity factors with analytical results.

  12. Secular perturbation theory and computation of asteroid proper elements

    NASA Technical Reports Server (NTRS)

    Milani, Andrea; Knezevic, Zoran

    1991-01-01

    A new theory for the calculation of proper elements is presented. This theory defines an explicit algorithm applicable to any chosen set of orbits and accounts for the effect of shallow resonances on secular frequencies. The proper elements are computed with an iterative algorithm and the behavior of the iteration can be used to define a quality code.

  13. Computational aspects of steel fracturing pertinent to naval requirements.

    PubMed

    Matic, Peter; Geltmacher, Andrew; Rath, Bhakta

    2015-03-28

    Modern high strength and ductile steels are a key element of US Navy ship structural technology. The development of these alloys spurred the development of modern structural integrity analysis methods over the past 70 years. Strength and ductility provided the designers and builders of navy surface ships and submarines with the opportunity to reduce ship structural weight, increase hull stiffness, increase damage resistance, improve construction practices and reduce maintenance costs. This paper reviews how analytical and computational tools, driving simulation methods and experimental techniques, were developed to provide ongoing insights into the material, damage and fracture characteristics of these alloys. The need to understand alloy fracture mechanics provided unique motivations to measure and model performance from structural to microstructural scales. This was done while accounting for the highly nonlinear behaviours of both materials and underlying fracture processes. Theoretical methods, data acquisition strategies, computational simulation and scientific imaging were applied to increasingly smaller scales and complex materials phenomena under deformation. Knowledge gained about fracture resistance was used to meet minimum fracture initiation, crack growth and crack arrest characteristics as part of overall structural integrity considerations. PMID:25713445

  14. The finite element machine - An assessment of the impact of parallel computing on future finite element computations

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.

    1986-01-01

    The requirements of complex aerospace vehicles, combined with the age of existing structural analysis systems, enhance the need to advance technology toward a new generation of structural analysis capability. Recent and impending advances in parallel and supercomputers provide the opportunity to significantly improve these structural analysis capabilities for large-order finite element problems. Long-term research in parallel computing, associated with the NASA Finite Element Machine project, is discussed. The results show the potential of parallel computers to provide substantial increases in computation speed over sequential computers. Results are given for sample problems in the areas of eigenvalue analysis and transient response.

  15. The Impact of Instructional Elements in Computer-Based Instruction

    ERIC Educational Resources Information Center

    Martin, Florence; Klein, James D.; Sullivan, Howard

    2007-01-01

    This study investigated the effects of several elements of instruction (objectives, information, practice, examples and review) when they were combined in a systematic manner. College students enrolled in a computer literacy course used one of six different versions of a computer-based lesson delivered on the web to learn about input, processing,…

  16. Acceleration of matrix element computations for precision measurements

    SciTech Connect

    Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; Ye, Zhenyu

    2014-11-25

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.
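
    The low-discrepancy-sequence idea is easy to reproduce in miniature. The following sketch assumes scipy's quasi-Monte Carlo module and a toy separable integrand standing in for a matrix-element weight; it compares a scrambled Sobol estimate with a plain pseudorandom one against a known analytic value.

```python
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # stand-in for a matrix-element weight over phase-space variables in [0,1]^d
    return np.prod(np.sin(np.pi * x), axis=1)

d, n = 4, 1 << 14                                  # dimension, points (power of 2)
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
quasi = integrand(sobol.random(n)).mean()          # quasi-Monte Carlo estimate
rng = np.random.default_rng(0)
pseudo = integrand(rng.random((n, d))).mean()      # plain Monte Carlo estimate
exact = (2 / np.pi) ** d                           # analytic value for comparison
print(f"QMC {quasi:.6f}  MC {pseudo:.6f}  exact {exact:.6f}")
```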

  17. Introducing the Practical Aspects of Computational Chemistry to Undergraduate Chemistry Students

    ERIC Educational Resources Information Center

    Pearson, Jason K.

    2007-01-01

    Various efforts are being made to introduce the different physical aspects and uses of computational chemistry to undergraduate chemistry students. A new laboratory approach that demonstrates all such aspects via experiments has been devised for this purpose.

  18. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).

  19. Finite element dynamic analysis on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lambiotte, J. J., Jr.

    1978-01-01

    Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.

  20. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. First, the global coefficient matrix is formed concurrently. Then, a parallel Gaussian elimination scheme is applied to solve the resulting system of equations. Finally, and most importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.

  1. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.

  2. Some aspects of the computer simulation of conduction heat transfer and phase change processes

    SciTech Connect

    Solomon, A. D.

    1982-04-01

    Various aspects of phase change processes in materials are discussed, including computer modeling, validation of results, and sensitivity. In addition, the possible incorporation of cognitive activities in computational heat transfer is examined.

  3. Rad-hard computer elements for space applications

    NASA Technical Reports Server (NTRS)

    Krishnan, G. S.; Longerot, Carl D.; Treece, R. Keith

    1993-01-01

    Space Hardened CMOS computer elements emulating a commercial microcontroller and microprocessor family have been designed, fabricated, qualified, and delivered for a variety of space programs including NASA's multiple launch International Solar-Terrestrial Physics (ISTP) program, Mars Observer, and government and commercial communication satellites. Design techniques and radiation performance of the 1.25 micron feature size products are described.

  4. On the effects of grid ill-conditioning in three dimensional finite element vector potential magnetostatic field computations

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1990-01-01

    The effects of finite element grid geometries and associated ill-conditioning were studied in single-medium and multi-media (air-iron) three-dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single-medium applications the unconstrained magnetic vector potential curl-curl formulation, in conjunction with first-order finite elements, produces global results which are almost totally insensitive to grid geometries. However, it was found that in multi-media (air-iron) applications first-order finite element results are sensitive to grid geometries and the consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by using second-order finite elements in the field computation algorithms. Practical examples are given in this paper to demonstrate the aspects mentioned above.

  5. Modeling of rolling element bearing mechanics. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Greenhill, Lyn M.; Merchant, David H.

    1994-01-01

    This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code, which determines the quasistatic response to external loads or displacements of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It comprises two main programs: the Preprocessor for Bearing Analysis (PREBAN), which creates the input files for the main analysis program, and the Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA) program, the main analysis program. This report addresses input instructions for and features of the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  6. A computational study of nodal-based tetrahedral element behavior.

    SciTech Connect

    Gullerud, Arne S.

    2010-09-01

    This report explores the behavior of nodal-based tetrahedral elements on six sample problems, and compares their solution to that of a corresponding hexahedral mesh. The problems demonstrate that while certain aspects of the solution field for the nodal-based tetrahedrons provide good quality results, the pressure field tends to be of poor quality. Results appear to be strongly affected by the connectivity of the tetrahedral elements. Simulations that rely on the pressure field, such as those which use material models that are dependent on the pressure (e.g. equation-of-state models), can generate erroneous results. Remeshing can also be strongly affected by these issues. The nodal-based test elements as they currently stand need to be used with caution to ensure that their numerical deficiencies do not adversely affect critical values of interest.

  7. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.

  8. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, thesis/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, thesis/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

  9. A stochastic method for computing hadronic matrix elements

    DOE PAGESBeta

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Drach, Vincent; Jansen, Karl; Hadjiyiannakou, Kyriakos; Renner, Dru B.

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method, offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  10. Computational design aspects of a NASP nozzle/afterbody experiment

    NASA Technical Reports Server (NTRS)

    Ruffin, Stephen M.; Venkatapathy, Ethiraj; Keener, Earl R.; Nagaraj, N.

    1989-01-01

    This paper highlights the influence of computational methods on design of a wind tunnel experiment which generically models the nozzle/afterbody flow field of the proposed National Aerospace Plane. The rectangular slot nozzle plume flow field is computed using a three-dimensional, upwind, implicit Navier-Stokes solver. Freestream Mach numbers of 5.3, 7.3, and 10 are investigated. Two-dimensional parametric studies of various Mach numbers, pressure ratios, and ramp angles are used to help determine model loads and afterbody ramp angle and length. It was found that the center of pressure on the ramp occurs at nearly the same location for all ramp angles and test conditions computed. Also, to prevent air liquefaction, it is suggested that a helium-air mixture be used as the jet gas for the highest Mach number test case.

  11. Implicit extrapolation methods for multilevel finite element computations

    SciTech Connect

    Jung, M.; Ruede, U.

    1994-12-31

    The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that it differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore requires neither uniform meshes nor global regularity assumptions. In the paper the authors analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results are compared to higher-order finite element solutions.

  12. Some Aspects of uncertainty in computational fluid dynamics results

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1991-01-01

    Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainties are discussed, and some recommendations are made for the quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
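
    A minimal sketch of the sensitivity-based approach the abstract recommends, assuming a toy drag function standing in for a CFD response: central-difference sensitivities are combined with input uncertainties in first-order, root-sum-square fashion.

```python
import numpy as np

def sensitivity_uncertainty(f, x0, u_x, rel_step=1e-4):
    """First-order output uncertainty u_y from finite-difference sensitivities."""
    x0, u_x = np.asarray(x0, float), np.asarray(u_x, float)
    grad = np.empty_like(x0)
    for i in range(x0.size):
        h = rel_step * max(abs(x0[i]), 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2 * h)       # central-difference sensitivity
    return float(np.sqrt(np.sum((grad * u_x) ** 2)))

# hypothetical CFD response: drag as a function of (Mach number, angle of attack)
drag = lambda x: 0.02 + 0.1 * x[0] ** 2 + 0.05 * x[1]
print(sensitivity_uncertainty(drag, x0=[0.8, 2.0], u_x=[0.01, 0.1]))
```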

  13. Huber's M-estimation in relative GPS positioning: computational aspects

    NASA Astrophysics Data System (ADS)

    Chang, X.-W.; Guo, Y.

    2005-08-01

    When GPS signal measurements have outliers, using least squares (LS) estimation is likely to give poor position estimates. One of the typical approaches to handle this problem is to use robust estimation techniques. We study the computational issues of Huber’s M-estimation applied to relative GPS positioning. First for code-based relative positioning, we use simulation results to show that Newton’s method usually converges faster than the iteratively reweighted least squares (IRLS) method, which is often used in geodesy for computing robust estimates of parameters. Then for code- and carrier-phase-based relative positioning, we present a recursive modified Newton method to compute Huber’s M-estimates of the positions. The structures of the model are exploited to make the method efficient, and orthogonal transformations are used to ensure numerical reliability of the method. Economical use of computer memory is also taken into account in designing the method. Simulation results show that the method is effective.
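
    For reference, the IRLS scheme mentioned above can be sketched in a few lines; the abstract's point is that Newton's method typically converges faster, but IRLS remains the common geodesy baseline. The design matrix, noise levels, and tuning constant below are illustrative assumptions, not the paper's GPS model.

```python
import numpy as np

def huber_irls(A, y, k=1.345, tol=1e-8, max_iter=50):
    """Huber M-estimate of x in y ~ A x via iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # ordinary LS starting point
    for _ in range(max_iter):
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        w = np.where(np.abs(r) <= k * s, 1.0, k * s / np.abs(r))  # Huber weights
        aw = A * w[:, None]                        # row-scaled design matrix W A
        x_new = np.linalg.solve(A.T @ aw, aw.T @ y)  # weighted normal equations
        if np.linalg.norm(x_new - x) < tol * (1 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# demo: straight-line fit with two gross outliers (hypothetical measurements)
rng = np.random.default_rng(0)
A = np.column_stack([np.ones(50), np.arange(50.0)])
y = A @ np.array([1.0, 0.5]) + 0.01 * rng.standard_normal(50)
y[[3, 40]] += 5.0                                  # outliers
print(huber_irls(A, y))                            # close to [1.0, 0.5]
```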

  14. Administrative and Financial Aspects of Computers in Education

    ERIC Educational Resources Information Center

    Rush, James E.

    1970-01-01

    Paper presented at the "Education and Information Science Symposium," sponsored by the Ohio Chapters of the American Society for Information Science in cooperation with the Department of Computer and Information Science, The Ohio State University, June 23 and 24, 1969. (MF)

  15. Technical Aspects of Computer-Assisted Instruction in Chinese.

    ERIC Educational Resources Information Center

    Cheng, Chin-Chaun; Sherwood, Bruce

    1981-01-01

    Computer assisted instruction in Chinese is considered in relation to the design and recognition of Chinese characters, speech synthesis of the standard Chinese language, and the identification of Chinese tone. The PLATO work has shifted its orientation from provision of supplementary courseware to implementation of independent lessons and…

  16. Acceleration of matrix element computations for precision measurements

    DOE PAGESBeta

    Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; Ye, Zhenyu

    2014-11-25

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.

  17. Fast computation of the acoustic field for ultrasound elements.

    PubMed

    Güven, H Emre; Miller, Eric L; Cleveland, Robin O

    2009-09-01

    A fast method for computing the acoustic field of ultrasound transducers is presented with application to rectangular elements that are cylindrically focused. No closed-form solutions exist for this case but several numerical techniques have been described in the ultrasound imaging literature. Our motivation is the rapid calculation of imaging kernels for physics-based diagnostic imaging for which current methods are too computationally intensive. Here, the surface integral defining the acoustic field from a baffled piston is converted to a 3-D spatial convolution of the element surface and the Green's function. A 3-D version of the overlap-save method from digital signal processing is employed to obtain a fast computational algorithm based on spatial Fourier transforms. Further efficiency is gained by using a separable approximation to the Green's function through singular value decomposition and increasing the effective sampling rate by polyphase filtering. The tradeoff between accuracy and spatial sampling rate is explored to determine appropriate parameters for a specific transducer. Comparisons with standard tools such as Field II are presented, where nearly 2 orders of magnitude improvement in computation speed is observed for similar accuracy. PMID:19811993
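
    The core idea, replacing the surface integral by a spatial convolution evaluated with FFTs, can be sketched as follows for a monochromatic rectangular element evaluated on a single plane. The element size, frequency, sampling step, and the plain fftconvolve call (rather than the paper's overlap-save and separable-Green's-function machinery) are simplifying assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

# monochromatic pressure field of a baffled rectangular element, evaluated on a
# plane at depth z0 as a 2-D FFT convolution of aperture and Green's function
k = 2 * np.pi * 2.5e6 / 1500.0          # wavenumber: 2.5 MHz in water (1500 m/s)
dx = 5e-5                               # spatial sampling step, m
n = 256
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)

aperture = ((np.abs(xx) < 1e-3) & (np.abs(yy) < 2e-3)).astype(float)  # 2x4 mm element
z0 = 5e-3                               # evaluation plane depth, m
r = np.sqrt(xx**2 + yy**2 + z0**2)
green = np.exp(1j * k * r) / (4 * np.pi * r)       # free-space Green's function

field = fftconvolve(aperture, green, mode="same") * dx * dx  # discretized integral
print(np.abs(field).max())
```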

  18. Boundary element analysis on vector and parallel computers

    NASA Technical Reports Server (NTRS)

    Kane, J. H.

    1994-01-01

    Boundary element analysis (BEA) can be characterized as a numerical technique that generally shifts the computational burden in the analysis toward numerical integration and the solution of nonsymmetric and either dense or blocked sparse systems of algebraic equations. Researchers have explored the concept that the fundamental characteristics of BEA can be exploited to generate effective implementations on vector and parallel computers. In this paper, the results of some of these investigations are discussed. The performance of overall algorithms for BEA on vector supercomputers, massively data parallel single instruction multiple data (SIMD), and relatively fine grained distributed memory multiple instruction multiple data (MIMD) computer systems is described. Some general trends and conclusions are discussed, along with indications of future developments that may prove fruitful in this regard.

  19. Compute Element and Interface Box for the Hazard Detection System

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Khanoyan, Garen; Stern, Ryan A.; Some, Raphael R.; Bailey, Erik S.; Carson, John M.; Vaughan, Geoffrey M.; Werner, Robert A.; Salomon, Phil M.; Martin, Keith E.; Spaulding, Matthew D.; Luna, Michael E.; Motaghedi, Shui H.; Trawny, Nikolas; Johnson, Andrew E.; Ivanov, Tonislav I.; Huertas, Andres; Whitaker, William D.; Goldberg, Steven B.

    2013-01-01

    The Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is building a sensor that enables a spacecraft to evaluate autonomously a potential landing area to generate a list of hazardous and safe landing sites. It will also provide navigation inputs relative to those safe sites. The Hazard Detection System Compute Element (HDS-CE) box combines a field-programmable gate array (FPGA) board for sensor integration and timing, with a multicore computer board for processing. The FPGA does system-level timing and data aggregation, and acts as a go-between, removing the real-time requirements from the processor and labeling events with a high resolution time. The processor manages the behavior of the system, controls the instruments connected to the HDS-CE, and services the "heavy lifting" computational requirements for analyzing the potential landing spots.

  20. Continuum mechanical and computational aspects of material behavior

    SciTech Connect

    Fried, Eliot; Gurtin, Morton E.

    2000-02-10

    The focus of the work is the application of continuum mechanics to materials science, specifically to the macroscopic characterization of material behavior at small length scales. The long-term goals are a continuum-mechanical framework for the study of materials that provides a basis for general theories and leads to boundary-value problems of physical relevance, and computational methods appropriate to these problems supplemented by physically meaningful regularizations to aid in their solution. Specific studies include the following: the development of a theory of polycrystalline plasticity that incorporates free energy associated with lattice mismatch between grains; the development of a theory of geometrically necessary dislocations within the context of finite-strain plasticity; the development of a gradient theory for single-crystal plasticity with geometrically necessary dislocations; simulations of dynamical fracture using a theory that allows for the kinking and branching of cracks; computation of segregation and compaction in flowing granular materials.

  1. Computational aspects of sensitivity calculations in linear transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, W. H.; Haftka, R. T.

    1991-01-01

    The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear structural transient response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.

  2. Theoretical aspects of light-element alloys under extremely high pressure

    NASA Astrophysics Data System (ADS)

    Feng, Ji

    In this Dissertation, we present theoretical studies on the geometric and electronic structure of light-element alloys under high pressure. The first three Chapters are concerned with specific compounds, namely, SiH4, CaLi2 and BexLi1-x, and associated structural and electronic phenomena arising in our computational studies. In the fourth Chapter, we attempt to develop a unified view of the relationship between the electronic and geometric structure of light-element alloys under pressure, by focusing on the states near the Fermi level in these metals.

  3. Computational aspects of the continuum quaternionic wave functions for hydrogen

    SciTech Connect

    Morais, J.

    2014-10-15

    Over the past few years considerable attention has been given to the role played by the Hydrogen Continuum Wave Functions (HCWFs) in quantum theory. The HCWFs arise via the method of separation of variables for the time-independent Schrödinger equation in spherical coordinates. The HCWFs are composed of products of a radial part involving associated Laguerre polynomials multiplied by exponential factors and an angular part that is the spherical harmonics. In the present paper we introduce the continuum wave functions for hydrogen within quaternionic analysis ((R)QHCWFs), a result which is not available in the existing literature. In particular, the underlying functions are of three real variables and take on values in either the reduced or the full quaternions (identified, respectively, with R^3 and R^4). We prove that the (R)QHCWFs are orthonormal to one another. The representation of these functions in terms of the HCWFs is explicitly given, from which several recurrence formulae for fast computer implementations can be derived. A summary of fundamental properties and further computation of the hydrogen-like atom transforms of the (R)QHCWFs are also discussed. We address all the above and explore some basic facts of the arising quaternionic function theory. As an application, we provide the reader with plot simulations that demonstrate the effectiveness of our approach. (R)QHCWFs are new in the literature and have some consequences that are now under investigation.

  4. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
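
    The first two sensitivity techniques, overall forward and central differences on a critical-point response, can be sketched on a toy problem. The damped single-degree-of-freedom oscillator below is a stand-in for the paper's five-span beam; tight integrator tolerances are used because, as the abstract notes, condition errors dominate when the finite difference step is too small.

```python
import numpy as np
from scipy.integrate import solve_ivp

def peak_response(k_design, m=1.0, c=0.05, t_end=10.0):
    """Peak displacement of a damped oscillator under a half-sine pulse;
    a toy stand-in for a critical-point response quantity."""
    def rhs(t, y):
        load = np.sin(np.pi * t) if t < 1.0 else 0.0
        return [y[1], (load - c * y[1] - k_design * y[0]) / m]
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0],
                    max_step=0.01, rtol=1e-9, atol=1e-12)
    return float(np.max(np.abs(sol.y[0])))

k0, h = 4.0, 1e-3   # design variable (stiffness) and finite difference step
forward = (peak_response(k0 + h) - peak_response(k0)) / h
central = (peak_response(k0 + h) - peak_response(k0 - h)) / (2 * h)
print(forward, central)   # central difference trades cost for lower truncation error
```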

  5. Computational and theoretical aspects of biomolecular structure and dynamics

    SciTech Connect

    Garcia, A.E.; Berendzen, J.; Catasti, P., Chen, X.

    1996-09-01

    This is the final report for a project that sought to evaluate and develop theoretical, and computational bases for designing, performing, and analyzing experimental studies in structural biology. Simulations of large biomolecular systems in solution, hydrophobic interactions, and quantum chemical calculations for large systems have been performed. We have developed a code that implements the Fast Multipole Algorithm (FMA) that scales linearly in the number of particles simulated in a large system. New methods have been developed for the analysis of multidimensional NMR data in order to obtain high resolution atomic structures. These methods have been applied to the study of DNA sequences in the human centromere, sequences linked to genetic diseases, and the dynamics and structure of myoglobin.

  6. Behavioral and computational aspects of language and its acquisition

    NASA Astrophysics Data System (ADS)

    Edelman, Shimon; Waterfall, Heidi

    2007-12-01

    One of the greatest challenges facing the cognitive sciences is to explain what it means to know a language, and how the knowledge of language is acquired. The dominant approach to this challenge within linguistics has been to seek an efficient characterization of the wealth of documented structural properties of language in terms of a compact generative grammar: ideally, the minimal necessary set of innate, universal, exception-less, highly abstract rules that jointly generate all and only the observed phenomena and are common to all human languages. We review developmental, behavioral, and computational evidence that seems to favor an alternative view of language, according to which linguistic structures are generated by a large, open set of constructions of varying degrees of abstraction and complexity, which embody both form and meaning and are acquired through socially situated experience in a given language community, by probabilistic learning algorithms that resemble those at work in other cognitive modalities.

  7. Computational analysis of promoter elements and chromatin features in yeast.

    PubMed

    Wyrick, John J

    2012-01-01

    Regulatory elements in promoter sequences typically function as binding sites for transcription factor proteins and thus are critical determinants of gene transcription. There is growing evidence that chromatin features, such as histone modifications or nucleosome positions, also have important roles in transcriptional regulation. Recent functional genomics and computational studies have yielded extensive datasets cataloging transcription factor binding sites (TFBS) and chromatin features, such as nucleosome positions, throughout the yeast genome. However, much of this data can be difficult to navigate or analyze efficiently. This chapter describes practical methods for the visualization, data mining, and statistical analysis of yeast promoter elements and chromatin features using two Web-accessible bioinformatics databases: ChromatinDB and Ceres. PMID:22113279

  8. Chemical aspects of pellet-cladding interaction in light water reactor fuel elements

    SciTech Connect

    Olander, D.R.

    1982-01-01

    In contrast to the extensive literature on the mechanical aspects of pellet-cladding interaction (PCI) in light water reactor fuel elements, the chemical features of this phenomenon are so poorly understood that there is still disagreement concerning the chemical agent responsible. Since the earliest work by Rosenbaum, Davies and Pon, laboratory and in-reactor experiments designed to elucidate the mechanism of PCI fuel rod failures have concentrated almost exclusively on iodine. The assumption that this is the responsible chemical agent is contained in models of PCI which have been constructed for incorporation into fuel performance codes. The evidence implicating iodine is circumstantial, being based primarily upon the volatility and significant fission yield of this element and on the microstructural similarity of failed Zircaloy specimens exposed to iodine in laboratory stress corrosion cracking (SCC) tests to cladding failures by PCI.

  9. SYMBMAT: Symbolic computation of quantum transition matrix elements

    NASA Astrophysics Data System (ADS)

    Ciappina, M. F.; Kirchner, T.

    2012-08-01

    We have developed a set of Mathematica notebooks to compute symbolically quantum transition matrices relevant for atomic ionization processes. The utilization of a symbolic language allows us to obtain analytical expressions for the transition matrix elements required in charged-particle and laser induced ionization of atoms. Additionally, by using a few simple commands, it is possible to export these symbolic expressions to standard programming languages, such as Fortran or C, for the subsequent computation of differential cross sections or other observables. One of the main drawbacks in the calculation of transition matrices is the tedious algebraic work required when initial states other than the simple hydrogenic 1s state need to be considered. Using these notebooks the work is dramatically reduced and it is possible to generate exact expressions for a large set of bound states. We present explicit examples of atomic collisions (in First Born Approximation and Distorted Wave Theory) and laser-matter interactions (within the Dipole and Strong Field Approximations and different gauges) using both hydrogenic wavefunctions and Slater-Type Orbitals with arbitrary nlm quantum numbers as initial states. Catalogue identifier: AEMI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 71 628 No. of bytes in distributed program, including test data, etc.: 444 195 Distribution format: tar.gz Programming language: Mathematica Computer: Single machines using Linux or Windows (with cores with any clock speed, cache memory and bits in a word) Operating system: Any OS that supports Mathematica. The notebooks have been tested under Windows and Linux and with versions 6.x, 7.x and 8.x Classification: 2.6 Nature of problem

  10. Aspects of Quantum Computing with Polar Paramagnetic Molecules

    NASA Astrophysics Data System (ADS)

    Karra, Mallikarjun; Friedrich, Bretislav

    2015-05-01

    Since the original proposal by DeMille, arrays of optically trapped ultracold polar molecules have been considered among the most promising prototype platforms for the implementation of a quantum computer. The qubit of a molecular array is realized by a single dipolar molecule entangled via its dipole-dipole interaction with the rest of the array's molecules. A superimposed inhomogeneous electric field precludes the quenching of the body-fixed dipole moments by rotation, and a time-dependent external field controls the qubits to perform gate operations. Much like our previous work, in which we considered the simplest cases of a polar 1Σ molecule and a symmetric-top molecule, here we consider a polar X2Π3/2 molecule (exemplified by the OH radical) which, by virtue of its nonzero electronic spin and orbital angular momenta, is, in addition, paramagnetic. We demonstrate entanglement tuning by evaluating the concurrence (and the requisite frequencies needed for gate operations) between two such molecules in the presence of varying electric and magnetic fields. Finally, we discuss the conditions required for achieving qubit addressability (transition frequency difference, Δω, as compared with the concomitant Stark and Zeeman broadening) and high fidelity. International Max Planck Research School - Functional Interfaces in Physics and Chemistry.
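
    Concurrence itself is a standard computation (Wootters' formula), sketched below for a generic two-qubit density matrix; the molecular dipole-dipole Hamiltonian and field dependence studied in the abstract are not modeled here.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)                            # spin-flip operator sigma_y x sigma_y
    r = rho @ yy @ rho.conj() @ yy
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(r).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# maximally entangled Bell state -> concurrence 1; product state -> 0
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(concurrence(np.outer(psi, psi.conj())))       # ~1.0
```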

  11. Massively parallel computation of RCS with finite elements

    NASA Technical Reports Server (NTRS)

    Parker, Jay

    1993-01-01

    One of the promising combinations of finite element approaches for scattering problems uses Whitney edge elements, spherical vector wave-absorbing boundary conditions, and bi-conjugate gradient solution for the frequency-domain near field. Each of these approaches may be criticized. Low-order elements require high mesh density, but also result in fast, reliable iterative convergence. Spherical wave-absorbing boundary conditions require additional space to be meshed beyond the most minimal near-space region, but result in fully sparse, symmetric matrices which keep storage and solution times low. Iterative solution is somewhat unpredictable and unfriendly to multiple right-hand sides, yet we find it to be uniformly fast on large problems to date, given the other two approaches. Implementation of these approaches on a distributed memory, message passing machine yields huge dividends, as full scalability to the largest machines appears assured and iterative solution times are well-behaved for large problems. We present times and solutions for computed RCS for a conducting cube and a composite permeability/conducting sphere on the Intel iPSC/860 with up to 16 processors solving over 200,000 unknowns. We estimate problems of approximately 10 million unknowns, encompassing 1000 cubic wavelengths, may be attempted on a currently available 512-processor machine, but would be exceedingly tedious to prepare. The most severe bottlenecks are due to the slow rate of mesh generation on non-parallel machines and the large transfer time from such a machine to the parallel processor. One solution, in progress, is to create and then distribute a coarse mesh among the processors, followed by systematic refinement within each processor. Elimination of redundant node definitions at the mesh-partition surfaces, snap-to-surface post-processing of the resulting mesh for good modelling of curved surfaces, and load-balancing redistribution of new elements after the refinement are auxiliary tasks.

  12. Incorporating Knowledge of Legal and Ethical Aspects into Computing Curricula of South African Universities

    ERIC Educational Resources Information Center

    Wayman, Ian; Kyobe, Michael

    2012-01-01

    As students in computing disciplines are introduced to modern information technologies, numerous unethical practices also escalate. With the increase in stringent legislation on the use of IT, users of technology could easily be held liable for violations of this legislation. There is however a lack of understanding of the social aspects of computing, and…

  13. Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.

    ERIC Educational Resources Information Center

    Deaudelin, Colette; Dussault, Marc; Brodeur, Monique

    2003-01-01

    Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…

  15. A variational multiscale finite element method for monolithic ALE computations of shock hydrodynamics using nodal elements

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Scovazzi, G.

    2016-06-01

    We present a monolithic arbitrary Lagrangian-Eulerian (ALE) finite element method for computing highly transient flows with strong shocks. We use a variational multiscale (VMS) approach to stabilize a piecewise-linear Galerkin formulation of the equations of compressible flows, and an entropy artificial viscosity to capture strong solution discontinuities. Our work demonstrates the feasibility of VMS methods for highly transient shock flows, an area of research for which the VMS literature is extremely scarce. In addition, the proposed monolithic ALE method is an alternative to the more commonly used Lagrangian+remap methods, in which, at each time step, a Lagrangian computation is followed by mesh smoothing and remap (conservative solution interpolation). Lagrangian+remap methods are the methods of choice in shock hydrodynamics computations because they provide nearly optimal mesh resolution in proximity of shock fronts. However, Lagrangian+remap methods are not well suited for imposing inflow and outflow boundary conditions. These issues offer an additional motivation for the proposed approach, in which we first perform the mesh motion, and then the flow computations using the monolithic ALE framework. The proposed method is second-order accurate and stable, as demonstrated by extensive numerical examples in two and three space dimensions.

  16. Computer-integrated finite element modeling of human middle ear.

    PubMed

    Sun, Q; Gan, R Z; Chang, K-H; Dormer, K J

    2002-10-01

    The objective of this study was to produce an improved finite element (FE) model of the human middle ear and to compare the model with human data. We began with a systematic and accurate geometric modeling technique for reconstructing the middle ear from serial sections of a freshly frozen temporal bone. A geometric model of a human middle ear was constructed in a computer-aided design (CAD) environment with particular attention to geometry and microanatomy. Using the geometric model, a working FE model of the human middle ear was created using previously published material properties of middle ear components. This working FE model was finalized by a cross-calibration technique, comparing its predicted stapes footplate displacements with laser Doppler interferometry measurements from fresh temporal bones. The final FE model was shown to be reasonable in predicting the ossicular mechanics of the human middle ear. PMID:14595544

  17. Finite element computations of resonant modes for small magnetic particles

    NASA Astrophysics Data System (ADS)

    Forestiere, C.; d'Aquino, M.; Miano, G.; Serpico, C.

    2009-04-01

    The oscillations of a chain of ferromagnetic nanoparticles around a saturated spatially uniform equilibrium are analyzed by solving the linearized Landau-Lifshitz-Gilbert (LLG) equation. The linearized LLG equation is recast in the form of a generalized eigenvalue problem for suitable self-adjoint operators connected to the micromagnetic effective field, which accounts for exchange, magnetostatic, anisotropy, and Zeeman interactions. The generalized eigenvalue problem is solved numerically by the finite element method, which allows one to treat accurately complex geometries and preserves the structural properties of the continuum problem. The natural frequencies and the spatial distribution of the mode amplitudes are computed for chains composed of several nanoparticles (sphere and ellipsoid). The effects of the interaction between the nanoparticles and the limit of validity of the point dipole approximation are discussed.
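
    Numerically, the final step the abstract describes reduces to a generalized symmetric-definite eigenvalue problem A v = λ B v, with A the discretized self-adjoint effective-field operator and B the (symmetric positive definite) Gram or mass matrix of the finite element basis. A minimal sketch with small stand-in matrices:

```python
import numpy as np
from scipy.linalg import eigh

# generalized eigenvalue problem A v = lambda B v with stand-in FEM-like matrices;
# A plays the role of the discretized self-adjoint effective-field operator and
# B the SPD Gram matrix of overlapping finite element basis functions
rng = np.random.default_rng(1)
m = rng.standard_normal((6, 6))
A = m @ m.T + 6 * np.eye(6)                 # symmetric stand-in operator
B = np.eye(6) + 0.1 * np.ones((6, 6))       # SPD mass/Gram matrix
eigvals, modes = eigh(A, B)                 # frequencies and mode amplitude vectors
print(eigvals)                              # real spectrum, as self-adjointness guarantees
```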

  18. Impact of computer advances on future finite elements computations. [for aircraft and spacecraft design

    NASA Technical Reports Server (NTRS)

    Fulton, Robert E.

    1985-01-01

    Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and to parallel processing associated with the Finite Element Machine project, both sponsored by NASA, as well as to a near-term strategy for a distributed structural analysis capability based on relational data base management software and parallel computers, leading toward a future structural analysis system.

  19. Matrix element method for high performance computing platforms

    NASA Astrophysics Data System (ADS)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    A lot of effort has been devoted by the ATLAS and CMS teams to improve the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS dataset at a moderate cost. In this article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC run 1 within 10 days, and that we have a satisfying metric for the upcoming run 2. Future work will consist in finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.

  20. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    This conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. The paper evaluates the computational efficiency of different computer architectural types in terms of relative cost and computing time.

  1. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    NASA Astrophysics Data System (ADS)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases. Cardiovascular diseases result from complex flow patterns and fatigue of the vessel wall and are prevalent causes of high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid Structure Interaction (FSI) have become efficient tools for modeling the individual hemodynamics and biomechanics, as well as their interaction, in the human arteries. The computations allow non-invasive simulation of the patient-specific physical parameters of the blood flow and the vessel wall needed for an efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact, individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include the appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governing the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models, providing physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the arteries.

  2. A computer program for calculating aerodynamic characteristics of low aspect-ratio wings with partial leading-edge separation

    NASA Technical Reports Server (NTRS)

    Mehrotra, S. C.; Lan, C. E.

    1978-01-01

    The necessary information for using a computer program to predict distributed and total aerodynamic characteristics for low-aspect-ratio wings with partial leading-edge separation is presented. The flow is assumed to be steady and inviscid. The wing boundary condition is formulated by the Quasi-Vortex-Lattice method. The leading-edge separated vortices are represented by discrete free vortex elements which are aligned with the local velocity vector at their midpoints to satisfy the force-free condition. The wake behind the trailing edge is also force-free. The flow tangency boundary condition is satisfied on the wing, including the leading and trailing edges. The program is restricted to delta wings with zero thickness and no camber. It is written in FORTRAN and runs on the CDC 6600 computer.

  3. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that, in order for pipeline computers to impact the economic feasibility of large nonlinear analyses, it is absolutely essential that algorithms be devised to improve the efficiency of element-level computations.

  4. A finite element method for the computation of transonic flow past airfoils

    NASA Technical Reports Server (NTRS)

    Eberle, A.

    1980-01-01

    A finite element method for the computation of transonic flow with shocks past airfoils is presented, using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics, so special attention must be paid to the choice of an appropriate element. A series of computed pressure distributions demonstrates the usefulness of the method.

  5. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable: if the gradients are treated as additional unknowns, the size of the matrix equation is greatly increased, and if numerical differentiation is used to approximate the gradients, numerical error is introduced into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are, respectively, the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which

  6. Java Analysis Tools for Element Production Calculations in Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Lingerfelt, E.; Hix, W.; Guidry, M.; Smith, M.

    2002-12-01

    We are developing a set of extendable, cross-platform tools and interfaces using Java and vector graphic technologies such as SVG and SWF to facilitate element production calculations in computational astrophysics. The Java technologies are customizable and portable, and can be utilized as stand-alone applications or distributed across a network. These tools, which have broad applications in general scientific visualization, are currently being used to explore and analyze a large library of nuclear reaction rates and visualize results of explosive nucleosynthesis calculations with compact, high quality vector graphics. The facilities for reading and plotting nuclear reaction rates and their components from a network or library permit the user to easily include new rates and compare and adjust current ones. Sophisticated visualization and graphical analysis tools offer the ability to view results in an interactive, scalable vector graphics format, which leads to a dramatic (ten-fold) reduction in visualization file sizes while maintaining high visual quality and interactive control. ORNL Physics Division is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  7. Segment-based vs. element-based integration for mortar methods in computational contact mechanics

    NASA Astrophysics Data System (ADS)

    Farah, Philipp; Popp, Alexander; Wall, Wolfgang A.

    2015-01-01

    Mortar finite element methods provide a very convenient and powerful discretization framework for geometrically nonlinear applications in computational contact mechanics, because they allow for a variationally consistent treatment of contact conditions (mesh tying, non-penetration, frictionless or frictional sliding) despite the fact that the underlying contact surface meshes are non-matching and possibly also geometrically non-conforming. However, one of the major issues with regard to mortar methods is the design of adequate numerical integration schemes for the resulting interface coupling terms, i.e. curve integrals for 2D contact problems and surface integrals for 3D contact problems. The way mortar integration is performed crucially influences the accuracy of the overall numerical procedure as well as the computational efficiency of contact evaluation. Basically, two different types of mortar integration schemes, termed here segment-based integration and element-based integration, can be found predominantly in the literature. While almost the entire existing literature focuses on one of the two mentioned mortar integration schemes without questioning this choice, the intention of this paper is to provide a comprehensive and unbiased comparison. The theoretical aspects covered here include the choice of integration rule, the treatment of boundaries of the contact zone, higher-order interpolation and frictional sliding. Moreover, a new hybrid scheme is proposed, which beneficially combines the advantages of segment-based and element-based mortar integration. Several numerical examples are presented for a detailed and critical evaluation of the overall performance of the different schemes within several well-known benchmark problems of computational contact mechanics.

  8. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to a singular Hessian or singular information matrix, which are common in practice, are discussed in detail, and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced-order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  9. C-arm cone-beam computed tomography in interventional oncology: technical aspects and clinical applications

    PubMed Central

    Floridi, Chiara; Radaelli, Alessandro; Abi-Jaoudeh, Nadine; Grass, Micheal; Lin, Ming De; Chiaradia, Melanie; Geschwind, Jean-Francois; Kobeiter, Hishman; Squillaci, Ettore; Maleux, Geert; Giovagnoni, Andrea; Brunese, Luca; Wood, Bradford; Carrafiello, Gianpaolo; Rotondo, Antonio

    2014-01-01

    C-arm cone-beam computed tomography (CBCT) is a new imaging technology integrated in modern angiographic systems. Due to its ability to obtain cross-sectional imaging and the possibility to use dedicated planning and navigation software, it provides an informed platform for interventional oncology procedures. In this paper, we highlight the technical aspects and clinical applications of CBCT imaging and navigation in the most common loco-regional oncological treatments. PMID:25012472

  10. Finite Element Technology In Forming Simulations - Theoretical Aspects And Practical Applications Of A New Solid-Shell Element

    SciTech Connect

    Schwarze, M.; Reese, S.

    2007-05-17

    Finite element simulations of sheet metal forming processes are highly non-linear problems. The non-linearity arises not only from the kinematical relations and the material formulation; the contact between the workpiece and the forming tools also leads to an increased number of iterations within the Newton-Raphson scheme. This fact puts high demands on the robustness of finite element formulations. For this reason we study the enhanced assumed strain (EAS) concept as proposed in [1]. The goal is to improve the robustness of the solid-shell formulation in deep drawing simulations.

  11. Surveying co-located space geodesy techniques for ITRF computation: statistical aspects

    NASA Astrophysics Data System (ADS)

    Sillard, P.; Sarti, P.; Vittuari, L.

    2003-04-01

    For two years, CNR (Italy) has been involved in a complete renovation of the way co-located Space Geodesy instruments are surveyed. Local ties are one of the most problematic parts of International Terrestrial Reference Frame (ITRF) computation, since the uncertainty of Space Geodesy techniques has decreased to the few-millimeter level; everybody now agrees that local ties are one of the most problematic aspects of the ITRF computation. CNR has therefore decided to start a comprehensive reflection on the way local ties should be surveyed between Space Geodesy instruments. This reflection concerns the practical ground operations, the physical definition of a Space Geodesy instrument reference point (especially for VLBI), and the consequent adjustment of the results, as well as their publication. The first two aspects will be covered in another presentation, while the present one focuses on the last two points (statistics and publication). As Space Geodesy has now reached the mm level, local ties must be used in ITRF computation with a full variance-covariance matrix available for each site. The talk will present the way this variance can be derived, even when the reference point is implicitly defined, as for VLBI. Some numerical examples will be given of the quality which can be reached through a rigorous statistical treatment of the new approach developed by CNR. Evidence of the significant improvement this brings to ITRF-type computations will also be given.

  12. ElemeNT: a computational tool for detecting core promoter elements.

    PubMed

    Sloutskin, Anna; Danino, Yehuda M; Orenstein, Yaron; Zehavi, Yonathan; Doniger, Tirza; Shamir, Ron; Juven-Gershon, Tamar

    2015-01-01

    Core promoter elements play a pivotal role in the transcriptional output, yet they are often detected manually within sequences of interest. Here, we present two contributions to the detection and curation of core promoter elements within given sequences. First, the Elements Navigation Tool (ElemeNT) is a user-friendly, web-based, interactive tool for prediction and display of putative core promoter elements and their biologically-relevant combinations. Second, the CORE database summarizes ElemeNT-predicted core promoter elements near CAGE- and RNA-seq-defined Drosophila melanogaster transcription start sites (TSSs). ElemeNT's predictions are based on biologically-functional core promoter elements, and can be used to infer core promoter compositions. ElemeNT does not assume prior knowledge of the actual TSS position, and can therefore assist in annotation of any given sequence. These resources, freely accessible at http://lifefaculty.biu.ac.il/gershon-tamar/index.php/resources, facilitate the identification of core promoter elements as active contributors to gene expression. PMID:26226151
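
    A minimal sketch of the kind of consensus-based scan such a tool performs (this is not the actual ElemeNT algorithm; the IUPAC pattern and the example sequence are illustrative assumptions):

    ```python
    import re

    # IUPAC nucleotide codes needed for the example consensus below.
    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
             "W": "[AT]", "R": "[AG]", "Y": "[CT]", "N": "[ACGT]"}

    def iupac_to_regex(consensus):
        """Translate an IUPAC consensus string into a regular expression."""
        return "".join(IUPAC[c] for c in consensus)

    def scan(sequence, consensus="TATAWAAR"):
        """Return 0-based start positions where the consensus matches."""
        pattern = re.compile(iupac_to_regex(consensus))
        return [m.start() for m in pattern.finditer(sequence.upper())]

    # A TATA-box-like element sits 4 bases into this toy sequence.
    print(scan("ccgcTATAAAAGgcgc"))  # -> [4]
    ```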

  13. 01010000 01001100 01000001 01011001: Play Elements in Computer Programming

    ERIC Educational Resources Information Center

    Breslin, Samantha

    2013-01-01

    This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…

  14. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.

  15. Improved plug valve computer-aided design of plug element

    SciTech Connect

    Wordin, J.J.

    1990-02-01

    The purpose of this document is to present derivations of equations for the design of a plug valve and to present a computer program which performs the design calculations based on the derivations. The valve is based on a plug formed from a tractrix of revolution called a pseudosphere. It is of interest to be able to calculate various parameters for the plug for design purposes. For example, the surface area, volume, and center of gravity are important to determine friction and wear of the valve. A computer program in BASIC has been written to perform the design calculations. The appendix contains a computer program listing and verifications of results using approximation methods. A sample run is included along with necessary computer commands to run the program. 1 fig.
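
    The design quantities described above lend themselves to direct numerical checks. A minimal Python sketch (the tractrix parametrization and the parameter value are assumptions for illustration, not the report's BASIC code) integrates the lateral surface area and volume of the half-pseudosphere and compares them with the closed forms:

    ```python
    import numpy as np

    def trapz(f, x):
        """Composite trapezoidal rule."""
        return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

    a = 1.0                       # tractrix parameter (assumed plug scale)
    t = np.linspace(1e-6, 20.0, 200_001)

    y = a / np.cosh(t)            # tractrix: y = a*sech(t)
    dxdt = a * np.tanh(t) ** 2    # dx/dt for x = a*(t - tanh(t))
    dsdt = a * np.tanh(t)         # arc-length rate; sqrt(x'^2 + y'^2) simplifies

    area = trapz(2 * np.pi * y * dsdt, t)     # lateral surface of revolution
    volume = trapz(np.pi * y ** 2 * dxdt, t)  # enclosed volume

    print(f"area   = {area:.6f}  (closed form 2*pi*a^2 = {2*np.pi*a**2:.6f})")
    print(f"volume = {volume:.6f}  (closed form pi*a^3/3 = {np.pi*a**3/3:.6f})")
    ```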

  16. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.

  17. Computational Modeling for the Flow Over a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun

    1999-01-01

    The flow over a multi-element airfoil is computed using two two-equation turbulence models. The computations are performed using the INS2D Navier-Stokes code for two angles of attack. Overset grids are used for the three-element airfoil. The computed results are compared with experimental data for the surface pressure, skin friction coefficient, and velocity magnitude. The computed surface quantities generally agree well with the measurement. The computed results reveal the possible existence of a mixing-layer-like region of flow next to the suction surface of the slat for both angles of attack.

  18. Some aspects of statistical distribution of trace element concentrations in biomedical samples

    NASA Astrophysics Data System (ADS)

    Majewska, U.; Braziewicz, J.; Banaś, D.; Kubala-Kukuś, A.; Góźdź, S.; Pajek, M.; Zadrożna, M.; Jaskóła, M.; Czyżewski, T.

    1999-04-01

    Concentrations of trace elements in biomedical samples were studied using X-ray fluorescence (XRF), total reflection X-ray fluorescence (TRXRF) and particle-induced X-ray emission (PIXE) methods. The analytical methods used were compared in terms of their detection limits and applicability for studying trace elements in large populations of biomedical samples. As a result, the XRF and TRXRF methods were selected for the trace element concentration measurements in urine and women's full-term placenta samples. The measured trace element concentration distributions were found to be strongly asymmetric and described by the logarithmic-normal distribution. Such a distribution is expected for a random sequential process, which realistically models the level of trace elements in the studied biomedical samples. The importance and consequences of this finding are discussed, especially in the context of comparing concentration measurements in different populations of biomedical samples.

  19. Computational aspects of zonal algorithms for solving the compressible Navier-Stokes equations in three dimensions

    NASA Technical Reports Server (NTRS)

    Holst, T. L.; Thomas, S. D.; Kaynak, U.; Gundy, K. L.; Flores, J.; Chaderjian, N. M.

    1985-01-01

    Transonic flow fields about wing geometries are computed using an Euler/Navier-Stokes approach in which the flow field is divided into several zones. The flow field immediately adjacent to the wing surface is resolved with fine grid zones and solved using a Navier-Stokes algorithm. Flow field regions removed from the wing are resolved with less finely clustered grid zones and are solved with an Euler algorithm. Computational issues associated with this zonal approach, including data base management aspects, are discussed. Solutions are obtained that are in good agreement with experiment, including cases with significant wind tunnel wall effects. Additional cases with significant shock induced separation on the upper wing surface are also presented.

  20. Numerical algorithms for finite element computations on arrays of microprocessors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1981-01-01

    The development of a multicolored successive overrelaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical red/black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, while still enjoying the greater rate of convergence of the SOR method. The program solves a general second-order self-adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem solved using the six-color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
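
    The multicoloring idea is easiest to see in the two-color (red/black) special case for the 5-point Poisson stencil; the six-color quadratic-element version above follows the same pattern. A minimal sketch (grid size and relaxation factor are illustrative), in which each color sweep is a Jacobi-style update that could run in parallel across an array of processors:

    ```python
    import numpy as np

    n, omega = 64, 1.8                  # interior grid size, relaxation factor
    h = 1.0 / (n + 1)
    f = np.ones((n, n))                 # right-hand side of -Laplace(u) = f
    u = np.zeros((n + 2, n + 2))        # solution, Dirichlet boundary u = 0

    ii, jj = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
    for sweep in range(500):
        for color in (0, 1):            # two colors suffice for this stencil
            mask = (ii + jj) % 2 == color
            nbrs = u[ii + 1, jj] + u[ii - 1, jj] + u[ii, jj + 1] + u[ii, jj - 1]
            gauss_seidel = 0.25 * (nbrs + h * h * f)
            u[ii, jj] = np.where(mask,
                                 (1 - omega) * u[ii, jj] + omega * gauss_seidel,
                                 u[ii, jj])

    print("u at center:", u[n // 2 + 1, n // 2 + 1])   # ~0.0737 for this problem
    ```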

  1. Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review.

    PubMed

    Bhattacharya, Preeti Tomar; Misra, Satya Ranjan; Hussain, Mohsina

    2016-01-01

    The human body requires certain essential elements in small quantities, and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases, because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets, with a developing preference for refined diets and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways, with influence on craniofacial development and the growth and maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium, but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body, and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in the preservation of oral health and the progression of various oral diseases. PMID:27433374

  2. Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review

    PubMed Central

    Hussain, Mohsina

    2016-01-01

    The human body requires certain essential elements in small quantities, and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases, because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets, with a developing preference for refined diets and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways, with influence on craniofacial development and the growth and maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium, but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body, and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in the preservation of oral health and the progression of various oral diseases. PMID:27433374

  3. Formulation and computational aspects of plasticity and damage models with application to quasi-brittle materials

    SciTech Connect

    Chen, Z.; Schreyer, H.L.

    1995-09-01

    The response of underground structures and transportation facilities under various external loadings and environments is critical for human safety as well as environmental protection. Since quasi-brittle materials such as concrete and rock are commonly used for underground construction, the constitutive modeling of these engineering materials, including post-limit behaviors, is one of the most important aspects in safety assessment. From experimental, theoretical, and computational points of view, this report considers the constitutive modeling of quasi-brittle materials in general and concentrates on concrete in particular. Based on the internal variable theory of thermodynamics, the general formulations of plasticity and damage models are given to simulate two distinct modes of microstructural changes, inelastic flow and degradation of material strength and stiffness, that identify the phenomenological nonlinear behaviors of quasi-brittle materials. The computational aspects of plasticity and damage models are explored with respect to their effects on structural analyses. Specific constitutive models are then developed in a systematic manner according to the degree of completeness. A comprehensive literature survey is made to provide the up-to-date information on prediction of structural failures, which can serve as a reference for future research.

  4. Finite element computer model of microwave heated ceramics

    SciTech Connect

    Liqiu Zhou; Gang Liu; Jian Zhou

    1995-12-31

    In this paper, a 3-D finite element model to simulate the heating pattern during microwave sintering of ceramics in a TE₁₀ⁿ single-mode rectangular cavity is described. A series of transient temperature profiles and heating rates of the ceramic cylinder and cubic sample were calculated versus different parameters such as thermal conductivity, dielectric loss factor, microwave power level, and microwave energy distribution. These numerical solutions may provide a better understanding of thermal runaway and solutions to microwave sintering of ceramics.
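
    A minimal 1-D sketch of this kind of transient heating computation (explicit time stepping with conduction plus a volumetric microwave source; all material and field values below are illustrative assumptions, not the paper's data):

    ```python
    import numpy as np

    nx, L = 50, 0.02                 # nodes, sample thickness [m]
    dx = L / (nx - 1)
    k, rho, cp = 2.0, 3000.0, 900.0  # conductivity, density, heat capacity
    eps0, f, eps_loss = 8.854e-12, 2.45e9, 0.05
    E = 3.0e4 * np.ones(nx)          # assumed uniform field magnitude [V/m]

    T = np.full(nx, 300.0)           # initial temperature [K]
    dt = 0.4 * rho * cp * dx**2 / k  # step well inside the explicit limit
    source = 2 * np.pi * f * eps0 * eps_loss * E**2 / (rho * cp)

    for step in range(2000):
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T += dt * (k / (rho * cp) * lap + source)
        T[0] = T[-1] = 300.0         # fixed-temperature boundaries

    print(f"peak temperature after {2000 * dt:.1f} s: {T.max():.1f} K")
    ```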

  5. Computational discovery of regulatory elements in a continuous expression space

    PubMed Central

    2012-01-01

    Approaches for regulatory element discovery from gene expression data usually rely on clustering algorithms to partition the data into clusters of co-expressed genes. Gene regulatory sequences are then mined to find overrepresented motifs in each cluster. However, this ad hoc partition rarely fits the biological reality. We propose a novel method called RED2 that avoids data clustering by estimating motif densities locally around each gene. We show that RED2 detects numerous motifs not detected by clustering-based approaches, and that most of these correspond to characterized motifs. RED2 can be accessed online through a user-friendly interface. PMID:23186104

  6. Elemental: a new framework for distributed memory dense matrix computations.

    SciTech Connect

    Romero, N.; Poulson, J.; Marker, B.; Hammond, J.; Van de Geijn, R.

    2012-02-14

    Parallelizing dense matrix computations to distributed memory architectures is a well-studied subject and generally considered to be among the best understood domains of parallel computing. Two packages, developed in the mid 1990s, still enjoy regular use: ScaLAPACK and PLAPACK. With the advent of many-core architectures, which may very well take the shape of distributed memory architectures within a single processor, these packages must be revisited since the traditional MPI-based approaches will likely need to be extended. Thus, this is a good time to review lessons learned since the introduction of these two packages and to propose a simple yet effective alternative. Preliminary performance results show the new solution achieves competitive, if not superior, performance on large clusters.

  7. Finite Element Models for Computing Seismic-Induced Soil Pressures on Deeply Embedded Nuclear Power Plant Structures.

    SciTech Connect

    XU, J.; COSTANTINO, C.; HOFMAYER, C.

    2006-06-26

    The paper discusses computations of seismic-induced soil pressures using finite element models for deeply embedded and/or buried stiff structures, such as those appearing in the conceptual designs of structures for advanced reactors.

  8. Computer modeling of batteries from non-linear circuit elements

    NASA Technical Reports Server (NTRS)

    Waaben, S.; Federico, J.; Moskowitz, I.

    1983-01-01

    A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge-storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.

  9. Computation of Schenberg response function by using finite element modelling

    NASA Astrophysics Data System (ADS)

    Frajuca, C.; Bortoli, F. S.; Magalhaes, N. S.

    2016-05-01

    Schenberg is a resonant-mass gravitational wave detector with a central operating frequency of 3200 Hz. Transducers located on the surface of the resonating sphere, according to a half-dodecahedron distribution, are used to monitor the strain amplitude. The development of mechanical impedance matchers that act by increasing the coupling of the transducers with the sphere is a major challenge because of the high frequency and small size involved. The objective of this work is to study the Schenberg response function obtained by finite element modeling (FEM). Finally, the result is compared with that of a simplified mass-spring model to verify whether the latter is suitable for determining the detector sensitivity; both models give the same results.

  10. Experience with automatic, dynamic load balancing and adaptive finite element computation

    SciTech Connect

    Wheat, S.R.; Devine, K.D.; Maccabe, A.B.

    1993-10-01

    Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library into which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.

  11. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    ERIC Educational Resources Information Center

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  12. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  13. A computer program for anisotropic shallow-shell finite elements using symbolic integration

    NASA Technical Reports Server (NTRS)

    Andersen, C. M.; Bowen, J. T.

    1976-01-01

    A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes). The triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, the geometric stiffness, the consistent mass, and the consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using a symbolic and algebraic manipulation language.

  14. X-ray microanalysis of cultured keratinocytes: methodological aspects and effects of the irritant sodium lauryl sulphate on elemental composition.

    PubMed

    Grängsjö, A; Pihl-Lundin, I; Lindberg, M; Roomans, G M

    2000-09-01

    Irritant substances have been shown to induce elemental changes in human and animal epidermal cells in situ. However, skin biopsies are a complicated experimental system and artefacts can be introduced by the anaesthesia necessary to take the biopsy. We therefore attempted to set up an experimental system for X-ray microanalysis (XRMA) consisting of cultured human keratinocytes. A number of methodological aspects were studied: different cell types, washing methods and different culture periods for the keratinocytes. It was also investigated whether the keratinocytes responded to exposure to sodium lauryl sulphate (SLS) with changes in their elemental composition. The concentrations of biologically important elements such as Na, Mg, P and K were different in HaCaT cells (a spontaneously immortalized non-tumorigenic cell line derived from adult human keratinocytes) compared to natural human epidermal keratinocytes. The washing procedure and time of culture influenced the intracellular elemental content, and rinsing with distilled water was preferred for further experiments. Changes in the elemental content in the HaCaT cells compatible with a pattern of cell injury followed by repair by cell proliferation were seen after treatment with 3.33 microM and 33 microM SLS. We conclude that XRMA is a useful tool for the study of functional changes in cultured keratinocytes, even though the preparation methods have to be strictly controlled. The method can conceivably be used for predicting effects of different chemicals on human skin. PMID:10971801

  15. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    SciTech Connect

    2010-05-11

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
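
    One of the summation-accuracy techniques such a talk typically covers is compensated (Kahan) summation; a minimal sketch:

    ```python
    def kahan_sum(values):
        """Sum values while carrying a compensation for lost low-order bits."""
        total, c = 0.0, 0.0
        for v in values:
            y = v - c              # subtract the error carried from last step
            t = total + y          # low-order bits of y may be lost here...
            c = (t - total) - y    # ...and are recovered into c
            total = t
        return total

    data = [1.0] + [1e-16] * 10**6   # exact sum is 1.0 + 1e-10
    print(sum(data))                 # 1.0 -- every small term is absorbed
    print(kahan_sum(data))           # ~1.0000000001
    ```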

  16. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    None

    2011-10-06

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.

  17. Computation of vibration mode elastic-rigid and effective weight coefficients from finite-element computer program output

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1991-01-01

    Post-processing algorithms are given to compute the vibratory elastic-rigid coupling matrices and the modal contributions to the rigid-body mass matrices and to the effective modal inertias and masses. Recomputation of the elastic-rigid coupling matrices for a change in origin is also described. A computational example is included. The algorithms can all be executed by using standard finite-element program eigenvalue analysis output with no changes to existing code or source programs.
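
    A sketch of this kind of post-processing on a small spring-mass model (the matrices are illustrative; scipy.linalg.eigh with a mass matrix returns mass-orthonormal modes, so the effective modal masses are the squared participation factors and sum to the total mass):

    ```python
    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([2.0, 1.0, 1.0])                 # lumped masses
    K = np.array([[ 400., -200.,    0.],
                  [-200.,  400., -200.],
                  [   0., -200.,  200.]])        # stiffness of a spring chain

    w2, phi = eigh(K, M)                         # K phi = M phi w^2
    # phi.T @ M @ phi = I, so participation factors give effective masses:
    r = np.ones(3)                               # rigid-body influence vector
    L = phi.T @ M @ r                            # modal participation factors
    m_eff = L**2                                 # effective modal masses

    print("effective masses:", m_eff)
    print("sum vs total mass:", m_eff.sum(), M.trace())
    ```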

  18. Modeling of Rolling Element Bearing Mechanics: Computer Program Updates

    NASA Technical Reports Server (NTRS)

    Ryan, S. G.

    1997-01-01

    The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.

  19. Contours identification of elements in a cone beam computed tomography for investigating maxillary cysts

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within them. This paper deals with the collective work of specialists in medicine and in applied mathematics and computer science on the elaboration and implementation of algorithms for dental 2D imagery.

  20. Multibody system dynamics for bio-inspired locomotion: from geometric structures to computational aspects.

    PubMed

    Boyer, Frédéric; Porez, Mathieu

    2015-04-01

    This article presents a set of generic tools for multibody system dynamics devoted to the study of bio-inspired locomotion in robotics. First, archetypal examples from the field of bio-inspired robot locomotion are presented to prepare the ground for further discussion. The general problem of locomotion is then stated. In considering this problem, we progressively draw a unified geometric picture of locomotion dynamics. For that purpose, we start from the model of discrete mobile multibody systems (MMSs) that we progressively extend to the case of continuous and finally soft systems. Beyond these theoretical aspects, we address the practical problem of the efficient computation of these models by proposing a Newton-Euler-based approach to efficient locomotion dynamics with a few illustrations of creeping, swimming, and flying. PMID:25811531

  1. Finite element solution techniques for large-scale problems in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1987-01-01

    Element-by-element approximate factorization, implicit-explicit and adaptive implicit-explicit approximation procedures are presented for the finite-element formulations of large-scale fluid dynamics problems. The element-by-element approximation scheme totally eliminates the need for formation, storage and inversion of large global matrices. Implicit-explicit schemes, which are approximations to implicit schemes, substantially reduce the computational burden associated with large global matrices. In the adaptive implicit-explicit scheme, the implicit elements are selected dynamically based on element level stability and accuracy considerations. This scheme provides implicit refinement where it is needed. The methods are applied to various problems governed by the convection-diffusion and incompressible Navier-Stokes equations. In all cases studied, the results obtained are indistinguishable from those obtained by the implicit formulations.
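
    The element-by-element idea can be shown in a few lines: apply the global operator as a sum of small element products (gather, local multiply, scatter) so the global matrix is never formed or stored. A minimal sketch for 1-D bar elements with unit properties (the connectivity and element matrix are illustrative):

    ```python
    import numpy as np

    n_el = 8                                             # elements on a 1-D bar
    conn = np.array([[e, e + 1] for e in range(n_el)])   # element connectivity
    ke = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])                         # element stiffness

    def apply_K(x):
        """y = K @ x computed element by element, without assembling K."""
        y = np.zeros_like(x)
        for nodes in conn:
            y[nodes] += ke @ x[nodes]    # gather, local product, scatter
        return y

    # usable inside any matrix-free iterative solver (e.g. conjugate gradients)
    x = np.random.default_rng(0).standard_normal(n_el + 1)
    print(apply_K(x))
    ```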

  2. Aspects of the history of 66095 based on trace elements in clasts and whole rock

    SciTech Connect

    Jovanovic, S.; Reed, G.W. Jr.

    1981-01-01

    Large fractions of the Cl and Br associated with separated anorthositic and basaltic clasts and matrix from rusty rock 66095 are soluble in H₂O. Up to two orders of magnitude variation in the concentrations of these elements in the breccia components and varying H₂O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H₂O-soluble to acid-soluble Br, i.e. surface deposits vs possibly phosphate-related Br, suggests no appreciable alteration in the original distributions of this element. Weak acid leaching dissolved approx. 50% or more of the phosphorus and of the remaining Cl from most of the breccia components. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P₂O₅ ratio found in 66095 whole rock and in a number of other Apollo 16 samples. No dependence on degree of brecciation is indicated. The clasts are typical of Apollo 16 rocks. Matrix leaching results and element concentrations suggest that apatite-whitlockite is a component of KREEP.

  3. Aspects of the history of 66095 based on trace elements in clasts and whole rock

    SciTech Connect

    Jovanovic, S.; Reed, G.W. Jr.

    1981-01-01

    Halogens, P, U and Na are reported in anorthositic and basaltic clasts and matrix from rusty rock 66095. Large fractions of the Cl and Br associated with the separated phases from 66095 are soluble in H₂O. Up to two orders of magnitude variation in the concentrations of these elements in the breccia components and varying H₂O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H₂O- to 0.1 M HNO₃-soluble Br in the various components suggests no appreciable alteration in the original distributions of this element in the breccia-forming processes. Up to 50% or more of the phosphorus and of the non-H₂O-soluble Cl was dissolved from most of the breccia components by 0.1 M HNO₃. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P₂O₅ ratio found in 66095 whole rock and in a number of other Apollo 16 samples. Evidence that phosphates are the major P-phases in the breccia is based on the 0.1 M acid solubility of Cl and P in the matrix sample and on elemental concentrations which are consistent with those of KREEP.

  4. Computation of scattering matrix elements of large and complex shaped absorbing particles with multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang

    2015-05-01

    Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Due to the absorbing effect, light scattering properties of particles with absorption differ from those of particles without absorption. Simple-shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex-shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches modeling the whole surface of the particle, so computational resource needs increase much more slowly with the particle size parameter than for volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) method for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.

  5. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  6. Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is a computer technique for structural stress analysis developed in engineering mechanics. FEA has been used to investigate the structural behavior of human bones over the past 40 years. As faster computers became available, improved FEA using 3-dimensional computed tomography (CT) was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA evaluating its accuracy and reliability in human bones, and clinical application studies assessing fracture risk and the effects of osteoporosis medication are surveyed. PMID:26309819

  7. Mixing characteristics of injector elements in liquid rocket engines - A computational study

    NASA Technical Reports Server (NTRS)

    Lohr, Jonathan C.; Trinh, Huu P.

    1992-01-01

    A computational study has been performed to better understand the mixing characteristics of liquid rocket injector elements. Variations in injector geometry as well as differences in injector element inlet flow conditions are among the areas examined in the study. Most results involve the nonreactive mixing of gaseous fuel with gaseous oxidizer but preliminary results are included that involve the spray combustion of oxidizer droplets. The purpose of the study is to numerically predict flowfield behavior in individual injector elements to a high degree of accuracy and in doing so to determine how various injector element properties affect the flow.

  8. Numerical Aspects of Eigenvalue and Eigenfunction Computations for Chaotic Quantum Systems

    NASA Astrophysics Data System (ADS)

    Bäcker, A.

    Summary: We give an introduction to some of the numerical aspects in quantum chaos. The classical dynamics of two-dimensional area-preserving maps on the torus is illustrated using the standard map and a perturbed cat map. The quantization of area-preserving maps given by their generating function is discussed and for the computation of the eigenvalues a computer program in Python is presented. We illustrate the eigenvalue distribution for two types of perturbed cat maps, one leading to COE and the other to CUE statistics. For the eigenfunctions of quantum maps we study the distribution of the eigenvectors and compare them with the corresponding random matrix distributions. The Husimi representation allows for a direct comparison of the localization of the eigenstates in phase space with the corresponding classical structures. Examples for a perturbed cat map and the standard map with different parameters are shown. Billiard systems and the corresponding quantum billiards are another important class of systems (which are also relevant to applications, for example in mesoscopic physics). We provide a detailed exposition of the boundary integral method, which is one important method to determine the eigenvalues and eigenfunctions of the Helmholtz equation. We discuss several methods to determine the eigenvalues from the Fredholm equation and illustrate them for the stadium billiard. The occurrence of spurious solutions is discussed in detail and illustrated for the circular billiard, the stadium billiard, and the annular sector billiard. We emphasize the role of the normal derivative function to compute the normalization of eigenfunctions, momentum representations or autocorrelation functions in a very efficient and direct way. Some examples for these quantities are given and discussed.
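
    A minimal sketch of the classical standard map iteration referred to above (the parameter value and initial condition are illustrative):

    ```python
    import numpy as np

    def standard_map(theta, p, kappa, n_steps):
        """Iterate (theta, p) under the Chirikov standard map, mod 2*pi."""
        orbit = np.empty((n_steps, 2))
        for i in range(n_steps):
            p = (p + kappa * np.sin(theta)) % (2 * np.pi)
            theta = (theta + p) % (2 * np.pi)
            orbit[i] = theta, p
        return orbit

    # weakly chaotic regime; plotting the orbit points reveals the mixed
    # phase space of regular islands and chaotic sea
    orbit = standard_map(theta=0.5, p=0.5, kappa=0.97, n_steps=5000)
    print(orbit[:3])
    ```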

  9. On a 3-D singularity element for computation of combined mode stress intensities

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Kathiresan, K.

    1976-01-01

    A special three-dimensional singularity element is developed for the computation of combined modes 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three dimensional linear elastic fracture problems. The finite element method is based on a displacement-hybrid finite element model, based on a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square root singularity in strains and stresses, where the stress-intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved directly along with the unknown nodal displacements.

  10. A finite element computational method for high Reynolds number laminar flows

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook

    1987-01-01

    A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for the Navier-Stokes equations is presented. In the method, the velocity variables are interpolated using complete quadratic shape functions, and the pressure is interpolated using linear shape functions which are defined on a triangular element for the two-dimensional case and on a tetrahedral element for the three-dimensional case. The triangular element and the tetrahedral element are contained inside the complete bi- and tri-quadratic elements for velocity variables for two and three dimensional cases, respectively, so that the pressure is discontinuous across the element boundaries. Example problems considered include: a cavity flow of Reynolds numbers 400 through 10,000; a laminar backward facing step flow; and a laminar flow in a square duct of strong curvature. The computational results compared favorably with the finite difference computational results and/or experimental data available. It was found that the present method can capture the delicate pressure driven recirculation zones, that the method did not yield any spurious pressure modes, and that the method requires fewer grid points than the finite difference methods to obtain comparable computational results.

  11. Computational local stiffness analysis of biological cell: High aspect ratio single wall carbon nanotube tip.

    PubMed

    TermehYousefi, Amin; Bagheri, Samira; Shahnazar, Sheida; Rahman, Md Habibur; Kadri, Nahrizul Adib

    2016-02-01

    Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to their robust mechanical properties, nanoscale diameter, and their ability to be functionalized with chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as AFM tips in the computational analysis of biological cells. The software used was ABAQUS 6.13 CAE/CEL, provided by Dassault Systèmes, a powerful finite element (FE) tool for performing numerical analysis and visualizing the interactions between the proposed tip and the cell membrane. Finite element analysis was employed for each section, and the displacement of the nodes located in the contact area was monitored using an output database (ODB). A Mooney-Rivlin hyperelastic model of the cell allows the simulation to yield a new method for estimating the stiffness and spring constant of the cell. The stress-strain curve indicates the yield stress point, defined in terms of vertical stress and plane stress. The spring constant and the local stiffness of the cell were measured, as was the force applied by the CNT-AFM tip on the contact area of the cell. This reliable integration of the CNT-AFM tip process provides a new class of high-performance nanoprobes for single biological cell analysis. PMID:26652417

  12. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
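
    A minimal Python sketch of the overall finite difference approach described above, on a toy spring-mass chain: the reduction basis (vibration modes) is computed once at the nominal design and reused for the perturbed designs, as the record requires. The matrices, load, and central-difference integrator are illustrative assumptions, not the thesis code.

      import numpy as np

      def chain_matrices(k, n=10, m=1.0):
          # Tridiagonal stiffness of a fixed-free spring-mass chain; lumped mass.
          K = np.diag(2 * k * np.ones(n)) - np.diag(k * np.ones(n - 1), 1) \
              - np.diag(k * np.ones(n - 1), -1)
          K[-1, -1] = k
          return K, m * np.eye(n)

      def transient_response(k, Phi, t_end=2.0, dt=1e-3):
          # Reduced equations of motion integrated by central differences;
          # a constant tip load serves as the external force.
          K, M = chain_matrices(k)
          Kr, Mr = Phi.T @ K @ Phi, Phi.T @ M @ Phi
          fr = Phi.T @ np.eye(K.shape[0])[:, -1]
          q_prev = q = np.zeros(Phi.shape[1])
          for _ in range(int(t_end / dt)):
              q_next = 2 * q - q_prev + dt**2 * np.linalg.solve(Mr, fr - Kr @ q)
              q_prev, q = q, q_next
          return Phi @ q  # physical displacements at t_end

      k0, dk = 100.0, 1e-3   # nominal stiffness and design perturbation
      K0, M0 = chain_matrices(k0)
      # Reduction basis: a few vibration modes of the *original* design.
      w2, V = np.linalg.eigh(np.linalg.solve(M0, K0))
      Phi = V[:, :4]
      # Overall finite difference sensitivity, reusing the nominal basis.
      du_dk = (transient_response(k0 + dk, Phi)
               - transient_response(k0 - dk, Phi)) / (2 * dk)
      print(du_dk)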

  13. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
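
    The damped Newton iteration with a monitored condition number is the numerical core here. Below is a minimal Python sketch of the idea for a periodic shooting problem, with a toy one-dimensional map standing in for the rotor-cycle propagation and simple step-halving in place of the optimally selected damping parameter; all specifics are assumptions for illustration.

      import numpy as np

      def propagate(x):
          # Stand-in for integrating the rotor equations over one period:
          # maps the initial state to the state one period later.
          return x + 0.5 * np.sin(x) - 0.3

      def residual(x):
          return propagate(x) - x  # periodicity condition F(x) = 0

      def damped_newton(x, tol=1e-10, max_iter=50):
          for _ in range(max_iter):
              F = residual(x)
              if np.linalg.norm(F) < tol:
                  break
              h = 1e-7  # Jacobian of the periodicity condition by forward differences
              J = np.atleast_2d((residual(x + h) - F) / h)
              print('cond(J) =', np.linalg.cond(J))  # monitor conditioning
              dx = np.linalg.solve(J, -np.atleast_1d(F))
              lam = 1.0
              while np.linalg.norm(residual(x + lam * dx)) >= np.linalg.norm(F) and lam > 1e-4:
                  lam *= 0.5  # damp the step until the residual decreases
              x = x + lam * dx
          return x

      x_trim = damped_newton(np.array([0.0]))
      print('periodic initial state:', x_trim)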

  14. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.

  15. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.
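
    For orientation, the one-step VCCT estimates evaluated in such post-processing have the standard textbook form (quoted as background, not reproduced from the paper): for a crack advance \Delta a over a front segment of width b,

      G_I = \frac{Z_i \,\Delta w}{2\, b\, \Delta a}, \qquad G_{II} = \frac{X_i \,\Delta u}{2\, b\, \Delta a}, \qquad G_{III} = \frac{Y_i \,\Delta v}{2\, b\, \Delta a},

    where X_i, Y_i, Z_i are the forces at the crack-front node and \Delta u, \Delta v, \Delta w the relative sliding, tearing, and opening displacements of the node pair immediately behind the front.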

  16. A New Finite Element Approach for Prediction of Aerothermal Loads: Progress in Inviscid Flow Computations

    NASA Technical Reports Server (NTRS)

    Bey, K. S.; Thornton, E. A.; Dechaumphai, P.; Ramakrishnan, R.

    1985-01-01

    Recent progress in the development of finite element methodology for the prediction of aerothermal loads is described. Two-dimensional, inviscid computations are presented, but emphasis is placed on development of an approach extendable to three-dimensional viscous flows. Research progress is described for: (1) utilization of a commercially available program to construct flow solution domains and display computational results, (2) development of an explicit Taylor-Galerkin solution algorithm, (3) closed form evaluation of finite element matrices, (4) vector computer programming strategies, and (5) validation of solutions. Two test problems of interest to NASA Langley aerothermal research are studied. Comparisons of finite element solutions for Mach 6 flow with other solution methods and experimental data validate fundamental capabilities of the approach for analyzing high speed inviscid compressible flows.

  17. A different aspect to use of some soft computing methods for landslide susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Akgün, Aykut

    2014-05-01

    In the landslide literature, several applications of soft computing methods such as artificial neural networks (ANN), fuzzy inference systems, and decision trees for landslide susceptibility mapping can be found. In many of these studies, the effectiveness and validation of the models used are also discussed. To carry out the analyses, more than one software package, for example a statistical package and a geographical information system (GIS) package, is generally used. In this study, four different soft computing techniques were applied to landslide susceptibility mapping using only one GIS software package. For this purpose, Multi Layer Perceptron (MLP) back propagation neural network, Fuzzy Adaptive Resonance Theory (ARTMAP) neural network, Self-Organizing Map (SOM) and Classification Tree Analysis (CTA) approaches were applied to the study area. The study area was selected from a part of Trabzon (North Turkey) city, which is one of the most landslide-prone areas in Turkey. Initially, five landslide conditioning parameters, namely lithology, slope gradient, slope aspect, stream power index (SPI), and topographical wetness index (TWI), were produced for the study area in GIS. Then, these parameters were analysed by the MLP, Fuzzy ARTMAP, SOM and CTA soft computing classifiers of the IDRISI Taiga GIS and remote sensing software. To accomplish the analyses, two main input groups are needed: conditioning parameters and training areas. For the training areas, the landslide inventory map, obtained from both field studies and topographical analyses, was first compared with the lithological unit classes. With the help of this comparison, frequency ratio (FR) values of landslide occurrence in the study area were determined. Using the FR values, five landslide susceptibility classes were differentiated from the lowest to the highest FR values. After this differentiation, the training areas representing the landslide susceptibility classes were determined by using FR

  18. Determination of an Initial Mesh Density for Finite Element Computations via Data Mining

    SciTech Connect

    Kanapady, R; Bathina, S K; Tamma, K K; Kamath, C; Kumar, V

    2001-07-23

    Numerical analysis software packages which employ a coarse first mesh or an inadequate initial mesh need to undergo cumbersome and time-consuming mesh refinement studies to obtain solutions with acceptable accuracy. Hence, it is critical for numerical methods such as finite element analysis to be able to determine a good initial mesh density for the subsequent finite element computations or as an input to a subsequent adaptive mesh generator. This paper explores the use of data mining techniques for obtaining an initial approximate finite element mesh density that avoids significant trial and error at the start of finite element computations. As an illustration of proof of concept, a square plate which is simply supported at its edges and subjected to a concentrated load is employed as the test case. Although simplistic, the present study provides insight into addressing the above considerations.

  19. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme, along with vector-unrolling techniques, is used to enhance vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  20. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
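
    The trade-off among the integration schemes named above can be seen on a one-dimensional Maxwell-type rate equation, d(sigma)/dt = E (d(eps)/dt - sigma/eta). The Python sketch below (illustrative parameters, not from the report) compares explicit, implicit, and subincremented explicit updates over one strain increment; with dt*E/eta = 10 the plain explicit update is well beyond its stability limit, which the printed values make obvious.

      import numpy as np

      E, eta = 200.0, 2.0                 # modulus and viscosity (illustrative)
      sigma0, deps, dt = 50.0, 0.01, 0.1  # state, strain increment, time step

      def explicit_update(sigma, deps, dt):
          # Forward Euler: rate evaluated at the beginning of the step.
          return sigma + E * deps - dt * E * sigma / eta

      def implicit_update(sigma, deps, dt):
          # Backward Euler: rate evaluated at the end of the step
          # (closed form here because the rate equation is linear in sigma).
          return (sigma + E * deps) / (1.0 + dt * E / eta)

      def subincremented_update(sigma, deps, dt, n=10):
          # Automatic subincrementing: n explicit substeps per increment.
          for _ in range(n):
              sigma = explicit_update(sigma, deps / n, dt / n)
          return sigma

      print('explicit      :', explicit_update(sigma0, deps, dt))
      print('implicit      :', implicit_update(sigma0, deps, dt))
      print('subincremented:', subincremented_update(sigma0, deps, dt))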

  1. Numerical Aspects of Nonhydrostatic Implementations Applied to a Parallel Finite Element Tsunami Model

    NASA Astrophysics Data System (ADS)

    Fuchs, A.; Androsov, A.; Harig, S.; Hiller, W.; Rakowsky, N.

    2012-04-01

    Given the threat of devastating tsunamis and the unpredictability of such events, tsunami modelling as part of warning systems remains a contemporary topic. The tsunami group of the Alfred Wegener Institute developed the simulation tool TsunAWI as a contribution to the Early Warning System in Indonesia. Although the precomputed scenarios for this purpose provide satisfactory deliverables, the study of further improvements continues. While TsunAWI is governed by the Shallow Water Equations, an extension of the model is based on a nonhydrostatic approach. At the arrival of a tsunami wave in coastal regions with rough bathymetry, the term containing the nonhydrostatic part of pressure, which is neglected in the original hydrostatic model, gains in importance. Taking this term into account, a better approximation of the wave is expected. Differences between hydrostatic and nonhydrostatic model results are contrasted in the standard benchmark problem of a solitary wave runup on a plane beach. The observation data provided by Titov and Synolakis (1995) serve as reference. The nonhydrostatic approach implies a set of equations that are similar to the Shallow Water Equations, so the extension can be implemented on top of the existing code. However, the additional routines introduce a number of issues that must be addressed. So far, the computations of the model have been purely explicit. In the nonhydrostatic version, the determination of an additional unknown and the solution of a large sparse system of linear equations are necessary. The latter constitutes the lion's share of computing time and memory requirements. Since the corresponding matrix is only symmetric in structure and not in values, an iterative Krylov subspace method is used, in particular the restarted Generalized Minimal Residual algorithm GMRES(m). With regard to optimization, we present a comparison of several combinations of sequential and parallel preconditioning techniques with respect to the number of iterations and setup
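
    As an illustration of the solver setup described (a generic SciPy sketch, not the TsunAWI code), a sparse system with symmetric structure but unsymmetric values can be solved with restarted GMRES and an incomplete-LU preconditioner as follows:

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import gmres, spilu, LinearOperator

      # Toy sparse matrix: symmetric structure, unsymmetric values, as for the
      # nonhydrostatic pressure system described above.
      n = 1000
      A = sp.diags([4.0 * np.ones(n), -1.2 * np.ones(n - 1), -0.8 * np.ones(n - 1)],
                   [0, -1, 1], format='csc')
      b = np.ones(n)

      # ILU factorization used as a preconditioner for GMRES(m).
      ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
      M = LinearOperator(A.shape, matvec=ilu.solve)

      iters = []
      x, info = gmres(A, b, M=M, restart=30, maxiter=500,
                      callback=lambda rk: iters.append(rk),
                      callback_type='pr_norm')
      print('gmres info =', info, '(0 means converged), inner iterations:', len(iters))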

  2. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    NASA Technical Reports Server (NTRS)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  3. ParCYCLIC: finite element modelling of earthquake liquefaction response on parallel computers

    NASA Astrophysics Data System (ADS)

    Peng, Jun; Lu, Jinchi; Law, Kincho H.; Elgamal, Ahmed

    2004-10-01

    This paper presents the computational procedures and solution strategy employed in ParCYCLIC, a parallel non-linear finite element program developed based on an existing serial code CYCLIC for the analysis of cyclic seismically-induced liquefaction problems. In ParCYCLIC, finite elements are employed within an incremental plasticity, coupled solid-fluid formulation. A constitutive model developed for simulating liquefaction-induced deformations is a main component of this analysis framework. The elements of the computational strategy, designed for distributed-memory message-passing parallel computer systems, include: (a) an automatic domain decomposer to partition the finite element mesh; (b) nodal ordering strategies to minimize storage space for the matrix coefficients; (c) an efficient scheme for the allocation of sparse matrix coefficients among the processors; and (d) a parallel sparse direct solver. Application of ParCYCLIC to simulate 3-D geotechnical experimental models is demonstrated. The computational results show excellent parallel performance and scalability of ParCYCLIC on parallel computers with a large number of processors.

  4. Learning the Lexical Aspects of a Second Language at Different Proficiencies: A Neural Computational Study

    ERIC Educational Resources Information Center

    Cuppini, Cristiano; Magosso, Elisa; Ursino, Mauro

    2013-01-01

    We present an original model designed to study how a second language (L2) is acquired in bilinguals at different proficiencies starting from an existing L1. The model assumes that the conceptual and lexical aspects of languages are stored separately: conceptual aspects in distinct topologically organized Feature Areas, and lexical aspects in a…

  5. Computing forces on interface elements exerted by dislocations in an elastically anisotropic crystalline material

    NASA Astrophysics Data System (ADS)

    Liu, B.; Arsenlis, A.; Aubry, S.

    2016-06-01

    Driven by the growing interest in numerical simulations of dislocation–interface interactions in general crystalline materials with elastic anisotropy, we develop algorithms for the integration of interface tractions needed to couple dislocation dynamics with a finite element or boundary element solver. The dislocation stress fields in elastically anisotropic media are made analytically accessible through the spherical harmonics expansion of the derivative of Green’s function, and analytical expressions for the forces on interface elements are derived by analytically integrating the spherical harmonics series recursively. Compared with numerical integration by Gaussian quadrature, the newly developed analytical algorithm for interface traction integration is highly beneficial in terms of both computation precision and speed.

  6. Self-Consistent Large-Scale Magnetosphere-Ionosphere Coupling: Computational Aspects and Experiments

    NASA Technical Reports Server (NTRS)

    Newman, Timothy S.

    2003-01-01

    Both external and internal phenomena impact the terrestrial magnetosphere. For example, solar wind and particle precipitation affect the distribution of hot plasma in the magnetosphere. Numerous models exist to describe different aspects of magnetosphere characteristics. For example, Tsyganenko has developed a series of models (e.g., [TSYG89]) that describe the magnetic field, and Stern [STER75] and Volland [VOLL73] have developed analytical models that describe the convection electric field. Over the past several years, NASA colleague Khazanov, working with Fok and others, has developed a large-scale coupled model that tracks particle flow to determine hot ion and electron phase space densities in the magnetosphere. This model utilizes external data such as solar wind densities and velocities and geomagnetic indices (e.g., Kp) to drive computational processes that evaluate magnetic field, electric field, and plasma sheet models at any time point. These models are coupled such that energetic ion and electron fluxes are produced, with those fluxes capable of interacting with the electric field model. A diagrammatic representation of the coupled model is shown.

  7. 3D parallel computations of turbofan noise propagation using a spectral element method

    NASA Astrophysics Data System (ADS)

    Taghaddosi, Farzad

    2006-12-01

    A three-dimensional code has been developed for the simulation of tone noise generated by turbofan engine inlets using computational aeroacoustics. The governing equations are the linearized Euler equations, which are further simplified to a set of equations in terms of acoustic potential, using the irrotational flow assumption, and subsequently solved in the frequency domain. Due to the special nature of acoustic wave propagation, the spatial discretization is performed using a spectral element method, where a tensor product of nth-degree polynomials based on Chebyshev orthogonal functions is used to approximate variations within hexahedral elements. Non-reflecting boundary conditions are imposed at the far field using a damping layer concept. This is done by augmenting the continuity equation with an additional term, without modifying the governing equations as in PML methods. Solution of the linear system of equations for the acoustic problem is based on the Schur complement method, which is a nonoverlapping domain decomposition technique. The Schur matrix is first solved using a matrix-free iterative method, whose convergence is accelerated with a novel local preconditioner. The solution in the entire domain is then obtained by finding solutions in smaller subdomains. The 3D code also contains a mean flow solver based on the full potential equation in order to take into account the effects of flow variations around the nacelle on the scattering of the radiated sound field. All aspects of the numerical simulations, including building and assembling the coefficient matrices, implementation of the Schur complement method, and solution of the system of equations for both the acoustic and mean flow problems, are performed on multiprocessors in parallel using the resources of the CLUMEQ Supercomputer Center. A large number of test cases are presented, ranging in size from 100 000-2 000 000 unknowns for which, depending on the size of the problem, between 8-48 CPUs are
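
    The Chebyshev ingredients of such a spectral element discretization are easy to reproduce. The following Python sketch (Trefethen's classical construction, offered as a generic illustration rather than the thesis code) builds the Chebyshev-Gauss-Lobatto points and the associated differentiation matrix on [-1, 1]:

      import numpy as np

      def cheb(n):
          # Chebyshev-Gauss-Lobatto points and differentiation matrix of order n.
          if n == 0:
              return np.zeros((1, 1)), np.array([1.0])
          x = np.cos(np.pi * np.arange(n + 1) / n)         # CGL points
          c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
          X = np.tile(x, (n + 1, 1)).T
          dX = X - X.T
          D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))  # off-diagonal entries
          D -= np.diag(D.sum(axis=1))                      # diagonal: negative row sums
          return D, x

      D, x = cheb(16)
      # Sanity check: differentiate a smooth function with spectral accuracy.
      err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
      print('max derivative error:', err)  # roughly 1e-12 for n = 16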

  8. A FORTRAN computer code for calculating flows in multiple-blade-element cascades

    NASA Technical Reports Server (NTRS)

    Mcfarland, E. R.

    1985-01-01

    A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.

  9. A Computational and Experimental Study of Nonlinear Aspects of Induced Drag

    NASA Technical Reports Server (NTRS)

    Smith, Stephen C.

    1996-01-01

    Despite the 80-year history of classical wing theory, considerable research has recently been directed toward planform and wake effects on induced drag. Nonlinear interactions between the trailing wake and the wing offer the possibility of reducing drag. The nonlinear effect of compressibility on induced drag characteristics may also influence wing design. This thesis deals with the prediction of these nonlinear aspects of induced drag and ways to exploit them. A potential benefit of only a few percent of the drag represents a large fuel savings for the world's commercial transport fleet. Computational methods must be applied carefully to obtain accurate induced drag predictions. Trefftz-plane drag integration is far more reliable than surface pressure integration, but is very sensitive to the accuracy of the force-free wake model. The practical use of Trefftz-plane drag integration was extended to transonic flow with the Tranair full-potential code. The induced drag characteristics of a typical transport wing were studied with Tranair, a full-potential method, and A502, a high-order linear panel method, to investigate changes in lift distribution and span efficiency due to compressibility. Modeling the force-free wake is a nonlinear problem, even when the flow governing equation is linear. A novel method was developed for computing the force-free wake shape. This hybrid wake-relaxation scheme couples the well-behaved nature of the discrete vortex wake with viscous-core modeling and the high-accuracy velocity prediction of the high-order panel method. The hybrid scheme produced converged wake shapes that allowed accurate Trefftz-plane integration. An unusual split-tip wing concept was studied for exploiting nonlinear wake interaction to reduce induced drag. This design exhibits significant nonlinear interactions between the wing and wake that produced a 12% reduction in induced drag compared to an equivalent elliptical wing at a lift coefficient of 0.7. The

  10. Interactive computer graphic surface modeling of three-dimensional solid domains for boundary element analysis

    NASA Technical Reports Server (NTRS)

    Perucchio, R.; Ingraffea, A. R.

    1984-01-01

    The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.

  11. COYOTE: a finite-element computer program for nonlinear heat-conduction problems

    SciTech Connect

    Gartling, D.K.

    1982-10-01

    COYOTE is a finite element computer program designed for the solution of two-dimensional, nonlinear heat conduction problems. The theoretical and mathematical basis used to develop the code is described. Program capabilities and complete user instructions are presented. Several example problems are described in detail to demonstrate the use of the program.

  12. Automatic data generation scheme for finite-element method /FEDGE/ - Computer program

    NASA Technical Reports Server (NTRS)

    Akyuz, F.

    1970-01-01

    Algorithm provides for automatic input data preparation for the analysis of continuous domains in the fields of structural analysis, heat transfer, and fluid mechanics. The computer program utilizes the natural coordinate systems concept and the finite element method for data generation.

  13. Spectral element computation of high-frequency leaky modes in three-dimensional solid waveguides

    NASA Astrophysics Data System (ADS)

    Treyssède, F.

    2016-06-01

    A numerical method is proposed to compute high-frequency low-leakage modes in structural waveguides surrounded by infinite solid media. In order to model arbitrary shape structures, a waveguide formulation is used, which consists of applying to the elastodynamic equilibrium equations a space Fourier transform along the waveguide axis and then a discretization method to the cross-section coordinates. However several numerical issues must be faced related to the unbounded nature of the cross-section, the number of degrees of freedom required to achieve an acceptable error in the high-frequency regime as well as the number of modes to compute. In this paper, these issues are circumvented by applying perfectly matched layers (PML) along the cross-section directions, a high-order spectral element method for the discretization of the cross-section, and an eigensolver shift suited for the computation of low-leakage modes. First, computations are performed for an embedded cylindrical bar, for which literature results are available. The proposed PML waveguide formulation yields good agreement with literature results, even in the case of weak impedance contrast. Its performance with high-order spectral elements is assessed in terms of convergence and accuracy and compared to traditional low-order finite elements. Then, computations are performed for an embedded square bar. Dispersion curves exhibit strong similarities with cylinders. These results show that the properties of low-leakage modes observed in cylindrical bars can also occur in other types of geometry.

  14. Computational Modeling For The Transitional Flow Over A Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun; Rumsey, Chris L. (Technical Monitor)

    2000-01-01

    The transitional flow over a multi-element airfoil in a landing configuration is computed using a two-equation transition model. The transition model is predictive in the sense that the transition onset is a result of the calculation and no prior knowledge of the transition location is required. The computations were performed using the INS2D Navier-Stokes code. Overset grids are used for the three-element airfoil. The airfoil operating conditions are varied over a range of angles of attack and for two different Reynolds numbers of 5 million and 9 million. The computed results are compared with experimental data for the surface pressure, skin friction, transition onset location, and velocity magnitude. In general, the comparison shows good agreement with the experimental data.

  15. STARS: An integrated general-purpose finite element structural, aeroelastic, and aeroservoelastic analysis computer program

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.

    1991-01-01

    The details of an integrated general-purpose finite element structural analysis computer program which is also capable of solving complex multidisciplinary problems is presented. Thus, the SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines are also available pertaining to a number of terminals.

  16. Experimental and Computational Investigation of Lift-Enhancing Tabs on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil has been conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. An NACA 63(2)-215 ModB airfoil with a 30% chord fowler flap was tested in the NASA Ames 7- by 10-Foot Wind Tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. A combination of tabs located at the main element and flap trailing edges increased the airfoil lift coefficient by 11% relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of 0 deg, and C(sub l,max) was increased by 3%. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predicted all of the trends observed in the experimental data quite well. In addition, a simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the airfoil or flap trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.

  17. STARS: A general-purpose finite element computer program for analysis of engineering structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  18. Applications of Parallel Computation in Micro-Mechanics and Finite Element Method

    NASA Technical Reports Server (NTRS)

    Tan, Hui-Qian

    1996-01-01

    This project discusses the application of parallel computation to material analyses. Briefly speaking, we analyze a material by element computations. We call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure. The detailed structure will be given later in this paper. It is obvious that the problem is "well-structured", so a SIMD machine would be a good choice. In this paper we try to look into the potential of SIMD machines for finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar will be discussed. A brief review of the parallel programming language MPL is also given in that section. In section 3, some general parallel algorithms which might be useful to the project will be proposed, and, in combination with the algorithms, some features of MPL will be discussed in more detail. In section 4, the computational structure of the cell/subcell model will be given, and the idea of designing the parallel algorithm for the model will be demonstrated. Finally, in section 5, a summary will be given.

  19. COYOTE II - a finite element computer program for nonlinear heat conduction problems. Part I - theoretical background

    SciTech Connect

    Gartling, D.K.; Hogan, R.E.

    1994-10-01

    The theoretical and numerical background for the finite element computer program, COYOTE II, is presented in detail. COYOTE II is designed for the multi-dimensional analysis of nonlinear heat conduction problems and other types of diffusion problems. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in COYOTE II are also outlined. Instructions for use of the code are documented in SAND94-1179; examples of problems analyzed with the code are provided in SAND94-1180.

  20. Level set discrete element method for three-dimensional computations with triaxial case study

    NASA Astrophysics Data System (ADS)

    Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.

    2016-06-01

    In this paper, we outline the level set discrete element method (LS-DEM) which is a discrete element method variant able to simulate systems of particles with arbitrary shape using level set functions as a geometric basis. This unique formulation allows seamless interfacing with level set-based characterization methods as well as computational ease in contact calculations. We then apply LS-DEM to simulate two virtual triaxial specimens generated from XRCT images of experiments and demonstrate LS-DEM's ability to quantitatively capture and predict stress-strain and volume-strain behavior observed in the experiments.
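
    The geometric core of LS-DEM (contact resolved through level set lookups at the surface nodes of a neighboring particle) can be sketched compactly in Python. Here an analytic disk level set stands in for the grid-stored level set functions of the paper, and the penalty stiffness is an arbitrary assumption:

      import numpy as np

      # Particle A: described by a signed-distance level set, phi(x) < 0 inside.
      center_A, radius_A = np.array([0.0, 0.0]), 1.0

      def phi(x):
          return np.linalg.norm(x - center_A) - radius_A

      def grad_phi(x, h=1e-6):
          # Finite-difference gradient; LS-DEM interpolates this from a grid.
          g = np.array([phi(x + np.array([h, 0])) - phi(x - np.array([h, 0])),
                        phi(x + np.array([0, h])) - phi(x - np.array([0, h]))])
          return g / (2 * h)

      # Particle B: represented by surface nodes, as in LS-DEM.
      theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
      nodes_B = np.array([1.8, 0.0]) + 0.9 * np.column_stack([np.cos(theta),
                                                              np.sin(theta)])

      k_n = 1e4  # penalty stiffness (illustrative)
      force = np.zeros(2)
      for xn in nodes_B:
          d = phi(xn)                # one level set lookup per node
          if d < 0.0:                # node penetrates particle A
              n = grad_phi(xn)
              n /= np.linalg.norm(n)
              force += -k_n * d * n  # normal penalty force on particle B
      print('total contact force on B:', force)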

  1. MAPVAR - A Computer Program to Transfer Solution Data Between Finite Element Meshes

    SciTech Connect

    Wellman, G.W.

    1999-03-01

    MAPVAR, as was the case with its precursor programs, MERLIN and MERLIN II, is designed to transfer solution results from one finite element mesh to another. MAPVAR draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options.

  2. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper applies extensions of a hybrid transfinite element computational approach to the accurate prediction of thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating is demonstrated for the well-known Danilovskaya problems. A unique feature of the proposed formulations for the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and the superior capability of the approach to capture the thermal stress waves induced by boundary heating.

  3. NACHOS 2: A finite element computer program for incompressible flow problems. Part 1: Theoretical background

    NASA Astrophysics Data System (ADS)

    Gartling, D. K.

    1987-04-01

    The theoretical and numerical background for the finite element computer program, NACHOS 2, is presented in detail. The NACHOS 2 code is designed for the two-dimensional analysis of viscous incompressible fluid flows, including the effects of heat transfer and/or other transport processes. A general description of the boundary value problems treated by the program is presented. The finite element formulations and the associated numerical methods used in the NACHOS 2 code are also outlined. Instructions for use of the program are documented in SAND-86-1817; examples of problems analyzed by the code are provided in SAND-86-1818.

  4. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.

  5. Report of a Workshop on the Pedagogical Aspects of Computational Thinking

    ERIC Educational Resources Information Center

    National Academies Press, 2011

    2011-01-01

    In 2008, the Computer and Information Science and Engineering Directorate of the National Science Foundation asked the National Research Council (NRC) to conduct two workshops to explore the nature of computational thinking and its cognitive and educational implications. The first workshop focused on the scope and nature of computational thinking…

  6. Program design by a multidisciplinary team. [for structural finite element analysis on STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Voigt, S.

    1975-01-01

    The use of software engineering aids in the design of a structural finite-element analysis computer program for the STAR-100 computer is described. Nested functional diagrams to aid in communication among design team members were used, and a standardized specification format to describe modules designed by various members was adopted. This is a report of current work in which use of the functional diagrams provided continuity and helped resolve some of the problems arising in this long-running part-time project.

  7. Finite element analysis and computer graphics visualization of flow around pitching and plunging airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.

    1973-01-01

    A general computational method for analyzing unsteady flow around pitching and plunging airfoils was developed. The finite element method was applied in developing an efficient numerical procedure for the solution of equations describing the flow around airfoils. The numerical results were employed in conjunction with computer graphics techniques to produce visualization of the flow. The investigation involved mathematical model studies of flow in two phases: (1) analysis of a potential flow formulation and (2) analysis of an incompressible, unsteady, viscous flow from Navier-Stokes equations.

  8. Computing element evolution towards Exascale and its impact on legacy simulation codes

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impact of those evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes.

  9. A partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computer

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.

  10. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.

  11. Efficient computation of Hamiltonian matrix elements between non-orthogonal Slater determinants

    NASA Astrophysics Data System (ADS)

    Utsuno, Yutaka; Shimizu, Noritaka; Otsuka, Takaharu; Abe, Takashi

    2013-01-01

    We present an efficient numerical method for computing Hamiltonian matrix elements between non-orthogonal Slater determinants, focusing on the most time-consuming component of the calculation that involves a sparse array. In the usual case where many matrix elements should be calculated, this computation can be transformed into a multiplication of dense matrices. It is demonstrated that the present method based on the matrix-matrix multiplication attains ~80% of the theoretical peak performance measured on systems equipped with modern microprocessors, a factor of 5-10 better than the normal method using indirectly indexed arrays to treat a sparse array. The reason for such different performances is discussed from the viewpoint of memory access.
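
    The transformation described (turning many indirectly indexed sparse summations into one dense matrix-matrix multiplication) can be illustrated in a few lines of Python; the array shapes and names are invented for the illustration:

      import numpy as np

      rng = np.random.default_rng(0)
      m, n, nnz = 200, 300, 5000

      # Sparse operator stored as index/value triplets (rows, cols, vals).
      rows = rng.integers(0, n, nnz)
      cols = rng.integers(0, n, nnz)
      vals = rng.normal(size=nnz)

      L = rng.normal(size=(m, n))  # left factors
      R = rng.normal(size=(n, m))  # right factors

      # Method 1: indirect indexing, one scattered update per nonzero element.
      out1 = np.zeros((m, m))
      for r, c, v in zip(rows, cols, vals):
          out1 += v * np.outer(L[:, r], R[c, :])

      # Method 2: scatter the sparse array into a dense matrix once, then use a
      # single dense matrix-matrix multiplication (BLAS-3, cache friendly).
      H = np.zeros((n, n))
      np.add.at(H, (rows, cols), vals)
      out2 = L @ H @ R

      print('max difference:', np.abs(out1 - out2).max())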

  12. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  13. An accurate quadrature technique for the contact boundary in 3D finite element computations

    NASA Astrophysics Data System (ADS)

    Duong, Thang X.; Sauer, Roger A.

    2015-01-01

    This paper presents a new numerical integration technique for 3D contact finite element implementations, focusing on a remedy for the inaccurate integration due to discontinuities at the boundary of contact surfaces. The method is based on the adaptive refinement of the integration domain along the boundary of the contact surface, and is accordingly denoted RBQ for refined boundary quadrature. It can be used for common element types of any order, e.g. Lagrange, NURBS, or T-Spline elements. In terms of both computational speed and accuracy, RBQ exhibits great advantages over a naive increase of the number of quadrature points. Also, the RBQ method is shown to remain accurate for large deformations. Furthermore, since the sharp boundary of the contact surface is determined, it can be used for various purposes like the accurate post-processing of the contact pressure. Several examples are presented to illustrate the new technique.

  14. Poisson Green's function method for increased computational efficiency in numerical calculations of Coulomb coupling elements

    NASA Astrophysics Data System (ADS)

    Zimmermann, Anke; Kuhn, Sandra; Richter, Marten

    2016-01-01

    Often, the calculation of Coulomb coupling elements for quantum dynamical treatments, e.g., in cluster or correlation expansion schemes, requires the evaluation of a six dimensional spatial integral. Therefore, it represents a significant limiting factor in quantum mechanical calculations. If the size or the complexity of the investigated system increases, many coupling elements need to be determined. The resulting computational constraints require an efficient method for a fast numerical calculation of the Coulomb coupling. We present a computational method to reduce the numerical complexity by decreasing the number of spatial integrals for arbitrary geometries. We use a Green's function formulation of the Coulomb coupling and introduce a generalized scalar potential as solution of a generalized Poisson equation with a generalized charge density as the inhomogeneity. That enables a fast calculation of Coulomb coupling elements and, additionally, a straightforward inclusion of boundary conditions and arbitrarily spatially dependent dielectrics through the Coulomb Green's function. Particularly, if many coupling elements are included, the presented method, which is not restricted to specific symmetries of the model, presents a promising approach for increasing the efficiency of numerical calculations of the Coulomb interaction. To demonstrate the wide range of applications, we calculate internanostructure couplings, such as the Förster coupling, and illustrate the inclusion of symmetry considerations in the method for the Coulomb coupling between bound quantum dot states and unbound continuum states.
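
    The identity at the heart of the method is the standard Green's function reduction of the double spatial integral to a Poisson solve (quoted schematically as background; the symbols are generic, not the paper's notation):

      V_{ij,kl} = \int\!\!\int \frac{\rho_{ij}^{*}(\mathbf{r})\, \rho_{kl}(\mathbf{r}')}{4\pi\varepsilon_0\, |\mathbf{r}-\mathbf{r}'|}\, d^3r\, d^3r' = \int \rho_{ij}^{*}(\mathbf{r})\, \Phi_{kl}(\mathbf{r})\, d^3r, \qquad \nabla^2 \Phi_{kl} = -\rho_{kl}/\varepsilon_0,

    so each six-dimensional integral collapses to one Poisson solve plus a three-dimensional integral; boundary conditions and spatially varying dielectrics enter by replacing the Laplacian with the generalized form \nabla \cdot [\varepsilon(\mathbf{r}) \nabla \Phi_{kl}] = -\rho_{kl}.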

  15. Computational micromechanical analysis of the representative volume element of bituminous composite materials

    NASA Astrophysics Data System (ADS)

    Ozer, Hasan; Ghauch, Ziad G.; Dhasmana, Heena; Al-Qadi, Imad L.

    2016-03-01

    Micromechanical computational modeling is used in this study to determine the smallest domain, or Representative Volume Element (RVE), that can be used to characterize the effective properties of composite materials such as Asphalt Concrete (AC). Computational Finite Element (FE) micromechanical modeling was coupled with digital image analysis of surface scans of AC specimens. Three mixtures with varying Nominal Maximum Aggregate Size (NMAS) of 4.75 mm, 12.5 mm, and 25 mm, were prepared for digital image analysis and computational micromechanical modeling. The effects of window size and phase modulus mismatch on the apparent viscoelastic response of the composite were numerically examined. A good agreement was observed in the RVE size predictions based on micromechanical computational modeling and image analysis. Micromechanical results indicated that a degradation in the matrix stiffness increases the corresponding RVE size. Statistical homogeneity was observed for window sizes equal to two to three times the NMAS. A model was presented for relating the degree of statistical homogeneity associated with each window size for materials with varying inclusion dimensions.

  16. Computational micromechanical analysis of the representative volume element of bituminous composite materials

    NASA Astrophysics Data System (ADS)

    Ozer, Hasan; Ghauch, Ziad G.; Dhasmana, Heena; Al-Qadi, Imad L.

    2016-08-01

    Micromechanical computational modeling is used in this study to determine the smallest domain, or Representative Volume Element (RVE), that can be used to characterize the effective properties of composite materials such as Asphalt Concrete (AC). Computational Finite Element (FE) micromechanical modeling was coupled with digital image analysis of surface scans of AC specimens. Three mixtures with varying Nominal Maximum Aggregate Size (NMAS) of 4.75 mm, 12.5 mm, and 25 mm, were prepared for digital image analysis and computational micromechanical modeling. The effects of window size and phase modulus mismatch on the apparent viscoelastic response of the composite were numerically examined. A good agreement was observed in the RVE size predictions based on micromechanical computational modeling and image analysis. Micromechanical results indicated that a degradation in the matrix stiffness increases the corresponding RVE size. Statistical homogeneity was observed for window sizes equal to two to three times the NMAS. A model was presented for relating the degree of statistical homogeneity associated with each window size for materials with varying inclusion dimensions.

  17. Suite of finite element algorithms for accurate computation of soft tissue deformation for surgical simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2008-01-01

    Real time computation of soft tissue deformation is important for the use of augmented reality devices and for providing haptic feedback during operation or surgeon training. This requires algorithms that are fast, accurate and can handle material nonlinearities and large deformations. A set of such algorithms is presented in this paper, starting with the finite element formulation and the integration scheme used and addressing common problems such as hourglass control and locking. The computation examples presented prove that by using these algorithms, real time computations become possible without sacrificing the accuracy of the results. For a brain model having more than 7000 degrees of freedom, we computed the reaction forces due to indentation at a frequency of around 1000 Hz using a standard dual core PC. Similarly, we conducted a simulation of brain shift using a model with more than 50 000 degrees of freedom in less than a minute. The speed benefits of our models result from combining the Total Lagrangian formulation with explicit time integration and low order finite elements. PMID:19152791
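
    The speed recipe named at the end (explicit time integration with a lumped mass matrix, so no linear system is solved at run time) reduces to a central-difference update. Below is a minimal Python sketch with a generic linear internal force standing in for the Total Lagrangian element kernels; all parameters are illustrative assumptions.

      import numpy as np

      n, dt, steps = 100, 1e-4, 5000

      # Lumped (diagonal) mass matrix: its inverse is an elementwise division,
      # so no linear system is ever solved during time stepping.
      m_inv = 1.0 / np.full(n, 1e-3)

      k = 1e3  # spring stiffness of a fixed-free chain (illustrative)
      K = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)
      K[-1, -1] = k

      def internal_force(u):
          # Stand-in for the nonlinear Total Lagrangian element forces f_int(u);
          # in such schemes these are evaluated element by element.
          return K @ u

      f_ext = np.zeros(n)
      f_ext[-1] = 0.5  # constant tip load

      u_prev = u = np.zeros(n)
      for _ in range(steps):
          # Central-difference update: vector operations only, no solver.
          u_next = 2 * u - u_prev + dt**2 * m_inv * (f_ext - internal_force(u))
          u_prev, u = u, u_next

      print('tip displacement:', u[-1])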

  18. Fiber pushout test - A three-dimensional finite element computational simulation

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Chamis, Christos C.

    1991-01-01

    A fiber pushthrough process was computationally simulated using a three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between fiber and the matrix and mechanical interlocking that develops due to shrinkage of the composite because of phase change during the processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on interfacial bond strength and to interpret interfacial bond quality.

  19. Computation of consistent boundary quantities in finite element thermal-fluid solutions

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.

    1982-01-01

    The consistent boundary quantity method for computing derived quantities from finite element nodal variable solutions is investigated. The method calculates consistent, continuous boundary surface quantities such as heat fluxes, flow velocities, and surface tractions from nodal variables such as temperatures, velocity potentials, and displacements. Consistent and lumped coefficient matrix solutions for such problems are compared. The consistent approach may produce more accurate boundary quantities, but spurious oscillations may be produced in the vicinity of discontinuities. The uncoupled computations of the lumped approach provide greater flexibility in dealing with discontinuities and provide increased computational efficiency. The consistent boundary quantity approach can be applied to solution boundaries other than those with Dirichlet boundary conditions, and provides more accurate results than the customary method of differentiation of interpolation polynomials.
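
    A 1D toy of the consistent-versus-lumped recovery idea, assuming linear elements on a uniform mesh; names and values are illustrative. Nodal fluxes q = -k dT/dx are recovered by solving M q = b, where lumping M decouples the equations at the cost of boundary accuracy:

      import numpy as np

      n, h, kth = 6, 0.2, 1.0
      M = h / 6.0 * (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                     + np.diag(np.ones(n - 1), -1))
      M[0, 0] = M[-1, -1] = h / 3.0              # consistent mass matrix
      T = np.linspace(0.0, 1.0, n) ** 2          # temperature field T = x^2
      b = np.zeros(n)
      for e in range(n - 1):                     # b_i = -k * integral(phi_i dT/dx)
          dTdx = (T[e + 1] - T[e]) / h
          b[e] += -kth * dTdx * h / 2.0
          b[e + 1] += -kth * dTdx * h / 2.0
      q_consistent = np.linalg.solve(M, b)       # coupled (consistent) solve
      q_lumped = b / M.sum(axis=1)               # row-sum lumping: uncoupled
      print(q_consistent[0], q_lumped[0], "exact boundary flux: 0.0")

    The consistent value lands closer to the exact boundary flux in this toy case, consistent with the trade-off described above.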

  20. Fiber pushout test: A three-dimensional finite element computational simulation

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Chamis, Christos C.

    1990-01-01

    A fiber pushthrough process was computationally simulated using a three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between fiber and the matrix and mechanical interlocking that develops due to shrinkage of the composite because of phase change during the processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on interfacial bond strength and to interpret interfacial bond quality.
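
    The usual first-order reduction of the pushout load to an average interfacial shear strength is tau = P / (pi d L), assuming uniform shear over the embedded fiber surface. A hedged sketch with illustrative numbers, not the paper's data:

      import math

      def average_interface_shear_strength(load_N, fiber_diameter_m, embedded_length_m):
          # Average shear strength over the cylindrical debond surface.
          return load_N / (math.pi * fiber_diameter_m * embedded_length_m)

      tau = average_interface_shear_strength(5.0, 140e-6, 0.5e-3)
      print(tau / 1e6, "MPa")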

  1. Experimental and computational investigation of lift-enhancing tabs on a multi-element airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil was conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. A NACA 63₂-215 ModB airfoil with a 30 percent chord Fowler flap was tested in the NASA Ames 7- by 10-foot wind tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predict all of the trends in the experimental data quite well. When the flow over the flap upper surface is attached, tabs mounted at the main element trailing edge (cove tabs) produce very little change in lift. At high flap deflections, however, the flow over the flap is separated and cove tabs produce large increases in lift and corresponding reductions in drag by eliminating the separated flow. Cove tabs permit high flap deflection angles to be achieved and reduce the sensitivity of the airfoil lift to the size of the flap gap. Tabs attached to the flap trailing edge (flap tabs) are effective at increasing lift without significantly increasing drag. A combination of a cove tab and a flap tab increased the airfoil lift coefficient at an angle of attack of zero degrees by 11 percent relative to the highest lift coefficient achieved by any baseline configuration, and the maximum lift coefficient was increased by more than 3 percent. A simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling
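
    A sketch of the point-vortex ingredient of such a model: the velocity induced at (x, y) by a vortex of circulation gamma placed at the trailing-edge tab location. All values are illustrative; the paper's sensitivity relationships would be derived from potential-flow expressions of this kind:

      import numpy as np

      def point_vortex_velocity(x, y, x0, y0, gamma):
          # (u, v) induced by a point vortex; positive gamma is counterclockwise.
          dx, dy = x - x0, y - y0
          r2 = dx * dx + dy * dy
          u = -gamma / (2.0 * np.pi) * dy / r2
          v = gamma / (2.0 * np.pi) * dx / r2
          return u, v

      print(point_vortex_velocity(1.05, 0.02, 1.0, 0.0, 0.5))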

  2. Computations of Disturbance Amplification Behind Isolated Roughness Elements and Comparison with Measurements

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Li, Fei; Bynum, Michael; Kegerise, Michael; King, Rudolph

    2015-01-01

    Computations are performed to study laminar-turbulent transition due to isolated roughness elements in boundary layers at Mach 3.5 and 5.95, with an emphasis on flow configurations for which experimental measurements from low disturbance wind tunnels are available. The Mach 3.5 case corresponds to a roughness element with right-triangle planform with hypotenuse that is inclined at 45 degrees with respect to the oncoming stream, presenting an obstacle with spanwise asymmetry. The Mach 5.95 case corresponds to a circular roughness element along the nozzle wall of the Purdue BAMQT wind tunnel facility. In both cases, the mean flow distortion due to the roughness element is characterized by long-lived streamwise streaks in the roughness wake, which can support instability modes that did not exist in the absence of the roughness element. The linear amplification characteristics of the wake flow are examined towards the eventual goal of developing linear growth correlations for the onset of transition.

  3. Computational Analysis of Enhanced Magnetic Bioseparation in Microfluidic Systems with Flow-Invasive Magnetic Elements

    PubMed Central

    Khashan, S. A.; Alazzam, A.; Furlani, E. P.

    2014-01-01

    A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer. PMID:24931437
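
    A minimal sketch of the dominant transport balance in such models, assuming a weakly magnetic bead in the low-Reynolds-number limit: the bead drifts at the fluid velocity plus the magnetophoretic force divided by the Stokes drag coefficient. Parameter values are illustrative placeholders:

      import numpy as np

      def bead_step(pos, fluid_velocity, grad_B2, dt,
                    radius=1.0e-6, eta=1.0e-3, chi=0.2, mu0=4.0e-7 * np.pi):
          # F_m = V * chi / (2 mu0) * grad(|B|^2); drift = F_m / (6 pi eta a).
          volume = 4.0 / 3.0 * np.pi * radius ** 3
          f_mag = volume * chi / (2.0 * mu0) * grad_B2
          drift = f_mag / (6.0 * np.pi * eta * radius)
          return pos + dt * (fluid_velocity + drift)

      pos = np.array([0.0, 50.0e-6])               # m, start mid-channel
      for _ in range(1000):
          pos = bead_step(pos, np.array([1.0e-3, 0.0]), np.array([0.0, -40.0]), 1.0e-4)
      print(pos)                                   # bead drifts across the channel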

  4. Parallel Computing of Multi-scale Finite Element Sheet Forming Analyses Based on Crystallographic Homogenization Method

    SciTech Connect

    Kuramae, Hiroyuki; Okada, Kenji; Uetsuji, Yasutomo; Nakamachi, Eiji; Tam, Nguyen Ngoc; Nakamura, Yasunori

    2005-08-05

    Since multi-scale finite element analysis (FEA) requires a large computation time, development of a parallel computing technique for the multi-scale analysis is inevitable. A parallel elastic/crystalline viscoplastic FEA code based on a crystallographic homogenization method has been developed using a PC cluster. The homogenization scheme is introduced to compute macro-continuum plastic deformations and material properties by considering a polycrystal texture. Since the dynamic explicit method is applied, the analysis using micro crystal structures computes the homogenized stresses in parallel, based on domain partitioning of the macro-continuum, without solving simultaneous linear equations. The microstructure is defined by crystal orientations obtained from Scanning Electron Microscope (SEM) and Electron Backscatter Diffraction (EBSD) measurements. In order to improve the parallel performance of the elastoplasticity analysis, whose computational cost increases dynamically and locally during the analysis, a dynamic workload balancing technique is introduced into the parallel analysis. The technique, an automatic task distribution method, is realized by adapting the macro-continuum subdomain sizes to maintain computational load balance among the cluster nodes. The analysis code is applied to estimate the formability of polycrystalline sheet metal.

  5. Methodological aspects of using IBM and Macintosh PCs for computational experiments in the physics practicum

    NASA Astrophysics Data System (ADS)

    Starodubtsev, V. A.; Malyutin, V. M.; Chernov, I. P.

    1996-07-01

    This article considers attempts to develop and use, in the teaching process, computer-laboratory work performed by students in terminal-based classes. We describe the methodological features of the LABPK1 and LABPK2 programs, which are intended for use on local networks using 386/286 IBM PC compatibles or Macintosh computers.

  6. Analytical calculation of the lower bound on timing resolution for PET scintillation detectors comprising high-aspect-ratio crystal elements.

    PubMed

    Cates, Joshua W; Vinke, Ruud; Levin, Craig S

    2015-07-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector's timing resolution improves, so do SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and to develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3 × 3 × 20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162 ± 1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559

  7. Analytical Calculation of the Lower Bound on Timing Resolution for PET Scintillation Detectors Comprising High-Aspect-Ratio Crystal Elements

    PubMed Central

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-01-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so do SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and to develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559
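
    A numeric sketch of the bound itself, assuming the single-photon arrival density is known: for N detected photons, var(t0_hat) >= 1/(N I) with Fisher information I = integral p'(t)^2 / p(t) dt. The bi-exponential pulse and 100 ps transit-time smear below are toy stand-ins for the paper's analytical optical-transport PDF:

      import numpy as np

      t = np.linspace(0.0, 40.0e-9, 8000)          # s
      tau_r, tau_d = 0.1e-9, 4.0e-9                # toy rise/decay constants
      pdf = np.exp(-t / tau_d) - np.exp(-t / tau_r)
      sig = 100.0e-12 / 2.355                      # smear: 100 ps FWHM
      kernel = np.exp(-0.5 * ((t - t[t.size // 2]) / sig) ** 2)
      pdf = np.convolve(pdf, kernel / kernel.sum(), mode="same")
      p = pdf / np.trapz(pdf, t)                   # normalized density
      dp = np.gradient(p, t)
      mask = p > 1e-9 * p.max()
      fisher = np.trapz(dp[mask] ** 2 / p[mask], t[mask])
      var_single = 1.0 / (3000 * fisher)           # N = 3000 detected photons
      print(2.355 * np.sqrt(2.0 * var_single) * 1e12, "ps FWHM, coincidence bound")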

  8. On-Board Computing Subsystem for MIRAX: Architectural and Interface Aspects

    SciTech Connect

    Santiago, Valdivino

    2006-06-09

    This paper presents some proposals for the architecture and interfaces among the different types of processing units of the MIRAX on-board computing subsystem. The MIRAX satellite payload is composed of dedicated computers, two Hard X-Ray cameras, and one Soft X-Ray camera (the WFC flight spare unit from the BeppoSAX satellite). The architectures for the On-Board Computing Subsystem will take into account hardware or software solutions for the event preprocessing for the CdZnTe detectors. Hardware and software interface approaches will be shown, and requirements for on-board memory storage and telemetry will also be addressed.

  9. MP Salsa: a finite element computer program for reacting flow problems. Part 1--theoretical development

    SciTech Connect

    Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.

    1996-05-01

    The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve multiple coupled Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method, and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
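
    A minimal sketch of the implicit strategy named above (an inexact Newton method with a preconditioned Krylov linear solver), not MPSalsa's actual API; it assumes scipy >= 1.12 for the gmres rtol keyword, and the toy residual stands in for the assembled equations:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def inexact_newton(residual, jac_vec, u0, tol=1e-8, max_newton=20):
          u = u0.copy()
          for _ in range(max_newton):
              F = residual(u)
              if np.linalg.norm(F) < tol:
                  break
              eta = min(0.1, np.sqrt(np.linalg.norm(F)))   # loose forcing term
              J = LinearOperator((u.size, u.size), matvec=lambda v: jac_vec(u, v))
              du, _ = gmres(J, -F, rtol=eta)               # inexact linear solve
              u += du
          return u

      residual = lambda u: u + u ** 3 - 1.0                # toy nonlinear system
      jac_vec = lambda u, v: v + 3.0 * u ** 2 * v          # J(u) @ v, matrix-free
      print(inexact_newton(residual, jac_vec, np.zeros(4)))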

  10. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 2

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    The Control/Structures Integration Program, a survey of available software for control of flexible structures, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software are discussed.

  11. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  12. Computing the Average Square: An Agent-Based Introduction to Aspects of Current Psychometric Practice

    ERIC Educational Resources Information Center

    Stroup, Walter M.; Hills, Thomas; Carmona, Guadalupe

    2011-01-01

    This paper summarizes an approach to helping future educators to engage with key issues related to the application of measurement-related statistics to learning and teaching, especially in the contexts of science, mathematics, technology and engineering (STEM) education. The approach we outline has two major elements. First, students are asked to…

  13. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  14. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Görtler-like vortices are observed for Re = 1,000.
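
    A matrix-free Jacobi-preconditioned conjugate gradient sketch in the spirit of the solver described above; the 1D Laplacian stencil is a toy stand-in for the assembled LSFEM operator, and scipy >= 1.12 is assumed for the cg rtol keyword:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      n = 1000
      def apply_A(v):                      # operator applied without storing a matrix
          av = 2.0 * v.copy()
          av[:-1] -= v[1:]
          av[1:] -= v[:-1]
          return av

      A = LinearOperator((n, n), matvec=apply_A)
      diag = 2.0 * np.ones(n)              # Jacobi preconditioner: diagonal of A
      M = LinearOperator((n, n), matvec=lambda v: v / diag)
      x, info = cg(A, np.ones(n), M=M, rtol=1e-10)
      print(info, np.linalg.norm(apply_A(x) - np.ones(n)))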

  15. Quantitative Computed Tomography Protocols Affect Material Mapping and Quantitative Computed Tomography-Based Finite-Element Analysis Predicted Stiffness.

    PubMed

    Giambini, Hugo; Dragomir-Daescu, Dan; Nassr, Ahmad; Yaszemski, Michael J; Zhao, Chunfeng

    2016-09-01

    Quantitative computed tomography-based finite-element analysis (QCT/FEA) has become increasingly popular in an attempt to understand and possibly reduce vertebral fracture risk. It is known that scanning acquisition settings affect Hounsfield units (HU) of the CT voxels. Material properties assignments in QCT/FEA, relating HU to Young's modulus, are performed by applying empirical equations. The purpose of this study was to evaluate the effect of QCT scanning protocols on predicted stiffness values from finite-element models. One fresh frozen cadaveric torso and a QCT calibration phantom were scanned six times varying voltage and current and reconstructed to obtain a total of 12 sets of images. Five vertebrae from the torso were experimentally tested to obtain stiffness values. QCT/FEA models of the five vertebrae were developed for the 12 image data resulting in a total of 60 models. Predicted stiffness was compared to the experimental values. The highest percent difference in stiffness was approximately 480% (80 kVp, 110 mAs, U70), while the lowest outcome was ∼1% (80 kVp, 110 mAs, U30). There was a clear distinction between reconstruction kernels in predicted outcomes, whereas voltage did not present a clear influence on results. The potential of QCT/FEA as an improvement to conventional fracture risk prediction tools is well established. However, it is important to establish research protocols that can lead to results that can be translated to the clinical setting. PMID:27428281
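
    The material mapping itself is typically a two-step empirical chain: HU to density via the calibration phantom, then density to modulus via a power law. A hedged sketch follows; the coefficients are illustrative placeholders, not the study's calibration:

      import numpy as np

      def hu_to_modulus(hu, a=0.0, b=0.0007, c=4730.0, d=1.56):
          # rho from a linear phantom calibration (g/cm^3), then E = c * rho^d (MPa).
          rho = a + b * np.asarray(hu, dtype=float)
          return c * np.maximum(rho, 0.0) ** d

      print(hu_to_modulus([200.0, 800.0, 1400.0]))   # MPa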

  16. Computer modeling of single-cell and multicell thermionic fuel elements

    SciTech Connect

    Dickinson, J.W.; Klein, A.C.

    1996-05-01

    Modeling efforts are undertaken to perform coupled thermal-hydraulic and thermionic analysis for both single-cell and multicell thermionic fuel elements (TFE). The analysis--and the resulting MCTFE computer code (multicell thermionic fuel element)--is a steady-state finite volume model specifically designed to analyze cylindrical TFEs. It employs an iterative successive overrelaxation solution technique to solve for the temperatures throughout the TFE and a coupled thermionic routine to determine the total TFE performance. The calculated results include temperature distributions in all regions of the TFE, axial interelectrode voltages and current densities, and total TFE electrical output parameters including power, current, and voltage. MCTFE-generated results are compared with experimental data from the single-cell Topaz-II-type TFE and with multicell data from the General Atomics 3H5 TFE to benchmark the accuracy of the code methods.
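
    A dense toy version of the successive overrelaxation sweep such codes use for the temperature field; the tridiagonal system and relaxation factor are illustrative:

      import numpy as np

      def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10000):
          x = np.zeros_like(b)
          for _ in range(max_iter):
              x_old = x.copy()
              for i in range(len(b)):
                  sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                  x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
              if np.linalg.norm(x - x_old, np.inf) < tol:
                  break
          return x

      A = (np.diag(4.0 * np.ones(5)) + np.diag(-np.ones(4), 1)
           + np.diag(-np.ones(4), -1))
      print(sor_solve(A, np.ones(5)))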

  17. SAGUARO: A finite-element computer program for partially saturated porous flow problems

    NASA Astrophysics Data System (ADS)

    Easton, R. R.; Gartling, D. K.; Larson, D. E.

    1983-11-01

    SAGUARO is a finite element computer program designed to calculate two-dimensional flow of mass and energy through porous media. The media may be saturated or partially saturated. SAGUARO solves the parabolic, time-dependent mass transport equation, which accounts for the presence of partially saturated zones through the use of highly non-linear material characteristic curves. The energy equation accounts for the possibility of partially saturated regions by adjusting the thermal capacitances and thermal conductivities according to the volume fraction of water present in the local pores. Program capabilities, user instructions and a sample problem are presented in this manual.

  18. Some aspects of optimal human-computer symbiosis in multisensor geospatial data fusion

    NASA Astrophysics Data System (ADS)

    Levin, E.; Sergeyev, A.

    The vast amount of geospatial data now available provides additional opportunities for increasing targeting accuracy through geospatial data fusion. One of the most obvious operations is determining the 3D shapes and geospatial positions of targets based on overlapped 2D imagery and sensor modeling. 3D models allow the extraction of information about targets that cannot be measured directly from single, non-fused imagery. This paper describes an ongoing research effort at Michigan Tech that attempts to combine the advantages of human analysts and automated computer processing into an efficient human-computer symbiosis for geospatial data fusion. Specifically, the capabilities provided by integrating novel human-computer interaction methods, such as eye tracking and EEG, into geospatial targeting interfaces were explored. The paper describes the research performed and its results in more detail.

  19. Estimation of the physico-chemical parameters of materials based on rare earth elements with the application of computational model

    NASA Astrophysics Data System (ADS)

    Mamaev, K.; Obkhodsky, A.; Popov, A.

    2016-01-01

    The computational model, technique, and basic principles of operating a program complex for quantum-chemical calculation of the physico-chemical parameters of materials based on rare earth elements are discussed. The computing system is scalable and includes both CPU and GPU computational resources. The control and operation of computational jobs, together with Globus Toolkit 5 software, make it possible to join computer users into a unified data-processing system with a peer-to-peer architecture. CUDA software is used to integrate graphics processors into the calculation system.

  20. Symbolic algorithms for the computation of Moshinsky brackets and nuclear matrix elements

    NASA Astrophysics Data System (ADS)

    Ursescu, D.; Tomaselli, M.; Kuehl, T.; Fritzsche, S.

    2005-12-01

    To facilitate the use of the extended nuclear shell model (NSM), a FERMI module for calculating some of its basic quantities in the framework of MAPLE is provided. The Moshinsky brackets, the matrix elements for several central and non-central interactions between nuclear two-particle states, as well as their expansion in terms of Talmi integrals, are easily given within a symbolic formulation. All of these quantities are available for interactive work. Program summary: Title of program: Fermi. Catalogue identifier: ADVO. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVO. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: None. Computer for which the program is designed and others on which it has been tested: All computers with a licence for the computer algebra package MAPLE [Maple is a registered trademark of Waterloo Maple Inc., produced by the MapleSoft division of Waterloo Maple Inc.]. Installations: GSI-Darmstadt; University of Kassel (Germany). Operating systems or monitors under which the program has been tested: Windows XP, Linux 2.4. Programming language used: MAPLE 8 and 9.5 from the MapleSoft division of Waterloo Maple Inc. Memory required to execute with typical data: 30 MB. No. of lines in distributed program including test data etc.: 5742. No. of bytes in distributed program including test data etc.: 288 939. Distribution format: tar.gz. Nature of the physical problem: In order to perform calculations within the nuclear shell model (NSM), quick and reliable access to the nuclear matrix elements is required. These matrix elements, which arise from various types of forces among the nucleons, can be calculated using Moshinsky's transformation brackets between relative and center-of-mass coordinates [T.A. Brody, M. Moshinsky, Tables of Transformation Brackets, Monografias del Instituto de Fisica, Universidad Nacional Autonoma de Mexico, 1960] and by the proper use of the nuclear states in different coupling notations

  1. Parallel Computations of Natural Convection Flow in a Tall Cavity Using an Explicit Finite Element Method

    SciTech Connect

    Dunn, T.A.; McCallen, R.C.

    2000-10-17

    The Galerkin Finite Element Method was used to predict a natural convection flow in an enclosed cavity. The problem considered was a differentially heated, tall (8:1), rectangular cavity with a Rayleigh number of 3.4 x 10^5 and Prandtl number of 0.71. The incompressible Navier-Stokes equations were solved using a Boussinesq approximation for the buoyancy force. The algorithm was developed for efficient use on massively parallel computer systems. Emphasis was on time-accurate simulations. It was found that the average temperature and velocity values can be captured with a relatively coarse grid, while the oscillation amplitude and period appear to be grid sensitive and require a refined computation.

  2. Computing interaural differences through finite element modeling of idealized human heads

    PubMed Central

    Cai, Tingli; Rakerd, Brad; Hartmann, William M.

    2015-01-01

    Acoustical interaural differences were computed for a succession of idealized shapes approximating the human head-related anatomy: sphere, ellipsoid, and ellipsoid with neck and torso. Calculations were done as a function of frequency (100–2500 Hz) and for source azimuths from 10 to 90 degrees using finite element models. The computations were compared to free-field measurements made with a manikin. Compared to a spherical head, the ellipsoid produced greater large-scale variation with frequency in both interaural time differences and interaural level differences, resulting in better agreement with the measurements. Adding a torso, represented either as a large plate or as a rectangular box below the neck, further improved the agreement by adding smaller-scale frequency variation. The comparisons permitted conjectures about the relationship between details of interaural differences and gross features of the human anatomy, such as the height of the head, and length of the neck. PMID:26428792
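
    For context, the classic spherical-head baseline such finite element models refine is the Woodworth estimate ITD = a/c (theta + sin theta). A sketch with nominal head radius and sound speed (assumed values, not the paper's manikin data):

      import numpy as np

      def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
          theta = np.radians(azimuth_deg)
          return head_radius / c * (theta + np.sin(theta))

      for az in (10, 30, 60, 90):
          print(az, "deg:", round(woodworth_itd(az) * 1e6, 1), "us")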

  3. SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects

    NASA Technical Reports Server (NTRS)

    Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M

    1998-01-01

    SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of the existing soft-computing software by supporting comprehensive multidisciplinary functionalities from management tools to engineering systems. Furthermore, the built-in features help the user process/analyze information more efficiently by a friendly yet powerful interface, and will allow the user to specify user-specific processing modules, hence adding to the standard configuration of the software environment.

  4. Wing-Body Aeroelasticity Using Finite-Difference Fluid/Finite-Element Structural Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.; Kutler, Paul (Technical Monitor)

    1994-01-01

    In recent years significant advances have been made for parallel computers in both hardware and software. Now parallel computers have become viable tools in computational mechanics. Many application codes developed on conventional computers have been modified to benefit from parallel computers. Significant speedups in some areas have been achieved by parallel computations. For single-discipline use of both fluid dynamics and structural dynamics, computations have been made on wing-body configurations using parallel computers. However, only a limited amount of work has been completed in combining these two disciplines for multidisciplinary applications. The prime reason is the increased level of complication associated with a multidisciplinary approach. In this work, procedures to compute aeroelasticity on parallel computers using direct coupling of fluid and structural equations will be investigated for wing-body configurations. The parallel computer selected for computations is an Intel iPSC/860 computer which is a distributed-memory, multiple-instruction, multiple data (MIMD) computer with 128 processors. In this study, the computational efficiency issues of parallel integration of both fluid and structural equations will be investigated in detail. The fluid and structural domains will be modeled using finite-difference and finite-element approaches, respectively. Results from the parallel computer will be compared with those from the conventional computers using a single processor. This study will provide an efficient computational tool for the aeroelastic analysis of wing-body structures on MIMD type parallel computers.

  5. Theoretical and Computational Aspects of the Magnetic Confinement of Particles and Plasmas

    NASA Astrophysics Data System (ADS)

    Mehanian, Courosh

    1987-09-01

    This thesis covers various aspects of the magnetic confinement of particles and plasmas. It is composed of two separate problems which deal with two extreme limits of temperature. In the first problem, the setting is a device that is a candidate for a fusion reactor and thus represents a collection of ionized atoms at a very high temperature. The second problem concerns the magnetic confinement of a neutral hydrogen gas at a temperature low enough that a Bose-Einstein condensation occurs. The tilt stabilization of a spheromak by an energetic particle ring is analyzed. A comprehensive survey is made of numerically generated, hybrid equilibria which describe spheromak plasmas with an energetic ion ring component. Unlike the analytic treatments, neither the ion ring toroidal current nor the inverse aspect ratio is required to be small. The tilt stability of the plasma is determined by calculating the torque due to the magnetic interaction with the ion ring, assumed fixed. The tilt stability of the ring is determined by calculating the betatron frequencies of the ring particles. Bicycle-tire rings, since they flatten the separatrix axially, provide the most stabilization of the plasma per unit ion ring current. On the other hand, axially elongated, toilet-paper-tube rings are themselves the most stable. These opposing trends indicate that the configuration with optimal stability is achieved near an ion ring aspect ratio of unity and for roughly equal plasma and fast particle currents. The confinement of an atomic hydrogen gas in the trap formed by a time-varying magnetic field is investigated. The trap uses the interaction of the magnetic field with the magnetic moments of the atoms, which are kept aligned by a strong uniform field. The effect of collisions is included via a Monte Carlo algorithm and it is found that the atoms can be confined when the frequency and the current of the coils producing the time-varying field are appropriately chosen.

  6. Tying Theory To Practice: Cognitive Aspects of Computer Interaction in the Design Process.

    ERIC Educational Resources Information Center

    Mikovec, Amy E.; Dake, Dennis M.

    The new medium of computer-aided design requires changes to the creative problem-solving methodologies typically employed in the development of new visual designs. Most theoretical models of creative problem-solving suggest a linear progression from preparation and incubation to some type of evaluative study of the "inspiration." These models give…

  7. A comparison of turbulence models in computing multi-element airfoil flows

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Menter, Florian; Durbin, Paul A.; Mansour, Nagi N.

    1994-01-01

    Four different turbulence models are used to compute the flow over a three-element airfoil configuration. These models are the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, a two-equation k-omega model, and a new one-equation Durbin-Mansour model. The flow is computed using the INS2D two-dimensional incompressible Navier-Stokes solver. An overset Chimera grid approach is utilized. Grid resolution tests are presented, and manual solution-adaptation of the grid was performed. The performance of each of the models is evaluated for test cases involving different angles-of-attack, Reynolds numbers, and flap riggings. The resulting surface pressure coefficients, skin friction, velocity profiles, and lift, drag, and moment coefficients are compared with experimental data. The models produce very similar results in most cases. Excellent agreement between computational and experimental surface pressures was observed, but only moderately good agreement was seen in the velocity profile data. In general, the difference between the predictions of the different models was less than the difference between the computational and experimental data.

  8. 2nd International Symposium on Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering (REES-2015)

    NASA Astrophysics Data System (ADS)

    Tavadyan, Levon, Prof; Sachkov, Viktor, Prof; Godymchuk, Anna, Dr.; Bogdan, Anna

    2016-01-01

    The 2nd International Symposium «Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering» (REES2015) was jointly organized by Tomsk State University (Russia), the National Academy of Sciences (Armenia), Shenyang Polytechnic University (China), the Moscow Institute of Physics and Engineering (Russia), the Siberian Physical-Technical Institute (Russia), and Tomsk Polytechnic University (Russia) on September 7-15, 2015, in Belokuriha, Russia. The Symposium provided high-quality presentations and gathered engineers, scientists, academicians, and young researchers working in the field of rare and rare earth elements mining, modification, separation, elaboration and application, in order to facilitate the aggregation and sharing of interests and results for better collaboration and activity visibility. The goal of REES2015 was to bring researchers and practitioners together to share the latest knowledge on rare and rare earth element technologies. The Symposium was aimed at presenting new trends in rare and rare earth elements mining, research and separation and recent achievements in advanced materials elaboration and developments for different purposes, as well as strengthening the already existing contacts between manufacturers, highly-qualified specialists and young scientists. The topics of REES2015 were: (1) Problems of extraction and separation of rare and rare earth elements; (2) Methods and approaches to the separation and isolation of rare and rare earth elements with ultra-high purity; (3) Industrial technologies of production and separation of rare and rare earth elements; (4) Economic aspects in the technology of rare and rare earth elements; and (5) Rare and rare earth based materials (application in metallurgy, catalysis, medicine, optoelectronics, etc.). We want to thank the Organizing Committee, the Universities and Sponsors supporting the Symposium, and everyone who contributed to the organization of the event and to

  9. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
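
    A brute-force sketch of the robustness function the adjoint method is designed to avoid, assuming an entrywise-bounded info-gap model around a nominal Ax = b system; all names and tolerances are illustrative:

      import numpy as np

      def robustness(solve, nominal, performance_ok, alphas, n_samples=200, seed=0):
          # Largest horizon alpha for which every sampled model still performs.
          rng = np.random.default_rng(seed)
          best = 0.0
          for alpha in alphas:
              perturb = rng.uniform(-alpha, alpha, (n_samples,) + nominal.shape)
              if all(performance_ok(solve(nominal + p)) for p in perturb):
                  best = alpha
              else:
                  break
          return best

      A = np.diag([4.0, 5.0, 6.0]); b = np.ones(3)
      solve = lambda An: np.linalg.solve(An, b)
      x0 = solve(A)
      ok = lambda x: np.linalg.norm(x - x0) < 0.05
      print(robustness(solve, A, ok, np.linspace(0.0, 1.0, 21)))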

  10. A linear-scaling spectral-element method for computing electrostatic potentials.

    PubMed

    Watson, Mark A; Hirao, Kimihiko

    2008-11-14

    A new linear-scaling method is presented for the fast numerical evaluation of the electronic Coulomb potential. Our approach uses a simple real-space partitioning of the system into cubic cells and a spectral-element representation of the density in a tensorial basis of high-order Chebyshev polynomials. Electrostatic interactions between non-neighboring cells are described using the fast multipole method. The remaining near-field interactions are computed in the tensorial basis as a sum of differential contributions by exploiting the numerical low-rank separability of the Coulomb operator. The method is applicable to arbitrary charge densities, avoids the Poisson equation, and does not involve the solution of any systems of linear equations. Above all, an adaptive resolution of the Chebyshev basis in each cell facilitates the accurate and efficient treatment of molecular systems. We demonstrate the performance of our implementation for quantum chemistry with benchmark calculations on the noble gas atoms, long-chain alkanes, and diamond fragments. We conclude that the spectral-element method can be a competitive tool for the accurate computation of electrostatic potentials in large-scale molecular systems. PMID:19045386
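
    A one-cell, one-dimensional sketch of the spectral-element ingredient: represent a density slice on Chebyshev-Gauss-Lobatto points and evaluate it anywhere in the cell. The order and test density are illustrative; the per-cell order is the adaptive knob the paper exploits:

      import numpy as np
      from numpy.polynomial import chebyshev as C

      order = 16
      nodes = np.cos(np.pi * np.arange(order + 1) / order)   # CGL points in [-1, 1]
      rho = np.exp(-4.0 * nodes ** 2)                        # toy density slice
      coef = C.chebfit(nodes, rho, order)                    # interpolating expansion
      xs = np.linspace(-1.0, 1.0, 5)
      print(np.max(np.abs(C.chebval(xs, coef) - np.exp(-4.0 * xs ** 2))))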

  11. Computational aspects of crack growth in sandwich plates from reinforced concrete and foam

    NASA Astrophysics Data System (ADS)

    Papakaliatakis, G.; Panoskaltsis, V. P.; Liontas, A.

    2012-12-01

    In this work we study the initiation and propagation of cracks in sandwich plates made from reinforced concrete at the outer layers and a polymeric foam material in the core. A nonlinear finite element approach is followed. Concrete is modeled as an elastoplastic material, with its tensile behavior and damage taken into account. Foam is modeled as a crushable, isotropic compressible material. We analyze slabs with a pre-existing macrocrack at the position of the maximum bending moment, and we study the macrocrack propagation as well as the conditions under which crack arrest occurs.

  12. Computational aspects of hot-wire identification of thermal conductivity and diffusivity under high temperature

    NASA Astrophysics Data System (ADS)

    Vala, Jiří; Jarošová, Petra

    2016-07-01

    Development of advanced materials resistant to high temperature, needed namely for the design of heat storage for low-energy and passive buildings, requires simple, inexpensive and reliable methods for identifying their temperature-sensitive thermal conductivity and diffusivity, covering both a well-designed experimental setup and the implementation of robust and effective computational algorithms. Special geometrical configurations offer the possibility of quasi-analytical evaluation of temperature development for direct problems, whereas inverse problems of simultaneous evaluation of thermal conductivity and diffusivity must be handled carefully, using least-squares (minimum variance) arguments. This paper demonstrates the proper mathematical and computational approach to such a model problem, thanks to the radial symmetry of hot-wire measurements, including its numerical implementation.
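
    The quasi-analytical core of the hot-wire evaluation: for an ideal line source, the late-time temperature rise is dT(t) = q/(4 pi lambda) ln(t) + const, so conductivity follows from the slope of dT versus ln t. A sketch on synthetic data (q, lambda, and the noise level are illustrative):

      import numpy as np

      q, lam_true = 20.0, 0.8                       # W/m, W/(m K)
      t = np.linspace(5.0, 120.0, 200)              # s, early transient excluded
      dT = q / (4.0 * np.pi * lam_true) * np.log(t) + 0.3
      dT += np.random.default_rng(1).normal(0.0, 0.01, t.size)
      slope = np.polyfit(np.log(t), dT, 1)[0]       # least-squares fit
      print("lambda =", q / (4.0 * np.pi * slope), "W/(m K)")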

  13. CAVASS: a computer-assisted visualization and analysis software system - image processing aspects

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad

    2007-03-01

    The development of the concepts within 3DVIEWNIX and of the software system 3DVIEWNIX itself dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images came out from our group, 3DVIEWNIX released in 1993, being the most recent, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and their highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interface to other systems such as CAD/CAM software, ITK, and statistical packages; (6) easy to use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions based on assessing performance by utilizing a regular (6 MB), large (241 MB), and a super (873 MB) 3D image data set are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is easy and ready to use in applications since it provides an easy to use GUI. The users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less

  14. Delta: An object-oriented finite element code architecture for massively parallel computers

    SciTech Connect

    Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.

    1996-02-01

    Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different Engineering Science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.

  15. Adaptive finite element simulation of flow and transport applications on parallel computers

    NASA Astrophysics Data System (ADS)

    Kirk, Benjamin Shelton

    The subject of this work is the adaptive finite element simulation of problems arising in flow and transport applications on parallel computers. Of particular interest are new contributions to adaptive mesh refinement (AMR) in this parallel high-performance context, including novel work on data structures, treatment of constraints in a parallel setting, generality and extensibility via object-oriented programming, and the design/implementation of a flexible software framework. This technology and software capability then enables more robust, reliable treatment of multiscale--multiphysics problems and specific studies of fine scale interaction such as those in biological chemotaxis (Chapter 4) and high-speed shock physics for compressible flows (Chapter 5). The work begins by presenting an overview of key concepts and data structures employed in AMR simulations. Of particular interest is how these concepts are applied in the physics-independent software framework which is developed here and is the basis for all the numerical simulations performed in this work. This open-source software framework has been adopted by a number of researchers in the U.S. and abroad for use in a wide range of applications. The dynamic nature of adaptive simulations pose particular issues for efficient implementation on distributed-memory parallel architectures. Communication cost, computational load balance, and memory requirements must all be considered when developing adaptive software for this class of machines. Specific extensions to the adaptive data structures to enable implementation on parallel computers is therefore considered in detail. The libMesh framework for performing adaptive finite element simulations on parallel computers is developed to provide a concrete implementation of the above ideas. This physics-independent framework is applied to two distinct flow and transport applications classes in the subsequent application studies to illustrate the flexibility of the

  16. An improved method for the automatic mapping of computed tomography numbers onto finite element models.

    PubMed

    Taddei, Fulvia; Pancanti, Alberto; Viceconti, Marco

    2004-01-01

    The assignment of bone tissue material properties is a fundamental step in the generation of subject-specific finite element models from computed tomography data. The aim of the present work is to investigate the influence of the material mapping algorithm on the results predicted by the finite element analysis. Two models, a coarse and a refined one, of a human ilium, femur and tibia were generated from CT data and used for the tests. In addition, a convergence analysis was carried out for the femur model, using six refinement levels, to verify whether the inclusion of the material properties would significantly alter the convergence behaviour of the mesh. The results showed that the choice of the mapping algorithm influences the material distribution. However, this did not always propagate into the finite element results. The difference between the maximum Von Mises stresses always remained lower than 10%, apart from one case where it reached 13%. However, the global behaviour of the meshes showed more marked differences between the two algorithms: in the finer meshes of the two long bones, 20-30% of the bone volume showed differences in the predicted Von Mises stresses greater than 10%. The convergence behaviour of the model was not worsened by the introduction of inhomogeneous material properties. The software was made available in the public domain. PMID:14644599

  17. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_θ, which is based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables with the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator performs better than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_θ achieves the best match with the true θ field, and that it is asymptotic to the true θ variation field, with a promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element classes and node classes can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connections of adjacent elements and adjacent nodes, thus avoiding time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different
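
    A minimal reading of the estimator on a discrete velocity field, assuming θ = atan2(v, u) and taking the gradient magnitude of the unwrapped direction field as the indicator; the grid and field are illustrative:

      import numpy as np

      def velocity_angle_indicator(u, v, dx, dy):
          theta = np.arctan2(v, u)
          dtx = np.gradient(np.unwrap(theta, axis=1), dx, axis=1)
          dty = np.gradient(np.unwrap(theta, axis=0), dy, axis=0)
          return np.hypot(dtx, dty)        # large where streamlines curve sharply

      x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
      u, v = -y, x                         # solid-body vortex
      print(velocity_angle_indicator(u, v, 2 / 63, 2 / 63).max())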

  18. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  19. Precise Boundary Element Computation of Protein Transport Properties: Diffusion Tensors, Specific Volume, and Hydration

    PubMed Central

    Aragon, Sergio; Hahn, David K.

    2006-01-01

    A precise boundary element method for the computation of hydrodynamic properties has been applied to the study of a large suite of 41 soluble proteins ranging from 6.5 to 377 kDa in molecular mass. A hydrodynamic model consisting of a rigid protein excluded volume, obtained from crystallographic coordinates, surrounded by a uniform hydration thickness has been found to yield properties in excellent agreement with experiment. The hydration thickness was determined to be δ = 1.1 ± 0.1 Å. Using this value, standard deviations from experimental measurements are 2% for the specific volume, 2% for the translational diffusion coefficient, and 6% for the rotational diffusion coefficient. These deviations are comparable to experimental errors in these properties. The precision of the boundary element method allows the unified description of all of these properties with a single hydration parameter, thus far not achieved with other methods. An approximate method for computing transport properties with a statistical precision of 1% or better (compared to 0.1–0.2% for the full computation) is also presented. We have also estimated the total amount of hydration water with a typical −9% deviation from experiment in the case of monomeric proteins. Both the water of hydration and the more precise translational diffusion data hint that some multimeric proteins may not have the same solution structure as that in the crystal because the deviations are systematic and larger than in the monomeric case. On the other hand, the data for monomeric proteins conclusively show that there is no difference in the protein structure going from the crystal into solution. PMID:16714342

  20. Methodological aspects of in vitro assessment of bio-accessible risk element pool in urban particulate matter.

    PubMed

    Sysalová, Jiřina; Száková, Jiřina; Tremlová, Jana; Kašparovská, Kateřina; Kotlík, Bohumil; Tlustoš, Pavel; Svoboda, Petr

    2014-11-01

    In vitro tests simulating the release of elements from inhaled urban particulate matter (PM) with artificial lung fluids (Gamble's and Hatch's solutions) and simulated gastric and pancreatic solutions were applied to estimate hazardous element (As, Cd, Cr, Hg, Mn, Ni, Pb and Zn) bio-accessibility in this material. Inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS) were employed for the element determination in the extracted solutions. The effects of the extraction agent used, the extraction time, the sample-to-extractant ratio, the sample particle size and the individual element properties were evaluated. Different patterns were observed for individual elements when comparing Hatch's solution with the simulated gastric and pancreatic solutions. For Hatch's solution, a decreasing sample-to-extractant ratio in a PM size fraction of <0.063 mm resulted in increasing leached contents of all investigated elements. As already proved for other operationally defined extraction procedures, the extractable element portions are affected not only by their mobility in the particulate matter itself but also by the sample preparation procedure. Results of the simulated in vitro tests can be applied as an alternative method for a reasonable estimation of the bio-accessible element portions in particulate matter, which, consequently, motivates further examinations, including potential in vivo assessments. PMID:25123460

  1. Intelligent computer-aided diagnosis system for breast MRI combining kinetic and morphological aspects

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; Meyer-Bäse, Anke; Lange, Oliver

    2008-04-01

    An intelligent medical system based on a radial basis neural network is applied to the automatic classification of suspicious lesions in breast MRI and compared with two standard mammographic reading methods. Such systems represent an important component of future sophisticated computer-aided diagnosis systems and enable the extraction of spatial and temporal features of dynamic MRI data stemming from patients with confirmed lesion diagnoses. Intelligent medical systems combining both kinetics and lesion morphology are expected to have substantial implications for healthcare policy by contributing to the diagnosis of indeterminate breast lesions by non-invasive imaging.

  2. Some computational aspects of the HALS (harmonic analysis of x-ray line shape) method

    SciTech Connect

    Moshkina, T.I.; Nakhmanson, M.S.

    1986-02-01

    This paper discusses the problem of distinguishing the analytical line from the background and approximates the background component. One of the constituent parts of the program package in the procedural-mathematical software for x-ray investigations of polycrystalline substances in application to the DRON-3, DRON-2 and ADP-1 diffractometers is the SSF system of programs, which is designed for determining the parameters of the substructure of materials. The SSF system is tailored not only to Unified Series (ES) computers, but also to the M-6000 and SM-1 minicomputers.

  3. Computer-Aided Drug Design (CADD): Methodological Aspects and Practical Applications in Cancer Research

    NASA Astrophysics Data System (ADS)

    Gianti, Eleonora

    Computer-Aided Drug Design (CADD) has deservedly gained increasing popularity in modern drug discovery (Schneider, G.; Fechner, U. 2005), whether applied to academic basic research or the pharmaceutical industry pipeline. In this work, after reviewing theoretical advancements in CADD, we integrated novel and state-of-the-art methods to assist in the design of small-molecule inhibitors of current cancer drug targets, specifically: Androgen Receptor (AR), a nuclear hormone receptor required for carcinogenesis of Prostate Cancer (PCa); Signal Transducer and Activator of Transcription 5 (STAT5), implicated in PCa progression; and Epstein-Barr Nuclear Antigen-1 (EBNA1), essential to the Epstein Barr Virus (EBV) during latent infections. Androgen Receptor. With the aim of generating binding mode hypotheses for a class (Handratta, V.D. et al. 2005) of dual AR/CYP17 inhibitors (CYP17 is a key enzyme for androgens biosynthesis and therefore implicated in PCa development), we successfully implemented a receptor-based computational strategy based on flexible receptor docking (Gianti, E.; Zauhar, R.J. 2012). Then, with the ultimate goal of identifying novel AR binders, we performed Virtual Screening (VS) by Fragment-Based Shape Signatures, an improved version of the original method developed in our Laboratory (Zauhar, R.J. et al. 2003), and we used the results to fully assess the high-level performance of this innovative tool in computational chemistry. STAT5. The SRC Homology 2 (SH2) domain of STAT5 is responsible for phospho-peptide recognition and activation. As a keystone of Structure-Based Drug Design (SBDD), we characterized key residues responsible for binding. We also generated a model of STAT5 receptor bound to a phospho-peptide ligand, which was validated by docking publicly known STAT5 inhibitors. Then, we performed Shape Signatures- and docking-based VS of the ZINC database (zinc.docking.org), followed by Molecular Mechanics Generalized Born Surface Area (MMGBSA

  4. Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Houck, J. A.

    1976-01-01

    A study was conducted to determine the effects of degrading a rotating blade element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied, reduction of number of blades, reduction of number of blade segments, and increasing the integration interval, which has the corresponding effect of increasing blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model, and in general, they are in order of minimum impact on model validity: (1) reduction of number of blade segments; (2) reduction of number of blades; and (3) increase of integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.

  5. Computational analysis of noise reduction devices in axial fans with stabilized finite element formulations

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Sheard, A. G.; Tezduyar, T. E.

    2012-12-01

    The paper illustrates how a computational fluid mechanics technique, based on stabilized finite element formulations, can be used in the analysis of noise reduction devices in axial fans. Among the noise control alternatives, the study focuses on the use of end-plates fitted at the blade tips to control the leakage flow and the related aeroacoustic sources. The end-plate shape is configured to govern the momentum transfer to the swirling flow at the blade tip. This flow control mechanism has been found to have a positive link to the fan aeroacoustics. The complex physics of the swirling flow at the tip, developing under the influence of the end-plate, is governed by the rolling up of the jet-like leakage flow. The RANS modelling used in the computations is based on the streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin methods, supplemented with the DRDJ stabilization. Judicious determination of the stabilization parameters involved is also a part of our computational technique and is described for each component of the stabilized formulation. We describe the flow physics underlying the design of the noise control device and illustrate the aerodynamic performance. Then we investigate the numerical performance of the formulation by analysing the inner workings of the stabilization operators and of their interaction with the turbulence model.
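
    For reference, one commonly used definition of the SUPG stabilization parameter in the stabilized finite element literature is the following; the paper determines its own parameters judiciously, so this is only a representative form:

$$\tau_{\mathrm{SUPG}} = \left[\left(\frac{2}{\Delta t}\right)^{2} + \left(\frac{2\,\lVert\mathbf{u}\rVert}{h}\right)^{2} + \left(\frac{4\nu}{h^{2}}\right)^{2}\right]^{-1/2}$$

    where $\Delta t$ is the time step, $\mathbf{u}$ the local velocity, $h$ a measure of the element length, and $\nu$ the kinematic viscosity.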

  6. MPSalsa: a finite element computer program for reacting flow problems. Part 2 - user's guide

    SciTech Connect

    Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.

    1996-09-01

    This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

  7. Use of SNP-arrays for ChIP assays: computational aspects.

    PubMed

    Muro, Enrique M; McCann, Jennifer A; Rudnicki, Michael A; Andrade-Navarro, Miguel A

    2009-01-01

    The simultaneous genotyping of thousands of single nucleotide polymorphisms (SNPs) in a genome using SNP-Arrays is a very important tool that is revolutionizing genetics and molecular biology. We expanded the utility of this technique by using it following chromatin immunoprecipitation (ChIP) to assess the multiple genomic locations protected by a protein complex recognized by an antibody. The power of this technique is illustrated through an analysis of the changes in histone H4 acetylation, a marker of open chromatin and transcriptionally active genomic regions, which occur during differentiation of human myoblasts into myotubes. The findings have been validated by the observation of a significant correlation between the detected histone modifications and the expression of the nearby genes, as measured by DNA expression microarrays. This chapter focuses on the computational analysis of the data. PMID:19588091

  8. FEMAX finite-element package for computing three-dimensional time-domain electromagnetic fields in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Mur, G.

    An efficient and accurate finite-element package is described for computing transient as well as time-harmonic three-dimensional electromagnetic fields in inhomogeneous media. For the expansion of the field in an inhomogeneous configuration, edge elements are used along the interfaces between media with different material properties, to allow for the continuity conditions of the field across these interfaces; nodal elements are used in the remaining homogeneous subdomains. In the domain of computation, the package decides locally which type of element has to be used to obtain the user-specified accuracy in modeling the field. In this way optimum results are obtained with regard to both computational efficiency and the desired accuracy. The electromagnetic compatibility relations are implemented to avoid spurious solutions.

  9. Three-Dimensional Effects in Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.

    2003-01-01

    In an effort to discover the causes for disagreement between previous two-dimensional (2-D) computations and nominally 2-D experiment for flow over the three-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side-wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using three-dimensional (3-D) structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects on the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict similar trends to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum lift conditions.

  10. A study of equation solvers for linear and non-linear finite element analysis on parallel processing computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Kamat, Manohar P.

    1992-01-01

    Concurrent computing environments provide the means to achieve very high performance for finite element analysis of systems, provided the algorithms take advantage of multiple processors. The authors have examined several algorithms for both linear and nonlinear finite element analysis. The performance of these algorithms on an Alliant FX/80 parallel supercomputer has been studied. For single load case linear analysis, the optimal solution algorithm is strongly problem dependent. For multiple load cases or nonlinear analysis through a modified Newton-Raphson method, decomposition algorithms are shown to have a decided advantage over element-by-element preconditioned conjugate gradient algorithms.

  11. Preprocessor and postprocessor computer programs for a radial-flow finite-element model

    USGS Publications Warehouse

    Pucci, A.A., Jr.; Pope, D.A.

    1987-01-01

    Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program: (1) generates a triangular finite element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file to use with the radial-flow model. The postprocessor program is a version of the radial-flow model, which was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes. (Author's abstract)

  12. Computational aspects of the nonlinear normal mode initialization of the GLAS 4th order GCM

    NASA Technical Reports Server (NTRS)

    Navon, I. M.; Bloom, S. C.; Takacs, L.

    1984-01-01

    Using the normal modes of the GLAS 4th Order Model, a Machenhauer nonlinear normal mode initialization (NLNMI) was carried out for the external vertical mode using the GLAS 4th Order shallow water equations model for an equivalent depth corresponding to that associated with the external vertical mode. A simple procedure was devised which was directed at identifying computational modes by following the rate of increase of BAL_M, the partial (with respect to the zonal wavenumber m) sum of squares of the time change of the normal mode coefficients (for fixed vertical mode index) varying over the latitude index L of symmetric or antisymmetric gravity waves. A working algorithm is presented which speeds up the convergence of the iterative Machenhauer NLNMI. A 24 h integration using the NLNMI state was carried out using both Matsuno and leap-frog time-integration schemes; these runs were then compared to a 24 h integration starting from a non-initialized state. The maximal impact of the nonlinear normal mode initialization was found to occur 6-10 hours after the initial time.

  13. RELATIONSHIP BETWEEN RIGIDITY OF EXTERNAL FIXATOR AND NUMBER OF PINS: COMPUTER ANALYSIS USING FINITE ELEMENTS

    PubMed Central

    Sternick, Marcelo Back; Dallacosta, Darlan; Bento, Daniela Águida; do Reis, Marcelo Lemos

    2015-01-01

    Objective: To analyze the rigidity of a platform-type external fixator assembly, according to different numbers of pins on each clamp. Methods: Computer simulation on a large-sized Cromus dynamic external fixator (Baumer SA) was performed using a finite element method, in accordance with the standard ASTM F1541. The models were generated with approximately 450,000 quadratic tetrahedral elements. Assemblies with two, three and four Schanz pins of 5.5 mm in diameter in each clamp were compared. Every model was subjected to a maximum force of 200 N, divided into 10 sub-steps. For the components, the behavior of the material was assumed to be linear, elastic, isotropic and homogeneous. For each model, the rigidity of the assembly and the Von Mises stress distribution were evaluated. Results: The rigidity of the system was 307.6 N/mm for two pins, 369.0 N/mm for three and 437.9 N/mm for four. Conclusion: The results showed that four Schanz pins in each clamp promoted rigidity that was 19% greater than in the configuration with three pins and 42% greater than with two pins. Higher tension occurred in configurations with fewer pins. In the models analyzed, the maximum tension occurred on the surface of the pin, close to the fixation area. PMID:27047879

  14. Predicting mouse vertebra strength with micro-computed tomography-derived finite element analysis.

    PubMed

    Nyman, Jeffry S; Uppuganti, Sasidhar; Makowski, Alexander J; Rowland, Barbara J; Merkel, Alyssa R; Sterling, Julie A; Bredbenner, Todd L; Perrien, Daniel S

    2015-01-01

    As in clinical studies, finite element analyses (FEA) developed from computed tomography (CT) images of bones are useful in pre-clinical rodent studies assessing treatment effects on vertebral body (VB) strength. Since strength predictions from microCT-derived FEAs (μFEA) have not been validated against experimental measurements of mouse VB strength, a parametric analysis exploring material and failure definitions was performed to determine whether elastic μFEAs with linear failure criteria could reasonably assess VB strength in two studies, treatment and genetic, with differences in bone volume fraction between the control and the experimental groups. VBs were scanned with a 12-μm voxel size, and voxels were directly converted to 8-node, hexahedral elements. The coefficient of determination, R^2, between predicted VB strength and experimental VB strength, as determined from compression tests, was 62.3% for the treatment study and 85.3% for the genetic study when using a homogeneous tissue modulus (E_t) of 18 GPa for all elements, a failure volume of 2%, and an equivalent failure strain of 0.007. The difference between prediction and measurement (that is, error) increased when lowering the failure volume to 0.1% or increasing it to 4%. Using inhomogeneous tissue density-specific moduli improved the R^2 between predicted and experimental strength when compared with uniform E_t = 18 GPa. Also, the optimum failure volume is higher for the inhomogeneous than for the homogeneous material definition. Regardless of model assumptions, μFEA can assess differences in murine VB strength between experimental groups when the expected difference in strength is at least 20%. PMID:25908967
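
    As an aside, the linear failure criterion described above (scale a unit-load elastic solution until a prescribed volume fraction of the model exceeds the equivalent failure strain) is simple enough to sketch; the following Python fragment assumes equal element volumes and invented input data:

```python
import numpy as np

def predicted_strength(strains_unit_load, failure_strain=0.007, failure_volume=0.02):
    """Scale a linear unit-load solution until `failure_volume` of the model
    volume exceeds `failure_strain` (equal element volumes assumed)."""
    s = np.sort(np.abs(np.asarray(strains_unit_load)))[::-1]
    k = max(int(np.ceil(failure_volume * s.size)), 1)
    # the k-th largest element strain must just reach the failure strain
    return failure_strain / s[k - 1]

# Example: synthetic element strains from a unit (1 N) compressive load
rng = np.random.default_rng(0)
strains = rng.lognormal(mean=-9.0, sigma=0.5, size=10_000)
print(f"predicted failure load: {predicted_strength(strains):.1f} N")
```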

  15. Development of a numerical computer code and circuit element models for simulation of firing systems

    SciTech Connect

    Carpenter, K.H. (Dept. of Electrical and Computer Engineering)

    1990-07-02

    Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MSDOS-based personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U. C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite-cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a non-ideal switch, and which can be added to the existing SPICE circuits without changing the SPICE code itself. The effect of fast rise-time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.

  16. Predicting mouse vertebra strength with micro-computed tomography-derived finite element analysis

    PubMed Central

    Nyman, Jeffry S; Uppuganti, Sasidhar; Makowski, Alexander J; Rowland, Barbara J; Merkel, Alyssa R; Sterling, Julie A; Bredbenner, Todd L; Perrien, Daniel S

    2015-01-01

    As in clinical studies, finite element analyses (FEA) developed from computed tomography (CT) images of bones are useful in pre-clinical rodent studies assessing treatment effects on vertebral body (VB) strength. Since strength predictions from microCT-derived FEAs (μFEA) have not been validated against experimental measurements of mouse VB strength, a parametric analysis exploring material and failure definitions was performed to determine whether elastic μFEAs with linear failure criteria could reasonably assess VB strength in two studies, treatment and genetic, with differences in bone volume fraction between the control and the experimental groups. VBs were scanned with a 12-μm voxel size, and voxels were directly converted to 8-node, hexahedral elements. The coefficient of determination, R^2, between predicted VB strength and experimental VB strength, as determined from compression tests, was 62.3% for the treatment study and 85.3% for the genetic study when using a homogeneous tissue modulus (E_t) of 18 GPa for all elements, a failure volume of 2%, and an equivalent failure strain of 0.007. The difference between prediction and measurement (that is, error) increased when lowering the failure volume to 0.1% or increasing it to 4%. Using inhomogeneous tissue density-specific moduli improved the R^2 between predicted and experimental strength when compared with uniform E_t = 18 GPa. Also, the optimum failure volume is higher for the inhomogeneous than for the homogeneous material definition. Regardless of model assumptions, μFEA can assess differences in murine VB strength between experimental groups when the expected difference in strength is at least 20%. PMID:25908967

  17. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  18. Computer simulations and theoretical aspects of the depletion interaction in protein-oligomer mixtures

    NASA Astrophysics Data System (ADS)

    Bončina, M.; Reščič, J.; Kalyuzhnyi, Yu. V.; Vlachy, V.

    2007-07-01

    The depletion interaction between proteins caused by addition of either uncharged or partially charged oligomers was studied using the canonical Monte Carlo simulation technique and the integral equation theory. A protein molecule was modeled in two different ways: either as (i) a hard sphere of diameter 30.0Å with net charge 0, or +5, or (ii) as a hard sphere with discrete charges (depending on the pH of solution) of diameter 45.4Å. The oligomers were pictured as tangentially jointed, uncharged, or partially charged, hard spheres. The ions of a simple electrolyte present in solution were represented by charged hard spheres distributed in the dielectric continuum. In this study we were particularly interested in changes of the protein-protein pair-distribution function, caused by addition of the oligomer component. In agreement with previous studies we found that addition of a nonadsorbing oligomer reduces the phase stability of solution, which is reflected in the shape of the protein-protein pair-distribution function. The value of this function at protein-protein contact increases with increasing oligomer concentration, and is larger for charged oligomers. The range of the depletion interaction and its strength also depend on the length (number of monomer units) of the oligomer chain. The integral equation theory, based on the Wertheim Ornstein-Zernike approach applied in this study, was found to be in fair agreement with Monte Carlo results only for very short oligomers. The computer simulations for a model mimicking the lysozyme molecule (ii) are in qualitative agreement with small-angle neutron experiments for lysozyme-dextran mixtures.

  19. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    SciTech Connect

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  20. Computational modeling of chemo-electro-mechanical coupling: A novel implicit monolithic finite element approach

    PubMed Central

    Wong, J.; Göktepe, S.; Kuhl, E.

    2014-01-01

    Computational modeling of the human heart allows us to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of an individual heart cell translates into global cardiac function. Here we propose a novel, unified strategy to simulate excitable biological systems across three biological scales. To discretize the governing chemical, electrical, and mechanical equations in space, we propose a monolithic finite element scheme. We apply a highly efficient and inherently modular global-local split, in which the deformation and the transmembrane potential are introduced globally as nodal degrees of freedom, while the chemical state variables are treated locally as internal variables. To ensure unconditional algorithmic stability, we apply an implicit backward Euler finite difference scheme to discretize the resulting system in time. To increase algorithmic robustness and guarantee optimal quadratic convergence, we suggest an incremental iterative Newton-Raphson scheme. The proposed algorithm allows us to simulate the interaction of chemical, electrical, and mechanical fields during a representative cardiac cycle on a patient-specific geometry, robustly and stably, with calculation times on the order of four days on a standard desktop computer. PMID:23798328
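
    The global-local structure described above can be illustrated on a single excitable cell. The sketch below (hypothetical, not the authors' cardiac model) integrates a FitzHugh-Nagumo-type two-variable system with backward Euler in time: the potential is the "global" unknown solved by Newton-Raphson, while the recovery variable is treated as a "local" internal variable condensed out at each Newton iterate:

```python
# FitzHugh-Nagumo-type cell: global variable phi (potential), local
# internal variable r (recovery); backward Euler + Newton-Raphson on phi.
a, b, c, eps = 0.7, 0.8, 3.0, 1.0 / 3.0

def local_update(phi, r_old, dt):
    """Backward-Euler update of the internal variable for a given phi
    (linear in r, so it is solvable in closed form 'locally')."""
    return (r_old + dt * eps * (phi + a)) / (1.0 + dt * eps * b)

def step(phi_old, r_old, dt, tol=1e-10, max_iter=20):
    phi = phi_old
    for _ in range(max_iter):
        r = local_update(phi, r_old, dt)            # local (internal) solve
        f = phi - phi_old - dt * c * (phi - phi**3 / 3.0 - r)
        # consistent tangent, including dr/dphi from the local update
        dr_dphi = dt * eps / (1.0 + dt * eps * b)
        df = 1.0 - dt * c * (1.0 - phi**2 - dr_dphi)
        dphi = -f / df                              # global Newton correction
        phi += dphi
        if abs(dphi) < tol:
            break
    return phi, local_update(phi, r_old, dt)

phi, r, dt = 1.0, 0.0, 0.1
for _ in range(100):
    phi, r = step(phi, r, dt)
print(f"state after 10 time units: phi={phi:.4f}, r={r:.4f}")
```

    The quadratic convergence claimed for the monolithic scheme hinges on the tangent being consistent with the local update, as in the `dr_dphi` term above.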

  1. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.

    2016-05-01

    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element A_Qg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.

  2. COYOTE: a finite element computer program for nonlinear heat conduction problems. Part I, theoretical background.

    SciTech Connect

    Glass, Micheal W.; Hogan, Roy E., Jr.; Gartling, David K.

    2010-03-01

    The need for the engineering analysis of systems in which the transport of thermal energy occurs primarily through a conduction process is a common situation. For all but the simplest geometries and boundary conditions, analytic solutions to heat conduction problems are unavailable, thus forcing the analyst to call upon some type of approximate numerical procedure. A wide variety of numerical packages currently exist for such applications, ranging in sophistication from the large, general purpose, commercial codes, such as COMSOL, COSMOSWorks, ABAQUS and TSS, to codes written by individuals for specific problem applications. The original purpose for developing the finite element code described here, COYOTE, was to bridge the gap between the complex commercial codes and the more simplistic, individual application programs. COYOTE was designed to treat most of the standard conduction problems of interest with a user-oriented input structure and format that was easily learned and remembered. Because of its architecture, the code has also proved useful for research in numerical algorithms and development of thermal analysis capabilities. This general philosophy has been retained in the current version of the program, COYOTE, Version 5.0, though the capabilities of the code have been significantly expanded. A major change in the code is its availability on parallel computer architectures and the increase in problem complexity and size that this implies. The present document describes the theoretical and numerical background for the COYOTE program. This volume is intended as a background document for the user's manual. Potential users of COYOTE are encouraged to become familiar with the present report and the simple example analyses before using the program. The theoretical and numerical background for the finite element computer program, COYOTE, is presented in detail. COYOTE is designed for the multi-dimensional analysis of nonlinear heat conduction problems.

  3. Integrability aspects and soliton solutions for the inhomogeneous reduced Maxwell-Bloch system in nonlinear optics with symbolic computation

    NASA Astrophysics Data System (ADS)

    Hao, Hui-Qin; Zhang, Jian-Wen

    2015-05-01

    In this paper, we investigate the inhomogeneous reduced Maxwell-Bloch system, which describes the propagation of intense ultra-short optical pulses through an inhomogeneous two-level dielectric medium. Through symbolic computation, the integrability aspects, including the Painlevé integrability condition, Lax pair and infinite conservation laws, are derived. By virtue of the Darboux transformation method, one- and two-soliton solutions are generated on the nonvanishing background, including bright solitons, dark solitons, periodic solutions and some two-soliton solutions. Asymptotic analysis is performed to verify the elastic interaction between the two solitons. Furthermore, the dynamic properties of those solitons are discussed with the aid of figures. The results may be useful in studies of ultrashort-pulse propagation in two-level dielectric media.

  4. A new algorithm for computing primitive elements in GF(q^2)

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1978-01-01

    A new method is developed to find primitive elements in the Galois field of q^2 elements, GF(q^2), where q is a Mersenne prime. Such primitive elements are needed to implement transforms over GF(q^2).
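
    The abstract does not give the algorithm itself, but the notion of a primitive element is easy to demonstrate by brute force for a small Mersenne prime. The sketch below builds GF(q^2) for q = 7 as F_q[x]/(x^2 + 1), which is valid because -1 is a quadratic non-residue mod 7; it illustrates the target objects, not the paper's method:

```python
from itertools import product

q = 7  # a Mersenne prime (2^3 - 1)

def gf_mul(a, b):
    """Multiply (a0 + a1*x)(b0 + b1*x) modulo (x^2 + 1, q)."""
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % q, (a0 * b1 + a1 * b0) % q)

def order(g):
    """Multiplicative order of a nonzero element of GF(q^2)."""
    e, acc = 1, g
    while acc != (1, 0):
        acc = gf_mul(acc, g)
        e += 1
    return e

# primitive elements generate the whole multiplicative group of
# order q^2 - 1 = 48; there should be phi(48) = 16 of them
primitives = [g for g in product(range(q), repeat=2)
              if g != (0, 0) and order(g) == q * q - 1]
print(len(primitives), primitives[:4])
```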

  5. Computed tomography imaging spectrometer by using a novel hybrid diffractive-refractive element

    NASA Astrophysics Data System (ADS)

    Fan, Dongdong; Wu, Minxian; He, Qingsheng; Yong, Tao; Wei, Haoyun

    2001-09-01

    The multi-spectral or hyper-spectral imager provides a three-dimensional description of the spatial and spectral intensity distribution of an objective scene, and it can be a powerful tool for remote sensing in many applications. There are several approaches to collecting the three-dimensional data cube of an image, and most of them require a scanning mechanism. Those methods therefore have difficulty recording both the spectral and spatial information of a dynamic scene, such as a missile in flight, a rapidly changing red tide, and so forth. The Computed Tomography Imaging Spectrometer (CTIS) is a new branch of application of computed tomography technology. CTISs with different configurations have been reported by a few authors. Two main types of CTIS have been proposed. One uses two-dimensional gratings, such as Dammann gratings, two 1-D gratings placed orthogonally, or three 1-D gratings separated by 60 degrees. Owing to its temporally and spatially non-scanning operation, this type is capable of capturing flash events and can be used for instantaneous spectral imaging. The main problem of these CTISs is that the diffraction efficiencies depend not only on the wavelength but also on the diffraction order. This affects the reconstruction algorithm and its results, and reduces the signal-to-noise ratio and dynamic range of the spectral imaging system. The other type takes approaches such as rotational spectro-tomography, or a grating combined with a rotational direct-view prism. The advantages of these approaches are (a) high throughput and (b) ease of obtaining more uniform data from different projections, but the obvious disadvantage is that moving parts must be adopted. In our work, a principle and configuration of a CTIS are presented and, in particular, a novel hybrid diffractive-refractive element is proposed, which is a combination of an array of optical prisms and a one-dimensional holographic grating. It can provide uniformity of performance

  6. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
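
    The fixed-mode pitfall described above can be reproduced on a toy problem. In the hypothetical sketch below, a two-DOF spring-mass chain under a step load is solved by truncated (single-mode) modal superposition, and the sensitivity of the peak tip displacement to a stiffness is computed by overall central finite differences, either recomputing the mode for each perturbed design or freezing the nominal-design mode:

```python
import numpy as np
from scipy.linalg import eigh

M = np.diag([1.0, 1.0])

def K(k1, k2):
    return np.array([[k1 + k2, -k2], [-k2, k2]])

def peak_tip_disp(k2, modes=None, k1=100.0, n_modes=1):
    Km = K(k1, k2)
    V = eigh(Km, M)[1][:, :n_modes] if modes is None else modes[:, :n_modes]
    Kr, Mr = V.T @ Km @ V, V.T @ M @ V       # reduced (Rayleigh-Ritz) system
    w2, Q = eigh(Kr, Mr)
    f = Q.T @ V.T @ np.array([0.0, 1.0])     # modal loads, unit step at the tip
    t = np.linspace(0.0, 2.0, 2001)
    # undamped modal step response: q_i(t) = f_i / w_i^2 * (1 - cos(w_i t))
    qt = (f / w2)[:, None] * (1.0 - np.cos(np.sqrt(w2)[:, None] * t))
    return np.max(np.abs((V @ Q @ qt)[1]))

k2, dk = 50.0, 0.5
V0 = eigh(K(100.0, k2), M)[1]                # nominal-design mode shapes
d_recomp = (peak_tip_disp(k2 + dk) - peak_tip_disp(k2 - dk)) / (2 * dk)
d_fixed = (peak_tip_disp(k2 + dk, V0) - peak_tip_disp(k2 - dk, V0)) / (2 * dk)
print(f"sensitivity, recomputed modes: {d_recomp:.6f}")
print(f"sensitivity, fixed modes:      {d_fixed:.6f}")
```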

  7. Using Finite Volume Element Definitions to Compute the Gravitation of Irregular Small Bodies

    NASA Astrophysics Data System (ADS)

    Zhao, Y. H.; Hu, S. C.; Wang, S.; Ji, J. H.

    2015-03-01

    In the orbit design procedure for small-body exploration missions, it is important to take the gravitation of the small bodies into account. However, a majority of the small bodies in the solar system are irregularly shaped, with non-uniform density distributions, which makes it difficult to precisely calculate the gravitation of these bodies. This paper proposes a method to model the gravitational field of an irregularly shaped small body and calculate the corresponding spherical harmonic coefficients. The method is based on the shape of the small body obtained from light curve data via observation, and uses finite volume elements to approximate the body shape. The spherical harmonic parameters can be derived numerically by computing the integrals according to their definition. A comparison with the polyhedral method is also presented. We take the asteroid (433) Eros as an example. Spherical harmonic coefficients resulting from this method are compared with the results derived from the tracking data obtained by the NEAR (Near-Earth Asteroid Rendezvous) detector. The comparison shows that the error of C_{20} is less than 2%. The spherical harmonic coefficients of (1996) FG3, a selected target in our future exploration mission, are computed. Taking (4179) Toutatis, the target body of Chang'e 2's flyby mission, as another example, the gravitational field is calculated in combination with the shape model from radar data, which provides a theoretical basis for analyzing the soil distribution and flow from the optical images obtained in the mission. The method applies to objects with uneven density distributions, and could be used to provide reliable gravity field data of small bodies for orbit design and landing in future exploration missions.
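
    For a body discretized into volume elements, low-degree coefficients reduce to discrete sums over element centroids and masses. As a sanity check of the idea (not the paper's code), the unnormalized C_20 of a homogeneous oblate spheroid computed this way can be compared with its analytic value -(a^2 - c^2)/(5R^2):

```python
import numpy as np

def c20_from_volume_elements(xyz, masses, R):
    """Unnormalized C_20 (= -J2) from element centroids `xyz` (N, 3) and
    element masses `masses` (N,); R is the reference radius."""
    x, y, z = xyz.T
    return np.sum(masses * (z**2 - 0.5 * (x**2 + y**2))) / (masses.sum() * R**2)

# Homogeneous oblate spheroid (a = b = 1, c = 0.8) voxelized on a uniform grid
a, c, n = 1.0, 0.8, 80
g = (np.arange(n) + 0.5) / n * 2.0 - 1.0            # cell centers in [-1, 1]
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
inside = (X**2 + Y**2) / a**2 + Z**2 / c**2 <= 1.0
pts = np.column_stack([X[inside], Y[inside], Z[inside]])
m = np.ones(pts.shape[0])                           # equal-mass cells (uniform density)
print("numerical C20:", c20_from_volume_elements(pts, m, R=a))
print("analytic  C20:", -(a**2 - c**2) / (5 * a**2))
```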

  8. Three dimensional automatic refinement method for transient small strain elastoplastic finite element computations

    NASA Astrophysics Data System (ADS)

    Biotteau, E.; Gravouil, A.; Lubrecht, A. A.; Combescure, A.

    2012-01-01

    In this paper, the refinement strategy based on the "Non-Linear Localized Full MultiGrid" solver originally published in Int. J. Numer. Meth. Engng 84(8):947-971 (2010) for 2-D structural problems is extended to 3-D simulations. In this context, some extra information concerning the refinement strategy and the behavior of the error indicators are given. The adaptive strategy is dedicated to the accurate modeling of elastoplastic materials with isotropic hardening in transient dynamics. A multigrid solver with local mesh refinement is used to reduce the amount of computational work needed to achieve an accurate calculation at each time step. The locally refined grids are automatically constructed, depending on the user prescribed accuracy. The discretization error is estimated by a dedicated error indicator within the multigrid method. In contrast to other adaptive procedures, where grids are erased when new ones are generated, the previous solutions are used recursively to reduce the computing time on the new mesh. Moreover, the adaptive strategy needs no costly coarsening method as the mesh is reassessed at each time step. The multigrid strategy improves the convergence rate of the non-linear solver while ensuring the information transfer between the different meshes. It accounts for the influence of localized non-linearities on the whole structure. All the steps needed to achieve the adaptive strategy are automatically performed within the solver such that the calculation does not depend on user experience. This paper presents three-dimensional results using the adaptive multigrid strategy on elastoplastic structures in transient dynamics and in a linear geometrical framework. Isoparametric cubic elements with energy and plastic work error indicators are used during the calculation.

  9. Computational optical palpation: micro-scale force mapping using finite-element methods (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wijesinghe, Philip; Sampson, David D.; Kennedy, Brendan F.

    2016-03-01

    Accurate quantification of forces, applied to, or generated by, tissue, is key to understanding many biomechanical processes, fabricating engineered tissues, and diagnosing diseases. Many techniques have been employed to measure forces; in particular, tactile imaging - developed to spatially map palpation-mimicking forces - has shown potential in improving the diagnosis of cancer on the macro-scale. However, tactile imaging often involves the use of discrete force sensors, such as capacitive or piezoelectric sensors, whose spatial resolution is often limited to 1-2 mm. Our group has previously presented a type of tactile imaging, termed optical palpation, in which the change in thickness of a compliant layer in contact with tissue is measured using optical coherence tomography, and surface forces are extracted, with a micro-scale spatial resolution, using a one-dimensional spring model. We have also recently combined optical palpation with compression optical coherence elastography (OCE) to quantify stiffness. A main limitation of this work, however, is that a one-dimensional spring model is insufficient in describing the deformation of mechanically heterogeneous tissue with uneven boundaries, generating significant inaccuracies in measured forces. Here, we present a computational, finite-element method, which we term computational optical palpation. In this technique, by knowing the non-linear mechanical properties of the layer, and from only the axial component of displacement measured by phase-sensitive OCE, we can estimate, not only the axial forces, but the three-dimensional traction forces at the layer-tissue interface. We use a non-linear, three-dimensional model of deformation, which greatly increases the ability to accurately measure force and stiffness in complex tissues.
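
    The original one-dimensional spring model that computational optical palpation improves upon is straightforward to sketch: each lateral position is treated as an independent column of the layer, so the measured local strain maps to a local surface stress through the layer's pre-characterized stress-strain curve. All values below are invented for illustration:

```python
import numpy as np

# Hypothetical nonlinear stress-strain calibration of the compliant layer [Pa]
strain_cal = np.linspace(0.0, 0.5, 51)
stress_cal = 25e3 * strain_cal + 180e3 * strain_cal**3

def surface_stress_map(h0, h):
    """Map measured layer thickness to local stress via the 1-D spring model."""
    strain = (h0 - h) / h0                     # compressive strain per pixel
    return np.interp(strain, strain_cal, stress_cal)

# Example: a 500-um layer compressed more over a stiff inclusion at the center
x = np.linspace(-2e-3, 2e-3, 201)              # lateral position [m]
h = 500e-6 - 120e-6 * np.exp(-(x / 0.5e-3)**2) # deformed layer thickness [m]
stress = surface_stress_map(500e-6, h)
print(f"peak surface stress: {stress.max() / 1e3:.1f} kPa")
```

    The paper's point is precisely that this column-wise model ignores shear coupling between neighbouring columns, which the finite-element formulation restores.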

  10. Applications of the Space-Time Conservation Element and Solution Element (CE/SE) Method to Computational Aeroacoustic Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Himansu, Ananda; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The internal propagation, fan noise, and turbomachinery noise benchmark problems are solved using the space-time conservation element and solution element (CE/SE) method. The internal propagation problems address the propagation of sound waves through a nozzle. Both the nonlinear and linear quasi 1D Euler equations are solved. Numerical solutions are presented and compared with the analytical solution. The fan noise problem concerns the effect of the sweep angle on the acoustic field generated by the interaction of a convected gust with a cascade of 3D flat plates. A parallel version of the 3D CE/SE Euler solver is developed and employed to obtain numerical solutions for a family of swept flat plates. Numerical solutions for sweep angles of 0, 5, 10, and 15 deg are presented. The turbomachinery problems describe the interaction of a 2D vortical gust with a cascade of flat-plate airfoils with/without a downstream moving grid. The 2D nonlinear Euler equations are solved and the converged numerical solutions are presented and compared with the corresponding analytical solution. All the comparisons demonstrate that the CE/SE method is capable of solving aeroacoustic problems with/without shock waves in a simple and efficient manner. Furthermore, the simple non-reflecting boundary condition used in the CE/SE method, which is not based on the characteristic theory, works very well in 1D, 2D and 3D problems.

  11. Validation of finite element computations for the quantitative prediction of underwater noise from impact pile driving.

    PubMed

    Zampolli, Mario; Nijhof, Marten J J; de Jong, Christ A F; Ainslie, Michael A; Jansen, Erwin H W; Quesson, Benoit A J

    2013-01-01

    The acoustic radiation from a pile being driven into the sediment by a sequence of hammer strikes is studied with a linear, axisymmetric, structural acoustic frequency domain finite element model. Each hammer strike results in an impulsive sound that is emitted from the pile and then propagated in the shallow water waveguide. Measurements from accelerometers mounted on the head of a test pile and from hydrophones deployed in the water are used to validate the model results. Transfer functions between the force input at the top of the anvil and field quantities, such as acceleration components in the structure or pressure in the fluid, are computed with the model. These transfer functions are validated using accelerometer or hydrophone measurements to infer the structural forcing. A modeled hammer forcing pulse is used in a subsequent step to produce quantitative predictions of sound exposure at the hydrophones. The comparison between the model and the measurements shows that, although several simplifying assumptions were made, useful predictions of noise levels based on linear structural acoustic models are possible. In the final part of the paper, the model is used to characterize the pile as an acoustic radiator by analyzing the flow of acoustic energy. PMID:23297884
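
    Empirically, a transfer function of the kind used for the validation can be estimated from measured input and response time series with standard spectral methods. A minimal sketch with synthetic data (broadband excitation standing in for the hammer pulse, all numbers invented):

```python
import numpy as np
from scipy.signal import csd, welch

def transfer_function(force, response, fs, nperseg=1024):
    """H1 estimate of the transfer function between an input force series
    and a response (e.g. hydrophone pressure): H(f) = S_fr(f) / S_ff(f)."""
    f, S_fr = csd(force, response, fs=fs, nperseg=nperseg)
    _, S_ff = welch(force, fs=fs, nperseg=nperseg)
    return f, S_fr / S_ff

# Synthetic check: the response is a delayed, scaled copy of the force plus
# noise, so |H| should sit near 0.05 across the band
fs, n = 10_000, 100_000
rng = np.random.default_rng(1)
force = rng.normal(size=n)
response = 0.05 * np.roll(force, 20) + 0.001 * rng.normal(size=n)
f, H = transfer_function(force, response, fs)
print(f"|H| near 1 kHz: {abs(H[np.argmin(abs(f - 1000))]):.3f}")
```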

  12. Dust emission modelling around a stockpile by using computational fluid dynamics and discrete element method

    NASA Astrophysics Data System (ADS)

    Derakhshani, S. M.; Schott, D. L.; Lodewijks, G.

    2013-06-01

    Dust emissions can have significant effects on human health, the environment and industrial equipment. Understanding the dust generation process helps in selecting a suitable dust prevention approach and is also useful for evaluating the environmental impact of dust emission. To describe these processes, numerical methods such as Computational Fluid Dynamics (CFD) are widely used; nowadays, however, particle-based methods like the Discrete Element Method (DEM) allow researchers to model the interaction between particles and the fluid flow. In this study, the air flow over a stockpile, dust emission, erosion and surface deformation of the granular material forming the stockpile are studied by using DEM and CFD as a coupled method. Two- and three-dimensional simulations are developed for the CFD and DEM methods, respectively, to minimize CPU time. The standard κ-ɛ turbulence model is used in a fully developed turbulent flow. The continuous gas phase and the discrete particle phase are linked to each other through gas-particle void fractions and momentum transfer. In addition to stockpile deformation, dust dispersion is studied, and finally the accuracy of the stockpile deformation results obtained by CFD-DEM modelling is validated by agreement with existing experimental data.
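
    The two coupling quantities named in the abstract, void fraction and momentum transfer, can be sketched per fluid cell as below (Stokes drag is used for brevity; production CFD-DEM codes use correlations valid at higher particle Reynolds numbers, and all values here are illustrative):

```python
import numpy as np

mu = 1.8e-5                                 # air dynamic viscosity [Pa s]
cell_vol = 0.01 ** 3                        # a 1 cm cubic fluid cell [m^3]
d_p = np.array([200e-6, 250e-6, 300e-6])    # particle diameters in the cell [m]
u_f = np.array([1.0, 0.0, 0.0])             # fluid velocity in the cell [m/s]
u_p = np.zeros((3, 3))                      # particle velocities [m/s]

vol_p = np.pi / 6.0 * d_p ** 3
void_fraction = 1.0 - vol_p.sum() / cell_vol
drag = 3.0 * np.pi * mu * d_p[:, None] * (u_f - u_p)  # Stokes drag per particle [N]
fluid_source = -drag.sum(axis=0) / cell_vol           # reaction on the gas [N/m^3]

print(f"void fraction: {void_fraction:.6f}")
print("momentum source on fluid [N/m^3]:", fluid_source)
```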

  13. A hybrid FPGA/Tilera compute element for autonomous hazard detection and navigation

    NASA Astrophysics Data System (ADS)

    Villalpando, C. Y.; Werner, R. A.; Carson, J. M.; Khanoyan, G.; Stern, R. A.; Trawny, N.

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  14. Inversion of potential field data using the finite element method on parallel computers

    NASA Astrophysics Data System (ADS)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension of the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient terms but is independent of the resolution of the PDE discretization, and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
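
    A scaled-down analogue of such a regularized inversion, with a quasi-Newton (BFGS-family) iteration minimizing misfit plus a smoothness term, can be written in a few lines (the kernel, data and weights below are toy stand-ins, not the paper's PDE-constrained setting):

```python
import numpy as np
from scipy.optimize import minimize

n = 60
x = np.linspace(0.0, 1.0, n)
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)   # smoothing forward kernel
m_true = np.exp(-((x - 0.3) / 0.05) ** 2) - 0.5 * np.exp(-((x - 0.7) / 0.08) ** 2)
rng = np.random.default_rng(0)
d = G @ m_true + 0.01 * rng.normal(size=n)             # noisy synthetic data

D = np.diff(np.eye(n), axis=0)    # first-difference operator (regularization)
alpha = 0.05                      # regularization weight

def cost_and_grad(m):
    r = G @ m - d
    reg = D @ m
    J = 0.5 * (r @ r) + 0.5 * alpha * (reg @ reg)
    grad = G.T @ r + alpha * (D.T @ reg)
    return J, grad

res = minimize(cost_and_grad, np.zeros(n), jac=True, method="L-BFGS-B")
rms = np.sqrt(np.mean((G @ res.x - d) ** 2))
print(f"converged in {res.nit} iterations; data rms misfit = {rms:.4f}")
```

    The paper's observation that the iteration count is set by the regularization weighting rather than the mesh resolution can be probed in this toy by varying `alpha` and `n`.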

  15. Java and Vector Graphics Tools for Element Production Calculations in Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Lingerfelt, Eric; McMahon, Erin; Hix, Raph; Guidry, Mike; Smith, Michael

    2002-08-01

    We are developing a set of extendable, cross-platform tools and interfaces using Java and vector technologies such as SVG and SWF to facilitate element production calculations in computational astrophysics. The Java technologies are customizable and portable, and can be utilized as a stand-alone application or distributed across a network. These tools, which can have broad application in general scientific visualization, are currently being used to explore and compare various reaction rates, set up and run explosive nucleosynthesis calculations, and visualize these results with compact, high-quality vector graphics. The facilities for reading and plotting nuclear reaction rates and their components from a network or library permit the user to include new rates and adjust current ones. Setup and initialization of a nucleosynthesis calculation is through an intuitive graphical interface. Sophisticated visualization and graphical analysis tools offer the ability to view results in an interactive, scalable vector graphics format, which leads to a dramatic reduction in visualization file sizes while maintaining high visual quality and interactive control. The use of these tools for other applications will also be mentioned.

  16. A Hybrid FPGA/Tilera Compute Element for Autonomous Hazard Detection and Navigation

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Werner, Robert A.; Carson, John M., III; Khanoyan, Garen; Stern, Ryan A.; Trawny, Nikolas

    2013-01-01

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  17. [Numerical finite element modeling of custom car seat using computer aided design].

    PubMed

    Huang, Xuqi; Singare, Sekou

    2014-02-01

    A good cushion not only provides the sitter with a high level of comfort, but also controls the distribution of pressure on the hips to reduce the incidence of disease. The purpose of this study is to introduce a computer-aided design (CAD) modeling method for the buttocks-cushion system that uses numerical finite element (FE) simulation to predict the pressure distribution on the buttocks-cushion interface. The buttock and cushion model geometries were acquired with a laser scanner, and CAD software was used to create the solid model. The FE model of an actual seated individual was developed using ANSYS software (ANSYS Inc, Canonsburg, PA). The model is divided into two parts, i.e. the cushion model made of foam and the buttock model represented by the pelvis covered with a soft tissue layer. Loading simulations consisted of imposing a vertical force of 520 N on the pelvis, corresponding to the weight of the user's upper body, and then solving the system iteratively. PMID:24804486

  18. A hybrid computational approach for the interactions between river flow and porous sediment bed covered with large roughness elements

    NASA Astrophysics Data System (ADS)

    Liu, X.

    2013-12-01

    In many natural and human-impacted rivers, the porous sediment beds are either fully or partially covered by large roughness elements, such as gravels and boulders. The existence of these large roughness elements, which are in direct contact with the turbulent river flow, changes the dynamics of mass and momentum transfer across the river bed. It also impacts the overall hydraulics in the river channel and, over time, indirectly influences the geomorphological evolution of the system. Ideally, one would resolve each of these large roughness elements in a computational fluid model. This approach, however, is not feasible due to the prohibitive computational cost. Considering a typical river bed with armoring, the distribution of sediment sizes usually shows significant vertical variations, and it poses a great computational challenge to resolve all of the size scales. A similar multiscale problem exists in the much broader field of porous media flow. To cope with this, we propose a hybrid computational approach in which the large surface roughness elements are resolved using an immersed boundary method, while the (usually finer) sediment layers below are modeled by adding extra drag terms to the momentum equations, as sketched below. Large roughness elements are digitized using a 3D laser scanner. They are placed into the computational domain using collision detection and rigid body dynamics algorithms, which guarantee a realistic and physically correct spatial arrangement of the surface elements. Simulation examples have shown the effectiveness of the hybrid approach, which captures the effect of the surface roughness on the turbulent flow as well as the hyporheic flow pattern into and out of the bed.
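
    As a minimal illustration of the drag-term ingredient (an assumption-laden sketch, not the authors' code), one explicit update of a 1D momentum equation with an added linear drag sink for the unresolved sediment layers might look like:

        import numpy as np

        def momentum_update(u, dt, dx, nu, dpdx_over_rho, drag_coeff, porous_mask):
            """One explicit step of a 1D momentum equation; inside the sediment
            bed (porous_mask == True) an extra drag term -drag_coeff*u models
            the finer layers that are not resolved by the immersed boundaries."""
            d2udx2 = np.zeros_like(u)
            d2udx2[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
            rhs = -dpdx_over_rho + nu * d2udx2
            rhs[porous_mask] -= drag_coeff * u[porous_mask]
            return u + dt * rhs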

  19. Computer Security Systems Enable Access.

    ERIC Educational Resources Information Center

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  20. Lateral-torsional buckling analysis of I-beams using shell finite elements and nonlinear computation methods

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk; Kala, Jiří

    2012-09-01

    The paper deals with the influence of the correlation length of the Gaussian random field of yield strength of a hot-rolled I-beam under bending on the ultimate limit state of load-carrying capacity. Load-carrying capacity is an output random quantity depending on input random imperfections. The Latin hypercube sampling method is used for the sampling simulation (see the sketch below). Load-carrying capacity is computed by the program ANSYS using shell finite elements and nonlinear computation methods. The nonlinear FEM computation model takes into consideration the effect of lateral-torsional buckling on the ultimate limit state.
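
    A minimal sketch of the sampling step, assuming SciPy's quasi-Monte Carlo module and a placeholder in place of the nonlinear ANSYS shell analysis (the variable names, ranges, and formula below are illustrative only):

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=3, seed=0)
        lower = np.array([235.0, 0.5, 0.000])   # yield strength (MPa),
        upper = np.array([355.0, 5.0, 0.005])   # correlation length (m), bow (m)
        inputs = qmc.scale(sampler.random(n=100), lower, upper)

        def capacity(x):
            """Placeholder for one nonlinear shell FE run returning a capacity."""
            fy, corr_len, bow = x
            return fy * (1.0 - 0.2*bow/0.005) * (1.0 - 0.01*corr_len)

        capacities = np.apply_along_axis(capacity, 1, inputs)
        print(capacities.mean(), capacities.std())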

  1. Numerical computation of transonic flows by finite-element and finite-difference methods

    NASA Technical Reports Server (NTRS)

    Hafez, M. M.; Wellford, L. C.; Merkle, C. L.; Murman, E. M.

    1978-01-01

    Studies on applications of the finite element approach to transonic flow calculations are reported. Different discretization techniques of the differential equations and boundary conditions are compared. Finite element analogs of Murman's mixed type finite difference operators for small disturbance formulations were constructed and the time dependent approach (using finite differences in time and finite elements in space) was examined.
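
    The essence of Murman's mixed-type operator, sketched here in plain finite difference form under simplifying assumptions (the paper constructs finite element analogs, and the full scheme also treats sonic and shock points specially):

        import numpy as np

        def mixed_type_phi_xx(phi, dx, A):
            """Second difference of the potential phi: central where A > 0
            (subsonic, elliptic points), upwind where A <= 0 (supersonic,
            hyperbolic points)."""
            d2 = np.zeros_like(phi)
            for i in range(2, len(phi) - 1):
                central = (phi[i+1] - 2.0*phi[i] + phi[i-1]) / dx**2
                upwind  = (phi[i]   - 2.0*phi[i-1] + phi[i-2]) / dx**2
                d2[i] = central if A[i] > 0.0 else upwind
            return d2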

  2. Synthetic, Spectroscopic and Biocidal Aspects of Heterobimetallic Complexes Comprising Platinum(II) and a Group Four or Fourteen Element

    PubMed Central

    Sharma, Kripa

    2000-01-01

    Heterobimetallic complexes with varying amines have been synthesized by the reaction of [Pt(C2H8N2)2]Cl2 with group four or fourteen organometallic dichlorides, viz., R2MCl2 and Cp2M'Cl2 in a 1:2 molar ratio in MeOH (where M=Si or Sn, M'= Ti or Zr and R=Ph or Me). These complexes have been characterized by elemental analysis, molecular weight determinations, magnetic measurements, conductance, IR, 1H NMR and electronic spectra. The spectral data suggest a square planar geometry for all the complexes. Conductivity data suggest that they behave as electrolytes. These monometallic precursors along with their complexes have been screened in vitro against a number of pathogenic fungi and bacteria to assess their growth inhibiting potential. PMID:18475917

  3. The computation of ionization potentials for second-row elements by ab initio and density functional theory methods

    SciTech Connect

    Jursic, B.S.

    1996-12-31

    Up to four ionization potentials of elements from the second row of the periodic table were computed using ab initio (HF, MP2, MP3, MP4, QCISD, G1, G2, and G2MP2) and DFT (B3LYP, B3P86, B3PW91, XALPHA, HFS, HFB, BLYP, BP86, BPW91, BVWN, XALYP, XAP86, XAPW91, XAVWN, SLYP, SP86, SPW91, and SVWN) methods. In all of the calculations, the large 6-311++G(3df,3pd) Gaussian-type basis set was used. The computed values were compared with the experimental results, and the suitability of the ab initio and DFT methods for reproducing the experimental data was discussed. From the computed ionization potentials of the second-row elements, it can be concluded that the HF ab initio computation is not capable of reproducing the experimental results; the computed ionization potentials are too low. However, with ab initio methods that include electron correlation, the computed IPs become much closer to the experimental values. In all cases, with the exception of the first ionization potential of oxygen, the G2 computation produces ionization potentials that are indistinguishable from the experimental results.
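
    The underlying Delta-SCF recipe is simply the energy difference between the cation and the neutral at the same level of theory. A minimal sketch, assuming the PySCF package (not the codes used in the paper) and a smaller 6-311++G** basis for brevity:

        from pyscf import gto, scf

        HARTREE_TO_EV = 27.211386

        # Neutral oxygen atom: triplet ground state (two unpaired electrons).
        neutral = gto.M(atom="O 0 0 0", basis="6-311++g**", spin=2, charge=0)
        e_neutral = scf.UHF(neutral).kernel()

        # O+ cation: quartet ground state (three unpaired electrons).
        cation = gto.M(atom="O 0 0 0", basis="6-311++g**", spin=3, charge=1)
        e_cation = scf.UHF(cation).kernel()

        print("Delta-SCF first IP: %.2f eV" % ((e_cation - e_neutral) * HARTREE_TO_EV))

    At this Hartree-Fock level the result falls below the experimental 13.6 eV, consistent with the abstract's observation that HF underestimates ionization potentials.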

  4. Wing-Body Aeroelasticity Using Finite-Difference Fluid/Finite-Element Structural Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1993-01-01

    This paper presents a procedure for computing the aeroelasticity of wing-body configurations on multiple-instruction, multiple-data (MIMD) parallel computers. In this procedure, fluids are modeled using Euler equations discretized by a finite difference method, and structures are modeled using finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. A parallel integration scheme is used to compute aeroelastic responses by solving the coupled fluid and structural equations concurrently while keeping modularity of each discipline. The present procedure is validated by computing the aeroelastic response of a wing and comparing with experiment. Aeroelastic computations are illustrated for a High Speed Civil Transport type wing-body configuration.

  5. Finite element computation of a viscous compressible free shear flow governed by the time dependent Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.; Blanchard, D. K.

    1975-01-01

    A finite element algorithm for the solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing the time transition from step to step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
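
    The leapfrog marching used here, reduced to its simplest form for du/dt = f(u) (in the paper it is applied to the semi-discrete Galerkin system; this sketch only illustrates the scheme itself):

        import numpy as np

        def leapfrog(f, u0, dt, nsteps):
            """Two-level leapfrog: u^{n+1} = u^{n-1} + 2*dt*f(u^n),
            bootstrapped with a single forward Euler step."""
            u_prev = np.asarray(u0, dtype=float)
            u_curr = u_prev + dt * f(u_prev)
            for _ in range(nsteps - 1):
                u_prev, u_curr = u_curr, u_prev + 2.0 * dt * f(u_curr)
            return u_curr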

  6. The computational structural mechanics testbed generic structural-element processor manual

    NASA Technical Reports Server (NTRS)

    Stanley, Gary M.; Nour-Omid, Shahram

    1990-01-01

    The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template are documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended both for Testbed users who wish to invoke ES processors during the course of a structural analysis, and for Testbed developers who wish to construct new element processors (or modify existing ones).

  7. Positive and Negative Aspects of the IWB and Tablet Computers in the First Grade of Primary School: A Multiple-Perspective Approach

    ERIC Educational Resources Information Center

    Fekonja-Peklaj, Urška; Marjanovic-Umek, Ljubica

    2015-01-01

    The aim of this qualitative study was to evaluate the positive and negative aspects of the use of the interactive whiteboard (IWB) and tablet computers in the first grade of primary school from the perspectives of three groups of evaluators, namely the teachers, the pupils and an independent observer. The sample included three first grade classes with…

  8. Inductively coupled plasma-atomic emission spectroscopy: a computer controlled, scanning monochromator system for the rapid determination of the elements

    SciTech Connect

    Floyd, M.A.

    1980-03-01

    A computer controlled, scanning monochromator system specifically designed for the rapid, sequential determination of the elements is described. The monochromator is combined with an inductively coupled plasma excitation source so that elements at major, minor, trace, and ultratrace levels may be determined, in sequence, without changing experimental parameters other than the spectral line observed. A number of distinctive features not found in previously described versions are incorporated into the system here described. Performance characteristics of the entire system and several analytical applications are discussed.

  9. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 1, Theoretical background

    SciTech Connect

    Gartling, D.K.

    1996-05-01

    The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.

  10. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters, such as winding flux linkages and voltages; average, cogging and ripple torques; stator core flux densities; core losses; efficiencies; and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow
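
    A minimal sketch of the synthesis loop, assuming SciPy's differential evolution and a hypothetical analytic cost standing in for the finite element-based machine evaluation (all names, bounds, and weights below are illustrative):

        import numpy as np
        from scipy.optimize import differential_evolution

        def cost(x):
            """Placeholder objective; the real one would call the FE solver."""
            magnet_thickness, slot_depth, stack_length = x
            torque_ripple = (magnet_thickness - 4.0)**2 + 0.5*(slot_depth - 20.0)**2
            material_cost = 2.0 * stack_length
            return torque_ripple + 0.1 * material_cost

        bounds = [(2.0, 8.0),     # magnet thickness, mm
                  (10.0, 30.0),   # slot depth, mm
                  (40.0, 120.0)]  # stack length, mm
        result = differential_evolution(cost, bounds, maxiter=50, seed=1)
        print(result.x, result.fun)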

  11. Virtual garden computer program for use in exploring the elements of biodiversity people want in cities.

    PubMed

    Shwartz, Assaf; Cheval, Helene; Simon, Laurent; Julliard, Romain

    2013-08-01

    Urban ecology is emerging as an integrative science that explores the interactions of people and biodiversity in cities. Interdisciplinary research requires the creation of new tools that allow the investigation of relations between people and biodiversity. It has been established that access to green spaces or nature benefits city dwellers, but the role of species diversity in providing psychological benefits remains poorly studied. We developed a user-friendly 3-dimensional computer program (Virtual Garden [www.tinyurl.com/3DVirtualGarden]) that allows people to design their own public or private green spaces with 95 biotic and abiotic features. Virtual Garden allows researchers to explore what elements of biodiversity people would like to have in their nearby green spaces while accounting for other functions that people value in urban green spaces. In 2011, 732 participants used our Virtual Garden program to design their ideal small public garden. On average gardens contained 5 different animals, 8 flowers, and 5 woody plant species. Although the mathematical distribution of flower and woody plant richness (i.e., number of species per garden) appeared to be similar to what would be expected by random selection of features, 30% of participants did not place any animal species in their gardens. Among those who placed animals in their gardens, 94% selected colorful species (e.g., ladybug [Coccinella septempunctata], Great Tit [Parus major], and goldfish), 53% selected herptiles or large mammals, and 67% selected non-native species. Older participants with a higher level of education and participants with a greater concern for nature designed gardens with relatively higher species richness and more native species. If cities are to be planned for the mutual benefit of people and biodiversity and to provide people meaningful experiences with urban nature, it is important to investigate people's relations with biodiversity further. Virtual Garden offers a standardized

  12. COYOTE II: A Finite Element Computer Program for nonlinear heat conduction problems. Part 2, User's manual

    SciTech Connect

    Gartling, D.K.; Hogan, R.E.

    1994-10-01

    User instructions are given for the finite element computer program, COYOTE II. COYOTE II is designed for the multi-dimensional analysis of nonlinear heat conduction problems including the effects of enclosure radiation and chemical reaction. The theoretical background and numerical methods used in the program are documented in SAND94-1173. Examples of the use of the code are presented in SAND94-1180.

  13. Computational fluid dynamics analysis of SSME phase 2 and phase 2+ preburner injector element hydrogen flow paths

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.

    1992-01-01

    Phase 2+ Space Shuttle Main Engine powerheads E0209 and E0215 degraded their main combustion chamber (MCC) liners at a faster rate than is normal for Phase 2 powerheads. One possible cause of the accelerated degradation was a reduction of coolant flow through the MCC. Hardware changes were made to the preburner fuel leg that may have reduced its resistance and therefore pulled some of the hydrogen away from the MCC coolant leg. A computational fluid dynamics (CFD) analysis was performed to determine the hydrogen flow path resistances of the Phase 2+ fuel preburner injector elements relative to the Phase 2 element. The FDNS code was applied on axisymmetric grids with the hydrogen assumed to be incompressible. The analysis was performed in two steps: the first isolated the effect of the different inlet areas and the second modeled the entire injector element hydrogen flow path.

  14. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.

  15. COMGEN: A computer program for generating finite element models of composite materials at the micro level

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.

    1990-01-01

    COMGEN (Composite Model Generator) is an interactive FORTRAN program which can be used to create a wide variety of finite element models of continuous fiber composite materials at the micro level. It quickly generates batch or session files to be submitted to the finite element pre- and postprocessor PATRAN based on a few simple user inputs such as fiber diameter and percent fiber volume fraction of the composite to be analyzed. In addition, various mesh densities, boundary conditions, and loads can be assigned easily to the models within COMGEN. PATRAN uses a session file to generate finite element models and their associated loads which can then be translated to virtually any finite element analysis code such as NASTRAN or MARC.

  16. CUERVO: A finite element computer program for nonlinear scalar transport problems

    SciTech Connect

    Sirman, M.B.; Gartling, D.K.

    1995-11-01

    CUERVO is a finite element code that is designed for the solution of multi-dimensional field problems described by a general nonlinear, advection-diffusion equation. The code is also applicable to field problems described by diffusion, Poisson or Laplace equations. The finite element formulation and the associated numerical methods used in CUERVO are outlined here; detailed instructions for use of the code are also presented. Example problems are provided to illustrate the use of the code.

  17. PREFACE: First International Congress of the International Association of Inverse Problems (IPIA): Applied Inverse Problems 2007: Theoretical and Computational Aspects

    NASA Astrophysics Data System (ADS)

    Uhlmann, Gunther

    2008-07-01

    This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA) which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology

  19. Microscopy and elemental analysis in tissue samples using computed microtomography with synchrotron x-rays

    SciTech Connect

    Spanne, P.; Rivers, M.L.

    1988-01-01

    The initial development shows that CMT using synchrotron x-rays can be developed to μm spatial resolution and perhaps even better. This creates a new microscopy technique which is of special interest in morphological studies of tissues, since no chemical preparation or slicing of the sample is necessary. The combination of CMT with spatial resolution in the μm range and elemental mapping with sensitivity in the ppm range results in a new tool for elemental mapping at the cellular level. 7 refs., 1 fig.

  20. On the utility of finite element theory for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Soliman, M. O.

    1981-01-01

    An implicit finite element numerical solution algorithm is derived for the compressible Navier-Stokes equations expressed in generalized coordinates. The theoretical basis utilizes a Galerkin-weighted residuals formulation and extremization of the approximation error within the context of a multipole expansion. A von Neumann analysis for a simplified form indicates the algorithm is fourth- to sixth-order phase accurate, with third-order dissipation, for the elementary linear element construction. Performance is improved for the algorithm constructed using quadratic interpolation. Numerical experiments for shocked duct flows are employed to optimize the several algorithm parameters. Additional numerical solutions validate algorithm accuracy and utility for aerodynamics applications.

  1. Isoparametric 3-D Finite Element Mesh Generation Using Interactive Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kayrak, C.; Ozsoy, T.

    1985-01-01

    An isoparametric 3-D finite element mesh generator was developed with a direct interface to an interactive geometric modeler program called POLYGON. POLYGON defines the model geometry in terms of boundaries and mesh regions for the mesh generator. The mesh generator controls the mesh flow through the 2-dimensional spans of regions by using the topological data and defines the connectivity between regions. The program is menu driven; the user has control of element density and biasing through the spans and can also apply boundary conditions and loads interactively.

  2. Genome-wide computational prediction and analysis of core promoter elements across plant monocots and dicots

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the compu...

  3. Computations of M sub 2 and K sub 1 ocean tidal perturbations in satellite elements

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1974-01-01

    Semi-analytic perturbation equations for the influence of the M2 and K1 ocean tidal constituents on satellite motion are expanded into multi-dimensional Fourier series, and calculations are made for the BE-C satellite. Perturbations in the orbital elements are compared to those due to the long-period solid earth tides.

  4. TEnest 2.0: Computational annotation and visualization of nested transposable elements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Grass genomes are highly repetitive, for example, Oryza sativa (rice) contains 35% repeat sequences, Zea mays (maize) comprise 75%, and Triticum aestivum (wheat) includes approximately 80%. Most of these repeats occur as abundant transposable elements (TE), which present unique challenges to sequen...

  5. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    EPA Science Inventory

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  6. Computation of strain energy release rates for skin-stiffener debonds modeled with plate elements

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Raju, I. S.; Davila, C. G.; Sleight, D. W.

    1993-01-01

    An efficient method for predicting the strength of debonded composite skin-stiffener configurations is presented. This method, which is based on fracture mechanics, models the skin and the stiffener with two-dimensional (2D) plate elements instead of three-dimensional (3D) solid elements. The skin and stiffener flange nodes are tied together by two modeling techniques. In one technique, the corresponding flange and skin nodes are required to have identical translational and rotational degrees-of-freedom. In the other technique, the corresponding flange and skin nodes are only required to have identical translational degrees-of-freedom. Strain energy release rate formulas are proposed for both modeling techniques. These formulas are used for skin-stiffener debond cases with and without cylindrical bending deformations. The cylindrical bending results are compared with plane-strain finite element results. Excellent agreement between the two sets of results is obtained when the second technique is used. Thus, from these limited studies, a preferable modeling technique for skin-stiffener debond analysis using plate elements is established.
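
    For orientation (the paper derives its own variants), the classical virtual crack closure form of the mode-I strain energy release rate combines the nodal force at the debond front with the relative opening of the node pair just behind it:

        G_I = \frac{F_z \, \Delta w}{2 \, b \, \Delta a}

    where F_z is the nodal force at the front, \Delta w the relative opening displacement behind the front, \Delta a the element length at the front, and b the width associated with the node.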

  7. MPSalsa Version 1.5: A Finite Element Computer Program for Reacting Flow Problems: Part 1 - Theoretical Development

    SciTech Connect

    Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.; Moffat, H.K.; Salinger, A.G.; Schmidt, R.C.; Shadid, J.N.; Smith, T.M.

    1999-01-01

    The theoretical background for the finite element computer program, MPSalsa Version 1.5, is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
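
    The inexact Newton-Krylov combination at the core of such solvers, in minimal SciPy form on a toy 1D reaction-diffusion residual (MPSalsa itself uses the Aztec preconditioned Krylov solvers; this is only a sketch of the idea):

        import numpy as np
        from scipy.optimize import newton_krylov

        n = 64
        h = 1.0 / (n + 1)

        def residual(u):
            """Discrete residual of -u'' + u**3 = 1 with u(0) = u(1) = 0."""
            upad = np.concatenate(([0.0], u, [0.0]))
            return -(upad[2:] - 2.0*upad[1:-1] + upad[:-2]) / h**2 + u**3 - 1.0

        u = newton_krylov(residual, np.zeros(n), method="lgmres")
        print(u.max())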

  8. Computation of Dancoff Factors for Fuel Elements Incorporating Randomly Packed TRISO Particles

    SciTech Connect

    J. L. Kloosterman; Abderrafi M. Ougouag

    2005-01-01

    A new method for estimating the Dancoff factors in pebble beds has been developed and implemented within two computer codes. The first of these codes, INTRAPEB, is used to compute Dancoff factors for individual pebbles, taking into account the random packing of TRISO particles within the fuel zone of the pebble and explicitly accounting for the finite geometry of the fuel kernels. The second code, PEBDAN, is used to compute the pebble-to-pebble contribution to the overall Dancoff factor. The latter code also accounts for the finite size of the reactor vessel and for the proximity of reflectors, as well as for the fluctuations in pebble packing density that naturally arise in pebble beds.

  9. Computer synthesized filament images from reflectors and through lens elements for lamp design and evaluation.

    PubMed

    Donohue, R J; Joseph, B W

    1975-10-01

    A mathematical model of the cylindrical, helical filament used in automotive forward and signal lighting can be easily written in computer language. By ray tracing the filament through reflections off surfaces, accurate pinhole images can be synthesized on a computer screen. The buildup of images from this basic program can simulate light patterns from reflector-plus-lens headlamps and signal lamps, as well as aid in the design of faceted-reflector lamps in which all the pattern controlling optics are in the reflector. PMID:20155029

  10. Efficient Inverse Isoparametric Mapping Algorithm for Whole-Body Computed Tomography Registration Using Deformations Predicted by Nonlinear Finite Element Modeling

    PubMed Central

    Li, Mao; Wittek, Adam; Miller, Karol

    2014-01-01

    Biomechanical modeling methods can be used to predict deformations for medical image registration; in particular, they are very effective for whole-body computed tomography (CT) image registration, because the differences between the source and target images caused by complex articulated motions and soft tissue deformations are very large. A biomechanics-based image registration method needs to deform the source images using the deformation field predicted by finite element models (FEMs). In practice, both global and local coordinate systems are used in finite element analysis. This involves the transformation of coordinates from the global coordinate system to the local coordinate system when calculating the global coordinates of image voxels for warping images. In this paper, we present an efficient numerical inverse isoparametric mapping algorithm to calculate the local coordinates of arbitrary points within the eight-noded hexahedral finite element. Verification of the algorithm for a non-parallelepiped hexahedral element confirms its accuracy, fast convergence, and efficiency. The algorithm's application in warping of whole-body CT using the deformation field predicted by means of a biomechanical FEM confirms its reliability in the context of whole-body CT registration. PMID:24828796
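
    A minimal sketch of such an inverse isoparametric mapping by Newton iteration for the trilinear eight-noded hexahedron (a generic textbook construction, not necessarily the authors' exact algorithm):

        import numpy as np

        # Corner signs of the 8 nodes in the reference cube [-1, 1]^3.
        XI = np.array([[-1,-1,-1],[ 1,-1,-1],[ 1, 1,-1],[-1, 1,-1],
                       [-1,-1, 1],[ 1,-1, 1],[ 1, 1, 1],[-1, 1, 1]], dtype=float)

        def shape(xi):
            """Trilinear shape functions N_i(xi), i = 1..8."""
            return 0.125*(1 + XI[:,0]*xi[0])*(1 + XI[:,1]*xi[1])*(1 + XI[:,2]*xi[2])

        def dshape(xi):
            """Derivatives dN_i/dxi_l as an (8, 3) array."""
            d = np.empty((8, 3))
            d[:,0] = 0.125*XI[:,0]*(1 + XI[:,1]*xi[1])*(1 + XI[:,2]*xi[2])
            d[:,1] = 0.125*XI[:,1]*(1 + XI[:,0]*xi[0])*(1 + XI[:,2]*xi[2])
            d[:,2] = 0.125*XI[:,2]*(1 + XI[:,0]*xi[0])*(1 + XI[:,1]*xi[1])
            return d

        def inverse_map(nodes, x_target, tol=1e-12, maxit=25):
            """Newton iteration for the local coordinates of a global point
            inside a hexahedron with global node coordinates nodes (8 x 3)."""
            xi = np.zeros(3)                      # start at the element centre
            for _ in range(maxit):
                r = shape(xi) @ nodes - x_target  # residual x(xi) - x_target
                if np.linalg.norm(r) < tol:
                    break
                J = nodes.T @ dshape(xi)          # 3x3 Jacobian dx/dxi
                xi -= np.linalg.solve(J, r)
            return xi

        # Example: a non-parallelepiped hex with one perturbed corner.
        nodes = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                          [0,0,1],[1,0,1],[1.2,1.1,1.3],[0,1,1]], dtype=float)
        print(inverse_map(nodes, np.array([0.5, 0.5, 0.5])))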

  11. Computation of the transient flow in zoned anisotropic porous media by the boundary element method

    NASA Astrophysics Data System (ADS)

    Bruch, E.; Grilli, S.

    Results on the application of the BEM to transient two-dimensional flows in zoned anisotropic porous media are presented, including the iterative calculation of the free surface seepage position. The classical BEM equations are discretized by linear, quadratic, or cubic elements, employing special singular numerical quadrature rules. The method is improved by the incorporation of a subregion division. The present technique is shown to be very accurate and to avoid previously encountered oscillation problems.

  12. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    NASA Astrophysics Data System (ADS)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to assess the versatility and ease of use of such documentation tools for studying architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of several Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modeling. The software used is 123D Catch by Autodesk, a freely available image-based modeling (IBM) system. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extract simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunlight, accessibility, surface degradation, and material type, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or to larger or smaller elements.

  13. Three-Dimensional Effects on Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.

    2002-01-01

    In an effort to discover the causes for disagreement between previous 2-D computations and nominally 2-D experiment for flow over the 3-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side-wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using 3-D structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects of the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and the consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict trends similar to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum-lift conditions.

  14. Improved Discontinuity-capturing Finite Element Techniques for Reaction Effects in Turbulence Computation

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Santoriello, A.; Tezduyar, T. E.

    2006-09-01

    Recent advances in turbulence modeling brought more and more sophisticated turbulence closures (e.g. k-ε, k-ε-v²-f, second moment closures), where the governing equations for the model parameters involve advection, diffusion and reaction terms. Numerical instabilities can be generated by the dominant advection or reaction terms. Classical stabilized formulations such as the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation (Brooks and Hughes, Comput Methods Appl Mech Eng 32:199-255, 1982; Hughes and Tezduyar, Comput Methods Appl Mech Eng 45:217-284, 1984) are very well suited for preventing the numerical instabilities generated by the dominant advection terms. A different stabilization, however, is needed for instabilities due to the dominant reaction terms. An additional stabilization term, called the diffusion for reaction-dominated (DRD) term, was introduced by Tezduyar and Park (Comput Methods Appl Mech Eng 59:307-325, 1986) for that purpose and improves the SUPG performance. In recent years a new class of variational multi-scale (VMS) stabilization (Hughes, Comput Methods Appl Mech Eng 127:387-401, 1995) has been introduced, and this approach, in principle, can deal with advection-diffusion-reaction equations. However, it was pointed out in Hanke (Comput Methods Appl Mech Eng 191:2925-2947) that this class of methods also needs some improvement in the presence of high reaction rates. In this work we show the benefits of using the DRD operator to enhance the core stabilization techniques such as the SUPG and VMS formulations. We also propose a new operator called the DRDJ (DRD with the local variation jump) term, targeting the reduction of numerical oscillations in the presence of both high reaction rates and sharp solution gradients. The methods are evaluated in the context of two stabilized methods: the classical SUPG formulation and a recently-developed VMS formulation called the V-SGS (Corsini et al., Comput Methods Appl Mech Eng 194:4797-4823, 2005
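
    For orientation (notation simplified from the stabilized-methods literature, not copied from the paper), the SUPG formulation augments the Galerkin weak form of an advection-diffusion-reaction equation with an element-level, residual-based term:

        \sum_{e} \int_{\Omega_e} \tau_{\mathrm{SUPG}} \, (\mathbf{a} \cdot \nabla w) \, R(u) \, \mathrm{d}\Omega,
        \qquad
        R(u) = \mathbf{a} \cdot \nabla u - \nabla \cdot (\nu \nabla u) + s\,u - f,

    and the DRD idea adds a further residual-based diffusion whose magnitude grows with the reaction coefficient s.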

  15. Three Aspects of PLATO Use at Chanute AFB: CBE Production Techniques, Computer-Aided Management, Formative Development of CBE Lessons.

    ERIC Educational Resources Information Center

    Klecka, Joseph A.

    This report describes various aspects of lesson production and use of the PLATO system at Chanute Air Force Base. The first chapter considers four major factors influencing lesson production: (1) implementation of the "lean approach," (2) the Instructional Systems Development (ISD) role in lesson production, (3) the transfer of programmed…

  16. A Monte-Carlo based extension of the Meteor Orbit and Trajectory Software (MOTS) for computations of orbital elements

    NASA Astrophysics Data System (ADS)

    Albin, T.; Koschny, D.; Soja, R.; Srama, R.; Poppe, B.

    2016-01-01

    The Canary Islands Long-Baseline Observatory (CILBO) is a double-station meteor camera system (Koschny et al., 2013; Koschny et al., 2014) that consists of 5 cameras. The two cameras considered in this report, ICC7 and ICC9, are installed on Tenerife and La Palma. They point to the same atmospheric volume between the two islands, allowing stereoscopic observation of meteors. Since its installation in 2011 and the start of operation in 2012, CILBO has detected over 15000 simultaneously observed meteors. Koschny and Diaz (2002) developed the Meteor Orbit and Trajectory Software (MOTS) to compute the trajectory of such meteors. The software uses the astrometric data from the detection software MetRec (Molau, 1998) and determines the trajectory in geodetic coordinates. This work presents a Monte-Carlo based extension of the MOTS code to compute the orbital elements of meteors simultaneously detected by CILBO.
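
    A Monte-Carlo extension of this kind amounts to repeatedly perturbing the measured astrometric positions within their assumed noise and re-running the trajectory solution, so that the scatter of the outputs yields the uncertainty of the derived elements. A heavily simplified sketch (the solver below is a hypothetical placeholder and all numbers are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)

        def solve_trajectory(az, alt):
            """Placeholder for the MOTS geometric trajectory/orbit solution,
            which in reality combines the ICC7 and ICC9 astrometry."""
            return np.array([az.mean(), alt.mean()])

        az  = np.array([120.13, 120.45, 120.78])   # azimuths, deg
        alt = np.array([ 45.21,  45.02,  44.80])   # altitudes, deg
        sigma = 0.01                               # assumed astrometric noise, deg

        samples = np.array([
            solve_trajectory(az + rng.normal(0.0, sigma, az.size),
                             alt + rng.normal(0.0, sigma, alt.size))
            for _ in range(10000)])

        print(samples.mean(axis=0), samples.std(axis=0))   # value and uncertainty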

  17. Finite element analysis of induction motors based on computing detailed equivalent circuit parameters

    SciTech Connect

    Zhou, P.; Gilmore, J.; Badics, Z.; Cendes, Z.J.

    1998-09-01

    A method for accurately predicting the steady-state performance of squirrel cage induction motors is presented. The approach is based on the use of complex two-dimensional finite element solutions to deduce per-phase equivalent circuit parameters for any operating condition. Core saturation and skin effect are directly considered in the field calculation. Corrections can be introduced to include three-dimensional effects such as end-winding and rotor skew. An application example is provided to demonstrate the effectiveness of the proposed approach.

  18. Computational Approaches to Identify Promoters and cis-Regulatory Elements in Plant Genomes

    PubMed Central

    Rombauts, Stephane; Florquin, Kobe; Lescot, Magali; Marchal, Kathleen; Rouzé, Pierre; Van de Peer, Yves

    2003-01-01

    The identification of promoters and their regulatory elements is one of the major challenges in bioinformatics and integrates comparative, structural, and functional genomics. Many different approaches have been developed to detect conserved motifs in a set of genes that are either coregulated or orthologous. However, although recent approaches seem promising, in general, unambiguous identification of regulatory elements is not straightforward. The delineation of promoters is even harder, due to its complex nature, and in silico promoter prediction is still in its infancy. Here, we review the different approaches that have been developed for identifying promoters and their regulatory elements. We discuss the detection of cis-acting regulatory elements using word-counting or probabilistic methods (so-called “search by signal” methods) and the delineation of promoters by considering both sequence content and structural features (“search by content” methods). As an example of search by content, we explored in greater detail the association of promoters with CpG islands. However, due to differences in sequence content, the parameters used to detect CpG islands in humans and other vertebrates cannot be used for plants. Therefore, a preliminary attempt was made to define parameters that could possibly define CpG and CpNpG islands in Arabidopsis, by exploring the compositional landscape around the transcriptional start site. To this end, a data set of more than 5,000 gene sequences was built, including the promoter region, the 5′-untranslated region, and the first introns and coding exons. Preliminary analysis shows that promoter location based on the detection of potential CpG/CpNpG islands in the Arabidopsis genome is not straightforward. Nevertheless, because the landscape of CpG/CpNpG islands differs considerably between promoters and introns on the one side and exons (whether coding or not) on the other, more sophisticated approaches can probably be
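
    A minimal sketch of a sliding-window island scan using the classical vertebrate-style criteria (the window length and thresholds below are the conventional human values, which, as argued above, would need re-tuning for Arabidopsis):

        def cpg_stats(window):
            """GC fraction and observed/expected CpG ratio of one window."""
            n = len(window)
            c, g = window.count("C"), window.count("G")
            cpg = window.count("CG")
            gc_frac = (c + g) / n
            obs_exp = (cpg * n) / (c * g) if c and g else 0.0
            return gc_frac, obs_exp

        def find_islands(seq, win=200, gc_min=0.5, ratio_min=0.6):
            """Start positions of windows passing both thresholds."""
            hits = []
            for i in range(len(seq) - win + 1):
                gc, ratio = cpg_stats(seq[i:i + win])
                if gc >= gc_min and ratio >= ratio_min:
                    hits.append(i)
            return hits

        print(find_islands("CG" * 150 + "AT" * 300))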

  19. Non-uniform FFT for the finite element computation of the micromagnetic scalar potential

    NASA Astrophysics Data System (ADS)

    Exl, L.; Schrefl, T.

    2014-08-01

    We present a quasi-linearly scaling, first-order polynomial finite element method for the solution of the magnetostatic open boundary problem by splitting the magnetic scalar potential. The potential is determined by solving a Dirichlet problem and evaluating the single layer potential by a fast approximation technique based on Fourier approximation of the kernel function. The latter approximation leads to a generalization of the well-known convolution theorem used in finite difference methods. We address it by a non-uniform FFT approach. Overall, our method scales as O(M + N + N log N) for N nodes and M surface triangles. We confirm our approach by several numerical tests.

  20. A geometrically-conservative, synchronized, flux-corrected remap for arbitrary Lagrangian-Eulerian computations with nodal finite elements

    NASA Astrophysics Data System (ADS)

    López Ortega, A.; Scovazzi, G.

    2011-07-01

    This article describes a conservative synchronized remap algorithm applicable to arbitrary Lagrangian-Eulerian computations with nodal finite elements. In the proposed approach, ideas derived from flux-corrected transport (FCT) methods are extended to conservative remap. Unique to the proposed method is the direct incorporation of the geometric conservation law (GCL) in the resulting numerical scheme. It is shown here that the geometric conservation law allows the method to inherit the positivity preserving and local extrema diminishing (LED) properties typical of FCT schemes. The proposed framework is extended to the systems of equations that typically arise in meteorological and compressible flow computations. The proposed algorithm remaps the vector fields associated with these problems by means of a synchronized strategy. The present paper also complements and extends the work of the second author on nodal-based methods for shock hydrodynamics, delivering a fully integrated suite of Lagrangian/remap algorithms for computations of compressible materials under extreme load conditions. Extensive testing in one, two, and three dimensions shows that the method is robust and accurate under typical computational scenarios.

  1. New perspectives for Discrete Element Modeling: Merging Computational Geometry and Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Alonso-Marroquín, Fernando; Galindo-Torres, Sergio-Andres; Tordesillas, Antoinette; Wang, Yucang

    2009-06-01

    One of the most challenging problems in the realistic modeling of granular materials is how to capture the real shape of the particles. Here we present a method to simulate systems with complex-shaped particles. This method integrates developments in two traditionally separate research areas: computational geometry and molecular dynamics. The computational geometry involves the implementation of computer graphics techniques to represent particle shape and perform collision detection. Traditional techniques from molecular dynamics are used to integrate the equations of motion and to perform an efficient calculation of contact forces. The algorithm to solve the dynamics of the system is much more efficient, more accurate, and easier to implement than other models. The algorithm is used to simulate quasistatic deformation of granular materials using two different models. The first model consists of non-circular particles interacting via frictional forces. The second model consists of circular particles interacting via rolling and sliding resistance. The comparison of the two models helps us to understand and quantify the extent to which the effects of particle shape can be captured by the introduction of artificial rolling resistance on circular particles. Biaxial test simulations show that the overall response of the system and the collapse of force chains at the critical state are qualitatively similar in both 2D and 3D simulations.

  2. Massively parallel multifrontal methods for finite element analysis on MIMD computer systems

    SciTech Connect

    Benner, R.E.

    1993-03-01

    The development of highly parallel direct solvers for large, sparse linear systems of equations (e.g. for finite element or finite difference models) is lagging behind progress in parallel direct solvers for dense matrices and iterative methods for sparse matrices. We describe a massively parallel (MP) multifrontal solver for the direct solution of large sparse linear systems, such as those routinely encountered in finite element structural analysis, in an effort to address concerns about the viability of scalable, MP direct methods for sparse systems and to enhance the software base for MP applications. Performance results are presented and future directions are outlined for research and development efforts in parallel multifrontal and related solvers. In particular, parallel efficiencies of 25% on 1024 nCUBE 2 nodes and 36% on 64 Intel iPSC/860 nodes have been demonstrated, and parallel efficiencies of 60-85% are expected when a severe load imbalance is overcome by static mapping and dynamic load balance techniques previously developed for other parallel solvers and application codes.

  3. Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements

    NASA Astrophysics Data System (ADS)

    Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.

    2000-11-01

    In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
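
    A minimal sketch of the reduced SPD solve, assuming SciPy and with a simple Jacobi preconditioner standing in for the additive Schwarz domain decomposition preconditioner (the model matrix is a 1D variable-coefficient Laplacian mimicking a rough permeability field):

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, LinearOperator

        n = 1000
        k = np.exp(2.0 * np.random.default_rng(0).standard_normal(n + 1))  # lognormal "permeability"
        main = k[:-1] + k[1:]
        A = sp.diags([-k[1:-1], main, -k[1:-1]], [-1, 0, 1], format="csr")
        b = np.ones(n)

        M = LinearOperator((n, n), matvec=lambda r: r / main)  # Jacobi preconditioner

        x, info = cg(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))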

  4. A new finite element and finite difference hybrid method for computing electrostatics of ionic solvated biomolecule

    NASA Astrophysics Data System (ADS)

    Ying, Jinyong; Xie, Dexuan

    2015-10-01

    The Poisson-Boltzmann equation (PBE) is a widely used implicit-solvent continuum model for calculating the electrostatics of an ionic solvated biomolecule. In this paper, a new finite element and finite difference hybrid method is presented to solve the PBE efficiently, based on a special seven-overlapped-box partition with one central box containing the solute region, surrounded by six neighboring boxes. In particular, an efficient finite element solver is applied to the central box, while a fast preconditioned conjugate gradient method using multigrid V-cycle preconditioning is constructed for solving a system of finite difference equations defined on a uniform mesh of each neighboring box. Moreover, the PBE domain, the box partition, and an interface-fitted tetrahedral mesh of the central box can be generated adaptively for a given PQR file of a biomolecule. This new hybrid PBE solver is programmed in C, Fortran, and Python as a software tool for predicting the electrostatics of a biomolecule in a symmetric 1:1 ionic solvent. Numerical results on two test models with analytical solutions and 12 proteins validate this new software tool and demonstrate its high performance in terms of CPU time and memory usage.

  5. The biomechanical aspects of reconstruction for segmental defects of the mandible: a finite element study to assess the optimisation of plate and screw factors.

    PubMed

    Bujtár, Péter; Simonovics, János; Váradi, Károly; Sándor, George K B; Avery, C M E

    2014-09-01

    A bone plate is required to restore the load-bearing capacity of the mandible following a segmental resection. A good understanding of the underlying principles is crucial for developing a reliable reconstruction. A finite element analysis (FEA) technique has been developed to study the biomechanics of the clinical scenarios managed after surgical resection of a tumour or severe trauma, to assist in choosing the optimal hardware elements. A computer-aided design (CAD) model of an edentulous human mandible was created. Then 4 common segmental defects were simulated. A single reconstruction plate was designed to span the defects. The hardware variations studied were: monocortical or bicortical screw fixation and non-locking or locking plate design. A standardized load was applied to mimic the human bite. The von Mises stress and strain, and the spatial changes at the screw-bone interfaces, were analysed. In general, the locking plate and monocortical screw fixation systems were most effective. Non-locking plating systems produced larger screw "pull-out" displacements, especially at the hemimandible (up to 5% strain). Three screws on either side of the defect were adequate for all scenarios except extensive unilateral defects, when additional screws and an increased screw diameter are recommended. The simplification of screw geometry may underestimate stress levels, and factors such as poor adaptation of the plate or reduced bone quality are likely to be indications for bicortical locking screw fixation. The current model provides a good basis for understanding the complex biomechanics and developing future refinements in plate or scaffold design. PMID:24467871

  6. U.S. Department of Energy Office of Inspector General report on audit of selected aspects of the unclassified computer security program at a DOE headquarters computing facility

    SciTech Connect

    1995-07-31

    The purpose of this audit was to evaluate the effectiveness of the unclassified computer security program at the Germantown Headquarters Administrative Computer Center (Center). The Department of Energy (DOE) relies on the application systems at the Germantown Headquarters Administrative Computer Center to support its financial, payroll and personnel, security, and procurement functions. The review was limited to an evaluation of the administrative, technical, and physical safeguards governing utilization of the unclassified computer system which hosts many of the Department's major application systems. The audit identified weaknesses in the Center's computer security program that increased the risk of unauthorized disclosure or loss of sensitive data. Specifically, the authors found that (1) access to sensitive data was not limited to individuals who had a need for the information, and (2) accurate and complete information was not maintained on the inventory of tapes at the Center. Furthermore, the risk of unauthorized disclosure and loss of sensitive data was increased because other controls, such as physical security, had not been adequately implemented at the Center. Management generally agreed with the audit conclusions and recommendations, and initiated a number of actions to improve computer security at the Center.

  7. Computer-originated polarizing holographic optical element recorded in photopolymerizable layers.

    PubMed

    Carré, C; Habraken, S; Roose, S

    1993-05-01

    The photosensitive system most often used to produce holographic optical elements is dichromated gelatin. Other materials may be used, in particular photopolymerizable layers. In the present investigation, we set out to use the polymer developed in the Laboratoire de Photochimie Générale in Mulhouse in order to duplicate a computer-generated hologram. Our technique is intended to generate polarizing properties. We took into account the fact that no wet chemistry processing is required; grating fringe spacings are not distorted through chemical development. PMID:19802257

  8. Computational flow simulation of liquid oxygen in a SSME preburner injector element LOX post

    NASA Technical Reports Server (NTRS)

    Rocker, Marvin

    1990-01-01

    Liquid oxygen (LOX) is simulated as an incompressible flow through a Space Shuttle main engine fuel preburner injector element LOX post for the full range of operating conditions. Axial profiles of axial velocity and static pressure are presented. For each operating condition analyzed, the minimum pressure downstream of the orifice is compared to the vapor pressure to determine if cavitation could occur. Flow visualization is provided by velocity vectors and stream function contours. The results indicate that the minimum pressure is too high for cavitation to occur. To establish confidence in the CFD analysis, the simulation is repeated with water flow through a superscaled LOX post and compared with experimental results. The agreement between calculated and experimental results is very good.

  9. Sensitivity Analysis of Stability Problems of Steel Structures using Shell Finite Elements and Nonlinear Computation Methods

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk; Kala, Jiří

    2011-09-01

    The main focus of the paper is the analysis of the influence of residual stress on the ultimate limit state of a hot-rolled member in compression. The member was modelled using thin-walled elements of type SHELL 181 and meshed in the programme ANSYS. Geometrical and material non-linear analysis was used. The influence of residual stress was studied using variance-based sensitivity analysis. In order to obtain more general results, the non-dimensional slenderness was selected as a study parameter. Comparison of the influence of the residual stress with the influence of other dominant imperfections is illustrated in the conclusion of the paper. All input random variables were considered according to results of experimental research.

  10. A DRD finite element formulation for computing turbulent reacting flows in gas turbine combustors

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Iossa, C.; Rispoli, F.; Tezduyar, T. E.

    2009-11-01

    An effective multiscale treatment of turbulent reacting flows is presented with the use of a stabilized finite element formulation. The method proposed is developed based on the streamline-upwind/Petrov-Galerkin (SUPG) formulation, and includes discontinuity capturing in the form of a new-generation “DRD” method, namely the “DRDJ” technique. The stabilized formulation is applied to finite-rate chemistry modelling based on mixture-fraction approaches with the so-called presumed-PDF technique. The turbulent combustion process is simulated for an aero-engine combustor configuration of the RQL concept in the non-premixed flame regime. The comparative analysis of the temperature and velocity fields demonstrates that the proposed SUPG+DRDJ formulation outperforms the stand-alone SUPG method. The improved accuracy is demonstrated in terms of the combustor's overall performance, and the mechanisms involved in the distribution of the numerical diffusivity are also discussed.

  11. Evaluation of accuracy of non-linear finite element computations for surgical simulation: study using brain phantom.

    PubMed

    Ma, J; Wittek, A; Singh, S; Joldes, G; Washio, T; Chinzei, K; Miller, K

    2010-12-01

    In this paper, the accuracy of non-linear finite element computations in application to surgical simulation was evaluated by comparing the experiment and modelling of indentation of the human brain phantom. The evaluation was realised by comparing forces acting on the indenter and the deformation of the brain phantom. The deformation of the brain phantom was measured by tracking 3D motions of X-ray opaque markers, placed within the brain phantom using a custom-built bi-plane X-ray image intensifier system. The model was implemented using the ABAQUS™ finite element solver. Realistic geometry obtained from magnetic resonance images and specific constitutive properties determined through compression tests were used in the model. The model accurately predicted the indentation force-displacement relations and marker displacements. Good agreement between modelling and experimental results verifies the reliability of the finite element modelling techniques used in this study and confirms the predictive power of these techniques in surgical simulation. PMID:21153973

  12. NASTRAN data generation of helicopter fuselages using interactive graphics. [preprocessor system for finite element analysis using IBM computer

    NASA Technical Reports Server (NTRS)

    Sainsbury-Carter, J. B.; Conaway, J. H.

    1973-01-01

    The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real-time interaction plus automatic data generation reduces the nominal 6- to 10-week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard-copy plots of the model labeled with GRID point or element IDs are also available. General-purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.

  13. A Frequency Count of Music Elements in Bahian Folk Songs Using Computer and Hand Analysis: Suggestions for Applications in Music Education.

    ERIC Educational Resources Information Center

    Oliveira, Alda De Jesus

    1997-01-01

    Explores the frequency of selected musical elements in a sample of folk songs from Bahia, Brazil, using a computer program and manual analysis. Demonstrates that the contents of each beat consist of simple rhythmic elements, melodic ranges are within an octave, and most formal structures of the songs consist of four phrases. (CMK)

  14. An Objective Evaluation of Mass Scaling Techniques Utilizing Computational Human Body Finite Element Models.

    PubMed

    Davis, Matthew L; Scott Gayzik, F

    2016-10-01

    Biofidelity response corridors developed from post-mortem human subjects are commonly used in the design and validation of anthropomorphic test devices and computational human body models (HBMs). Typically, corridors are derived from a diverse pool of biomechanical data and later normalized to a target body habitus. The objective of this study was to use morphed computational HBMs to compare the ability of various scaling techniques to scale response data from a reference to a target anthropometry. HBMs are ideally suited for this type of study since they uphold the assumptions of equal density and modulus that are implicit in scaling method development. In total, six scaling procedures were evaluated, four from the literature (equal-stress equal-velocity, ESEV, and three variations of impulse momentum) and two which are introduced in the paper (ESEV using a ratio of effective masses, ESEV-EffMass, and a kinetic energy approach). In total, 24 simulations were performed, representing both pendulum and full body impacts for three representative HBMs. These simulations were quantitatively compared using the International Organization for Standardization (ISO) ISO-TS18571 standard. Based on these results, ESEV-EffMass achieved the highest overall similarity score (indicating that it is most proficient at scaling a reference response to a target). Additionally, ESEV was found to perform poorly for two degree-of-freedom (DOF) systems. However, the results also indicated that no single technique was clearly the most appropriate for all scenarios. PMID:27457051
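
    For readers unfamiliar with the first of the four literature methods named above: equal-stress equal-velocity scaling derives all scale factors from the cube root of the mass ratio. A minimal sketch under the standard dimensional-analysis relations (assumed here; the paper's exact formulation may differ):

    ```python
    def esev_scale(force, deflection, time, m_ref, m_target):
        """Equal-stress equal-velocity (ESEV) scaling of a reference
        response to a target body mass. Assumed standard relations:
        with lam = (m_target/m_ref)**(1/3), force scales by lam**2,
        deflection and time by lam, and velocity is unchanged."""
        lam = (m_target / m_ref) ** (1.0 / 3.0)
        return force * lam ** 2, deflection * lam, time * lam

    # Illustrative numbers: scaling a ~76 kg reference response to a
    # ~46 kg target anthropometry.
    F, d, t = esev_scale(force=5000.0, deflection=0.04, time=0.030,
                         m_ref=76.0, m_target=46.0)
    ```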

  15. Computational efficiency of numerical approximations of tangent moduli for finite element implementation of a fiber-reinforced hyperelastic material model.

    PubMed

    Liu, Haofei; Sun, Wei

    2016-01-01

    In this study, we evaluated computational efficiency of finite element (FE) simulations when a numerical approximation method was used to obtain the tangent moduli. A fiber-reinforced hyperelastic material model for nearly incompressible soft tissues was implemented for 3D solid elements using both the approximation method and the closed-form analytical method, and validated by comparing the components of the tangent modulus tensor (also referred to as the material Jacobian) between the two methods. The computational efficiency of the approximation method was evaluated with different perturbation parameters and approximation schemes, and quantified by the number of iteration steps and CPU time required to complete these simulations. From the simulation results, it can be seen that the overall accuracy of the approximation method is improved by adopting the central difference approximation scheme compared to the forward Euler approximation scheme. For small-scale simulations with about 10,000 DOFs, the approximation schemes could reduce the CPU time substantially compared to the closed-form solution, due to the fact that fewer calculation steps are needed at each integration point. However, for a large-scale simulation with about 300,000 DOFs, the advantages of the approximation schemes diminish because the factorization of the stiffness matrix will dominate the solution time. Overall, as it is material model independent, the approximation method simplifies the FE implementation of a complex constitutive model with comparable accuracy and computational efficiency to the closed-form solution, which makes it attractive in FE simulations with complex material models. PMID:26692168
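
    The core idea, approximating the material Jacobian by perturbing the strain state, with central differences buying an extra order of accuracy at the cost of an extra stress evaluation per component, can be sketched as follows. This toy version perturbs a Voigt strain vector and uses a linear-elastic stand-in for the constitutive model, so it illustrates the approximation schemes rather than the paper's fiber-reinforced implementation:

    ```python
    import numpy as np

    def numerical_tangent(stress_fn, eps, h=1e-6, scheme="central"):
        """Approximate the tangent d(sigma)/d(eps) of a stress function
        by perturbing each strain component (6x1 Voigt notation).
        The paper perturbs the deformation gradient; a plain
        strain-vector perturbation is used here for brevity."""
        n = eps.size
        C = np.zeros((n, n))
        for j in range(n):
            dp = np.zeros(n); dp[j] = h
            if scheme == "central":      # O(h**2) accurate
                C[:, j] = (stress_fn(eps + dp) - stress_fn(eps - dp)) / (2 * h)
            else:                        # forward Euler, O(h)
                C[:, j] = (stress_fn(eps + dp) - stress_fn(eps)) / h
        return C

    # Stand-in: linear isotropic elasticity, so the exact tangent is known.
    lam, mu = 1.0e3, 5.0e2
    Ivol = np.zeros((6, 6)); Ivol[:3, :3] = 1.0
    C_exact = lam * Ivol + 2 * mu * np.diag([1, 1, 1, 0.5, 0.5, 0.5])
    sigma = lambda e: C_exact @ e
    C_num = numerical_tangent(sigma, np.zeros(6))
    print(np.max(np.abs(C_num - C_exact)))   # ~ round-off level
    ```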

  16. Texture as a visual cueing element in computer image generation. I. Representation of the sea surface

    SciTech Connect

    Bookout, G.; Sinacori, J.

    1993-01-01

    The objective of this paper is to advance hypotheses about texture as a visual cueing medium in simulation and to provide guidelines for data base modelers in the use of computer image generator resources to provide effective visual cues for simulation purposes. The emphasis is on texture decoration of the earth's surface data base in order to support low-level flight, i.e., flight at elevations above the surface of 500 feet or less. The appearance of the surface of the sea is the focus of this paper. The physics of the sea's appearance are discussed and guidelines are given for its representation for sea states from 0 (calm) to 5 (fresh breeze of 17-21 knots and six-foot waves, peak-to-trough). The viewpoints considered vary from 500 feet above the mean sea surface to an altitude just above the wave crests. 7 refs.

  17. The effects of computer game elements in physics instruction software for middle schools: A study of cognitive and affective gains

    NASA Astrophysics Data System (ADS)

    Vasquez, David Alan

    Can the educational effectiveness of physics instruction software for middle schoolers be improved by employing "game elements" commonly found in recreational computer games? This study utilized a selected set of game elements to contextualize and embellish physics word problems with the aim of making such problems more engaging. Game elements used included: (1) a fantasy-story context with developed characters; and (2) high-end graphics and visual effects. The primary purpose of the study was to find out whether the added production cost of using such game elements was justified by proportionate gains in physics learning. The theoretical framework for the study was a modified version of Lepper and Malone's "intrinsically-motivating game elements" model. A key design issue in this model is the concept of "endogeneity", or the degree to which the game elements used in educational software are integrated with its learning content. Two competing courseware treatments were custom-designed and produced for the study; both dealt with Newton's first law. The first treatment (T1) was a 45-minute interactive tutorial that featured cartoon characters, color animations, hypertext, audio narration, and realistic motion simulations using the Interactive Physics™ software. The second treatment (T2) was similar to the first except for the addition of approximately three minutes of cinema-like sequences in which characters, game objectives, and a science-fiction story premise were described and portrayed with high-end graphics and visual effects. The sample of 47 middle school students was evenly divided between eighth and ninth graders and between boys and girls. Using a pretest/posttest experimental design, the independent variables for the study were: (1) two levels of treatment; (2) gender; and (3) two schools. The dependent variables were scores on a written posttest for both: (1) physics learning, and (2) attitude toward physics learning. Findings indicated that, although

  18. Computer literacy and attitudes among students in 16 European dental schools: current aspects, regional differences and future trends.

    PubMed

    Mattheos, N; Nattestad, A; Schittek, M; Attström, R

    2002-02-01

    A questionnaire survey was carried out to investigate the competence and attitude of dental students towards computers. The current study presents the findings deriving from 590 questionnaires collected from 16 European dental schools from 9 countries between October 1998 and October 1999. The results suggest that 60% of students use computers for their education, while 72% have access to the Internet. The overall figures, however, disguise major differences between the various universities. Students in Northern and Western Europe seem to rely mostly on university facilities to access the Internet. The same however, is not true for students in Greece and Spain, who appear to depend on home computers. Less than half the students have been exposed to some form of computer literacy education in their universities, with the great majority acquiring their competence in other ways. The Information and Communication Technology (ICT) skills of the average dental student, within this limited sample of dental schools, do not facilitate full use of new media available. In addition, if the observed regional differences are valid, there may be an educational and political problem that could intensify inequalities among professionals in the future. To minimize this potential problem, closer cooperation between academic institutions, with sharing of resources and expertise, is recommended. PMID:11872071

  19. Human factors in the presentation of computer-generated information - Aspects of design and application in automated flight traffic

    NASA Technical Reports Server (NTRS)

    Roske-Hofstrand, Renate J.

    1990-01-01

    The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.

  20. Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Two types of rolling-element bearings, representing radially loaded and thrust loaded bearings, were used for this study. Three hundred forty (340) virtual bearing sets totaling 31,400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5,321 bearings. A simple algebraic relation was established for the upper and lower L10 life limits as a function of the number of bearings failed for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variation between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and the bearing data correlated. There was excellent agreement between the percent of individual components failed from the Monte Carlo simulation and that predicted.
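
    A hedged sketch of the virtual-testing idea: draw bearing lives from a two-parameter Weibull distribution (slope 1.5, as above), estimate L10 from each randomly assembled set, and examine the scatter of the estimates about the calculated life. All numbers below are illustrative, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_l10(n_bearings, l10_true=100.0, slope=1.5):
        """Draw one virtual bearing set and estimate its L10 life.
        Lives follow a two-parameter Weibull whose scale is chosen so
        that the true 10%-failure life equals l10_true."""
        eta = l10_true / (-np.log(0.9)) ** (1.0 / slope)   # Weibull scale
        lives = eta * rng.weibull(slope, size=n_bearings)
        return np.quantile(lives, 0.10)

    # Scatter of estimated L10 over 340 virtual sets of 30 bearings;
    # the median sits near the calculated (true) L10, as the abstract notes.
    estimates = np.array([sample_l10(30) for _ in range(340)])
    print(np.quantile(estimates, [0.05, 0.5, 0.95]))
    ```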

  1. Structure and micro-computed tomography-based finite element modeling of Toucan beak.

    PubMed

    Seki, Yasuaki; Mackey, Mason; Meyers, Marc A

    2012-05-01

    Bird beaks are one of the most fascinating sandwich composites in nature. Their design is composed of a keratinous integument and a bony foam core. We evaluated the structure and mechanical properties of a Toucan beak to establish structure-property relationships. We revealed the hierarchical structure of the Toucan beak by microscopy techniques. The integument consists of 50 μm polygonal keratin tiles with ~7.5 nm embedded intermediate filaments. The branched intermediate filaments were visualized by TEM tomography techniques. The bony foam core, or trabecular bone, is a closed-cell foam which serves as a stiffener for the beak. The three-dimensional foam structure was reconstructed by μ-CT scanning to create a model for finite element analysis (FEA). The mechanical response of the beak foam, including trabeculae and cortical shell, was measured in tension and compression. We found that Young's modulus is 3 (S.D. 2.2) GPa for the trabeculae and 0.3 (S.D. 0.2) GPa for the cortical shell. After obtaining the material parameters, the deformation and microscopic failure of the foam were calculated by FEA. The calculations agree well with the experimental results. PMID:22498278

  2. Development of a Computationally Efficient, High Fidelity, Finite Element Based Hall Thruster Model

    NASA Technical Reports Server (NTRS)

    Jacobson, David (Technical Monitor); Roy, Subrata

    2004-01-01

    This report documents the development of a two-dimensional finite element based numerical model for efficient characterization of the Hall thruster plasma dynamics in the framework of a multi-fluid model. The effects of ionization and recombination have been included in the present model. Based on experimental data, a third-order polynomial in electron temperature is used to calculate the ionization rate. The neutral dynamics is included only through the neutral continuity equation in the presence of a uniform neutral flow. The electrons are modeled as magnetized and hot, whereas the ions are assumed magnetized and cold. The dynamics of the Hall thruster is also investigated in the presence of plasma-wall interaction. The plasma-wall interaction is a function of the wall potential, which in turn is determined by the secondary electron emission and sputtering yield. The effects of secondary electron emission and sputter yield have been considered simultaneously. Simulation results are interpreted in the light of experimental observations and available numerical solutions in the literature.

  3. Computational Study of Laminar Flow Control on a Subsonic Swept Wing Using Discrete Roughness Elements

    NASA Technical Reports Server (NTRS)

    Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; Streett, Craig L.; Carpenter, Mark H.

    2011-01-01

    A combination of parabolized stability equations and secondary instability theory has been applied to a low-speed swept airfoil model with a chord Reynolds number of 7.15 million, with the goals of (i) evaluating this methodology in the context of transition prediction for a known configuration for which roughness-based crossflow transition control has been demonstrated under flight conditions and (ii) analyzing the mechanism of transition delay via the introduction of discrete roughness elements (DRE). Roughness-based transition control involves controlled seeding of suitable, subdominant crossflow modes, so as to weaken the growth of naturally occurring, linearly more unstable crossflow modes. Therefore, a synthesis of receptivity, linear and nonlinear growth of stationary crossflow disturbances, and the ensuing development of high-frequency secondary instabilities is desirable to understand the experimentally observed transition behavior. With further validation, such higher-fidelity prediction methodology could be utilized to assess the potential for crossflow transition control at even higher Reynolds numbers, where experimental data are currently unavailable.

  4. Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2016-01-01

    Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort is a generalization of previous SS spectral collocation work to extend the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) points to tensor-product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although more costly to implement, the LG operators are shown to be significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of the existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.
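
    The two 1D point families compared above can be generated in a few lines. This sketch computes the Legendre-Gauss points and weights with numpy, and the Legendre-Gauss-Lobatto points as the endpoints plus the roots of P_p'; it illustrates the point sets only, not the entropy-stable operators built on them:

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg

    def lgl_points(p):
        """The p+1 Legendre-Gauss-Lobatto points on [-1, 1]:
        the endpoints plus the roots of the derivative of P_p."""
        c = np.zeros(p + 1); c[p] = 1.0          # coefficients of P_p
        interior = leg.legroots(leg.legder(c))   # roots of P_p'
        return np.concatenate(([-1.0], np.sort(interior), [1.0]))

    p = 4
    lg_pts, lg_wts = leg.leggauss(p + 1)  # LG: interior points only
    print(lgl_points(p))                  # LGL: includes element boundaries
    ```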

  5. Computationally-efficient finite-element-based thermal and electromagnetic models of electric machines

    NASA Astrophysics Data System (ADS)

    Zhou, Kan

    With the modern trend of transportation electrification, electric machines are a key component of electric/hybrid electric vehicle (EV/HEV) powertrains. It is therefore important that vehicle powertrain-level and system-level designers and control engineers have access to accurate yet computationally-efficient (CE), physics-based modeling tools of the thermal and electromagnetic (EM) behavior of electric machines. In this dissertation, CE yet sufficiently-accurate thermal and EM models for electric machines, which are suitable for use in vehicle powertrain design, optimization, and control, are developed. This includes not only creating fast and accurate thermal and EM models for specific machine designs, but also the ability to quickly generate and determine the performance of new machine designs through the application of scaling techniques to existing designs. With the developed techniques, the thermal and EM performance can be accurately and efficiently estimated. Furthermore, powertrain or system designers can easily and quickly adjust the characteristics and the performance of the machine in ways that are favorable to the overall vehicle performance.

  6. An assessment of the performance of the Spanwise Iron Magnet rolling moment generating system for magnetic suspension and balance systems using the finite element computer program GFUN

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1982-01-01

    The development of a powerful method of magnetic roll torque generation is essential before construction of a large magnetic suspension and balance system (LMSBS) can be undertaken. Some preliminary computed data concerning a relatively new dc scheme, referred to as the spanwise iron magnet scheme, are presented. Computations made using the finite element computer program 'GFUN' indicate that adequate torque is available for at least a first-generation LMSBS. Torque capability appears limited principally by current electromagnet technology.

  7. Finite element techniques in computational time series analysis of turbulent flows

    NASA Astrophysics Data System (ADS)

    Horenko, I.

    2009-04-01

    In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (e.g., wavelet-based spectral methods) have in general unfeasible numerical complexity in high dimensions; other standard methods (e.g., Kalman filters, MVAR, ARCH/GARCH, etc.) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of the data description in terms of a certain set of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example, and the results will be compared with those obtained by standard approaches. The importance of accounting for the mathematical

  8. Probabilistic Finite Element: Variational Theory

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.

    1985-01-01

    The goal of this research is to provide techniques which are cost-effective and enable the engineer to evaluate the effect of uncertainties in complex finite element models. Embedding the probabilistic aspects in a variational formulation is a natural approach. In addition, a variational approach to probabilistic finite elements enables it to be incorporated within standard finite element methodologies. Therefore, once the procedures are developed, they can easily be adapted to existing general-purpose programs. Furthermore, the variational basis for these methods enables them to be adapted to a wide variety of structural elements and to provide a consistent basis for incorporating probabilistic features in many aspects of the structural problem. Completed tasks include the theoretical development of probabilistic variational equations for structural dynamics, the development of efficient numerical algorithms for probabilistic sensitivity displacement and stress analysis, and the integration of these methodologies into a pilot computer code.

  9. Computed-tomography scan-based finite element analysis of stress distribution in premolars restored with composite resin

    NASA Astrophysics Data System (ADS)

    Kantardžić, I.; Vasiljević, D.; Blažić, L.; Puškar, T.; Tasić, M.

    2012-05-01

    Mechanical properties of restorative material have an effect on stress distribution in the tooth structure and the restorative material during mastication. The aim of this study was to investigate the influence of restorative materials with different moduli of elasticity on stress distribution in the three-dimensional (3D) solid tooth model. Computed tomography scan data of human maxillary second premolars were used for 3D solid model generation. Four composite resins with a modulus of elasticity of 6700, 9500, 14 100 and 21 000 MPa were considered to simulate four different clinical direct restoration types. Each model was subjected to a resulting force of 200 N directed to the occlusal surface, and stress distribution and maximal von Mises stresses were calculated using finite-element analysis. We found that the von Mises stress values and stress distribution in tooth structures did not vary considerably with changing the modulus of elasticity of restorative material.

  10. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

    SciTech Connect

    Hayes, S.L.

    1993-12-01

    SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional control-volume based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided.

  11. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-01

    One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. FPM targets form a tube made of stainless steel which contains nuclear-grade high-enrichment uranium. The FPM irradiation tube is intended to obtain fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. The neutronics problem is solved using first-order perturbation theory derived from the diffusion equation for four groups. In contrast, Mo isotopes have longer half-lives, about 3 days (66 hours), so the delivery of radioisotopes to consumer centers and storage is possible though still limited. The production of this isotope potentially gives significant economic value. The criticality and flux in a multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system. Several parallel algorithms have been developed for the solution of large, sparse matrix systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous works performed reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing to perform the reactivity calculations used in safety analysis. Parallel processing on a multicore computer system allows the calculation to be performed more quickly. This code was applied to the safety-limit calculation of irradiated FPM targets containing highly enriched uranium. The results of the neutron calculations show that for uranium contents of 1.7676 g and 6.1866 g (× 10⁶ cm⁻¹) in a tube, their delta reactivities are still
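
    For reference, the SOR iteration named above generalizes Gauss-Seidel with a relaxation factor omega. A minimal serial sketch on a small dense matrix follows; the paper applies a parallel variant to a large, sparse multigroup diffusion system:

    ```python
    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
        """Successive over-relaxation for A x = b, omega in (0, 2);
        omega = 1 reduces to Gauss-Seidel, the method used by the
        paper's earlier serial calculations."""
        x = np.zeros_like(b)
        for it in range(max_iter):
            x_prev = x.copy()
            for i in range(len(b)):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_prev[i + 1:]
                x[i] = (1 - omega) * x_prev[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_prev) <= tol * np.linalg.norm(b):
                return x, it + 1
        return x, max_iter

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    x, iters = sor(A, b, omega=1.2)
    ```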

  12. A computationally efficient finite element model with perfectly matched layers applied to scattering from axially symmetric objects.

    PubMed

    Zampolli, Mario; Tesei, Alessandra; Jensen, Finn B; Malm, Nils; Blottman, John B

    2007-09-01

    A frequency-domain finite-element (FE) technique for computing the radiation and scattering from axially symmetric fluid-loaded structures subject to a nonsymmetric forcing field is presented. The Berenger perfectly matched layer (PML), applied directly at the fluid-structure interface, makes it possible to emulate the Sommerfeld radiation condition using FE meshes of minimal size. For those cases where the acoustic field is computed over a band of frequencies, the meshing process is simplified by the use of a wavelength-dependent rescaling of the PML coordinates. Quantitative geometry discretization guidelines are obtained from a priori estimates of small-scale structural wavelengths, which dominate the acoustic field at low to mid frequencies. One particularly useful feature of the PML is that it can be applied across the interface between different fluids. This makes it possible to use the present tool to solve problems where the radiating or scattering objects are located inside a layered fluid medium. The proposed technique is verified by comparison with analytical solutions and with validated numerical models. The solutions presented show close agreement for a set of test problems ranging from scattering to underwater propagation. PMID:17927408

  13. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    NASA Astrophysics Data System (ADS)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a

  14. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    SciTech Connect

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-30

    One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. FPM targets form a tube made of stainless steel which contains nuclear-grade high-enrichment uranium. The FPM irradiation tube is intended to obtain fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. The neutronics problem is solved using first-order perturbation theory derived from the diffusion equation for four groups. In contrast, Mo isotopes have longer half-lives, about 3 days (66 hours), so the delivery of radioisotopes to consumer centers and storage is possible though still limited. The production of this isotope potentially gives significant economic value. The criticality and flux in a multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system. Several parallel algorithms have been developed for the solution of large, sparse matrix systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous works performed reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing to perform the reactivity calculations used in safety analysis. Parallel processing on a multicore computer system allows the calculation to be performed more quickly. This code was applied to the safety-limit calculation of irradiated FPM targets containing highly enriched uranium. The results of the neutron calculations show that for uranium contents of 1.7676 g and 6.1866 g (× 10⁶ cm⁻¹) in a tube, their delta

  15. CCM Continuity Constraint Method: A finite-element computational fluid dynamics algorithm for incompressible Navier-Stokes fluid flows

    SciTech Connect

    Williams, P.T.

    1993-09-01

    As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.

  16. Local finite element enrichment strategies for 2D contact computations and a corresponding post-processing scheme

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2013-08-01

    Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum, as reported by Sauer (Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C¹-smoothness on the contact surface. Both strategies, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.

  17. Determination of dominant fibre orientations in fibre-reinforced high-strength concrete elements based on computed tomography scans

    NASA Astrophysics Data System (ADS)

    Vicente, Miguel A.; González, Dorys C.; Mínguez, Jesús

    2014-04-01

    Computed tomography (CT) is a nondestructive technique, based on the absorption of X-rays, that permits the visualisation of the internal structure of materials at micron-range resolution. In this paper, a CT scan is used to determine the position and orientation of the fibres in steel fibre-reinforced high-strength concrete elements. The aim of this paper is to present a numerical procedure, automated through a MATLAB routine specially developed by the authors, which enables fast and reliable determination of the orientation and centre of gravity of each and every fibre. The procedure extends directly to any type of fibre-reinforced material, provided there is a wide difference between the density of the fibres and that of the matrix. The mathematical basis of this procedure is very simple and robust. The result is a fast algorithm and a routine that is easy to use. In addition, the validation tests show that the error is almost zero. This algorithm can help the industry to implement CT technology in product quality-control protocols.
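
    The authors' MATLAB routine is not reproduced here, but one standard way to obtain per-fibre centroids and orientations from a segmented CT volume is to label connected voxel components and take each component's principal axis. A sketch under that assumption:

    ```python
    import numpy as np
    from scipy import ndimage

    def fibre_orientations(volume, threshold, min_voxels=10):
        """Segment bright fibres in a CT volume and return, per fibre,
        the centre of gravity and dominant orientation (unit vector).
        Assumes fibre and matrix densities are well separated, so a
        single intensity threshold isolates the fibres."""
        labels, n = ndimage.label(volume > threshold)
        results = []
        for k in range(1, n + 1):
            coords = np.argwhere(labels == k).astype(float)  # voxel indices
            if coords.shape[0] < min_voxels:                 # skip specks
                continue
            centroid = coords.mean(axis=0)
            # Principal axis = eigenvector of the largest eigenvalue
            # of the coordinate covariance matrix.
            w, v = np.linalg.eigh(np.cov((coords - centroid).T))
            results.append((centroid, v[:, -1]))
        return results
    ```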

  18. A mixed finite element procedure of gradient Cosserat continuum for second-order computational homogenisation of granular materials

    NASA Astrophysics Data System (ADS)

    Li, Xikui; Liang, Yuanbo; Duan, Qinglin; Schrefler, B. A.; Du, Youyao

    2014-11-01

    A mixed finite element (FE) procedure of the gradient Cosserat continuum for the second-order computational homogenisation of granular materials is presented. The proposed mixed FE is developed based on the Hu-Washizu variational principle. Translational displacements, microrotations, and displacement gradients with Lagrange multipliers are taken as the independent nodal variables. The tangent stiffness matrix of the mixed FE is formulated. The advantage of the gradient Cosserat continuum model in capturing the meso-structural size effect is numerically demonstrated. Patch tests are specially designed and performed to validate the mixed FE formulations. A numerical example is presented to demonstrate the performance of the mixed FE procedure in the simulation of strain softening and localisation phenomena, without the need to specify a macroscopic phenomenological constitutive relationship or material failure model. The meso-structural mechanisms of the macroscopic failure of granular materials are detected, i.e. significant development of dissipative sliding and rolling friction among particles in contact, resulting in the loss of contacts.

  19. Low-dose computed tomography screening for lung cancer in a clinical setting: essential elements of a screening program.

    PubMed

    McKee, Brady J; McKee, Andrea B; Kitts, Andrea Borondy; Regis, Shawn M; Wald, Christoph

    2015-03-01

    The purpose of this article is to review clinical computed tomography (CT) lung screening program elements essential to safely and effectively manage the millions of Americans at high risk for lung cancer expected to enroll in lung cancer screening programs over the next 3 to 5 years. To optimize the potential net benefit of CT lung screening and facilitate medical audits benchmarked to national quality standards, radiologists should interpret these examinations using a validated structured reporting system such as Lung-RADS. Patient and physician educational outreach should be enacted to support an informed and shared decision-making process without creating barriers to screening access. Programs must integrate smoking cessation interventions to maximize the clinical efficacy and cost-effectiveness of screening. At an institutional level, budgets should account for the necessary expense of hiring and/or training qualified support staff and equipping them with information technology resources adequate to enroll and track patients accurately over decades of future screening evaluation. At a national level, planning should begin on ways to accommodate the upcoming increased demand for physician services in fields critical to the success of CT lung screening such as diagnostic radiology and thoracic surgery. Institutions with programs that follow these specifications will be well equipped to meet the significant oncoming demand for CT lung screening services and bestow clinical benefits on their patients equal to or beyond what was observed in the National Lung Screening Trial. PMID:25658476

  20. Computational discovery of soybean promoter cis-regulatory elements for the construction of soybean cyst nematode-inducible synthetic promoters.

    PubMed

    Liu, Wusheng; Mazarei, Mitra; Peng, Yanhui; Fethe, Michael H; Rudis, Mary R; Lin, Jingyu; Millwood, Reginald J; Arelli, Prakash R; Stewart, Charles Neal

    2014-10-01

    Computational methods offer great hope but limited accuracy in the prediction of functional cis-regulatory elements; improvements are needed to enable synthetic promoter design. We applied an ensemble strategy for de novo soybean cyst nematode (SCN)-inducible motif discovery among promoters of 18 co-expressed soybean genes that were selected from six reported microarray studies involving a compatible soybean-SCN interaction. A total of 116 overlapping motif regions (OMRs) were discovered bioinformatically that were identified by at least four out of seven bioinformatic tools. Using synthetic promoters, the inducibility of each OMR or motif itself was evaluated by co-localization of gain of function of an orange fluorescent protein reporter and the presence of SCN in transgenic soybean hairy roots. Among 16 OMRs detected from two experimentally confirmed SCN-inducible promoters, 11 OMRs (i.e. 68.75%) were experimentally confirmed to be SCN-inducible, leading to the discovery of 23 core motifs of 5- to 7-bp length, of which 14 are novel in plants. We found that a combination of the three best tools (i.e. SCOPE, W-AlignACE and Weeder) could detect all 23 core motifs. Thus, this strategy is a high-throughput approach for de novo motif discovery in soybean and offers great potential for novel motif discovery and synthetic promoter engineering for any plant and trait in crop biotechnology. PMID:24893752

  1. Two finite element techniques for computing mode I stress intensity factors in two- or three-dimensional problems

    SciTech Connect

    Iskander, S.K.

    1981-02-01

    Two finite element (FE) approaches were used to calculate opening mode I stress intensity factors (K_I) in two- or three-dimensional (2-D and 3-D) problems for the Heavy-Section Steel Technology (HSST) program. For problems that can be modeled in two dimensions, two techniques were used. One of these may be termed an "energy release rate" technique, and the other is based on the classical near-tip displacement and stress field equations. For three-dimensional problems, only the latter technique was used. In the energy release rate technique, K_I is calculated from the change in potential energy of the structure due to a small change in crack length. The potential energy is calculated by the FE method but without completely solving the system of linear equations for the displacements. Furthermore, the system of linear equations is only slightly perturbed by the change in crack length and, therefore, many computations need not be repeated for the second structure with the slight change in crack length. Implementation of these last two items has resulted in considerable savings in the calculation of K_I as compared to two complete FE analyses. These ideas are incorporated in the FMECH code. The accuracy of the methods has been checked by comparing the results of the two approaches with each other and with closed-form solutions. It is estimated that the accuracy of the results is about ±5%.
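
    The energy release rate technique reduces to one formula: G = -ΔΠ/(B Δa) from two analyses at slightly different crack lengths, then K_I = sqrt(E'G). A minimal sketch with hypothetical numbers standing in for the two FE potential energies:

    ```python
    import math

    def k1_from_energy(pi_a, pi_a_plus, da, thickness, E, nu, plane_strain=True):
        """Mode I stress intensity from the energy release rate:
        G = -(dPi/da)/B and K_I = sqrt(E' * G), where E' = E/(1 - nu**2)
        in plane strain and E' = E in plane stress."""
        G = -(pi_a_plus - pi_a) / (da * thickness)
        E_eff = E / (1.0 - nu ** 2) if plane_strain else E
        return math.sqrt(E_eff * G)

    # Hypothetical potential energies (J) from two FE runs whose crack
    # lengths differ by da = 1 mm, for a 0.1 m thick steel section.
    K1 = k1_from_energy(pi_a=-1.000e3, pi_a_plus=-1.002e3, da=1.0e-3,
                        thickness=0.1, E=200e9, nu=0.3)
    ```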

  2. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

    DOEpatents

    Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

    2001-01-16

    Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time," which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume, which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of
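
    A hedged sketch of the integer-increment marching the abstract describes, simplified to stepping along the dominant axis of the particle track through a uniform voxel grid until the material id changes; the patented method also handles oblique tracks and the subsequent capture/scatter/exit scoring:

    ```python
    import numpy as np

    def march_to_interface(materials, start, direction):
        """Step through a uniform voxel grid in unit (integer) increments
        along the dominant axis of `direction`, stopping when the next
        voxel's material id differs from the starting voxel's."""
        axis = int(np.argmax(np.abs(direction)))
        step = 1 if direction[axis] > 0 else -1
        pos = list(start)
        mat0 = materials[tuple(pos)]
        while True:
            pos[axis] += step
            if not (0 <= pos[axis] < materials.shape[axis]):
                return None                 # particle exits the model
            if materials[tuple(pos)] != mat0:
                return tuple(pos)           # first voxel of new material

    # Toy 3D model: a material-1 slab inside a material-0 background.
    vol = np.zeros((16, 16, 16), dtype=int)
    vol[:, :, 8:] = 1
    print(march_to_interface(vol, start=(8, 8, 2), direction=(0.1, 0.0, 0.9)))
    ```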

  3. Analysis of fish otoliths by electrothermal vaporization inductively coupled plasma mass spectrometry: aspects of precipitating otolith calcium with hydrofluoric acid for trace element determination.

    PubMed

    Arslan, Zikri

    2005-03-15

    A method is developed for determination of trace elements, including Ag, As, Cd, Co, Cr, Cu, Mn, Ni, Se, Tl and Zn, in fish otoliths by electrothermal vaporization inductively coupled plasma mass spectrometry (ETV-ICP-MS). Hydrofluoric acid was used to precipitate calcium resulting from acid dissolution of otolith calcium carbonate. Initial acidity of the sample solution influenced the precipitation efficiency of calcium fluoride. Up to 99.5% of Ca was precipitated in solutions that contained less than 2% (v/v) HNO3. Recoveries of the elements obtained from spiked artificial otolith solutions were between 90 and 103%. Stabilization of the elements within the ETV cell was achieved with 0.3 µg Pd/0.2 µg Rh chemical modifier that also afforded optimum sensitivity for multielement determination. The method was validated by the analysis of a fish otolith reference material (CRM) of emperor snapper, and then applied to the determination of the trace elements in otoliths of several fish species captured in Raritan Bay, New Jersey. Results indicated that fish physiology and biological processes could influence the levels of Cu, Mn, Se and Zn in the otoliths of fish inhabiting a similar aqueous environment. Otolith concentrations of Cr and Ni did not show any significant differences among different species. Concentrations for Ag, As, Cd, Co and Tl were also not significantly different, but were very low indicating low affinity of otolith calcium carbonate to these elements. PMID:18969949

  4. Examining the Minimal Required Elements of a Computer-Tailored Intervention Aimed at Dietary Fat Reduction: Results of a Randomized Controlled Dismantling Study

    ERIC Educational Resources Information Center

    Kroeze, Willemieke; Oenema, Anke; Dagnelie, Pieter C.; Brug, Johannes

    2008-01-01

    This study investigated the minimal feedback elements required for a computer-tailored dietary fat reduction intervention to be effective in improving fat intake. In all, 588 healthy Dutch adults were randomly allocated to one of four conditions in a randomized controlled trial: (i) feedback on dietary fat intake [personal feedback (P feedback)],…

  5. The Use of Computer Games as an Educational Tool: Identification of Appropriate Game Types and Game Elements.

    ERIC Educational Resources Information Center

    Amory, Alan; Naicker, Kevin; Vincent, Jacky; Adams, Claudia

    1999-01-01

    Describes research with college students that investigated commercial game types and game elements to determine what would be suitable for education. Students rated logic, memory, visualization, and problem solving as important game elements; these ratings were used to develop a model that links pedagogical issues with game elements. (Author/LRW)

  6. Biological Aspects of Computer Virology

    NASA Astrophysics Data System (ADS)

    Vlachos, Vasileios; Spinellis, Diomidis; Androutsellis-Theotokis, Stefanos

    Recent malware epidemics proved beyond any doubt that frightful predictions of fast-spreading worms have been well founded. While we can identify and neutralize many types of malicious code, often we are not able to do that in a timely enough manner to suppress its uncontrolled propagation. In this paper we discuss the decisive factors that affect the propagation of a worm and evaluate their effectiveness.

  7. Computing Aspects of Interactive Video.

    ERIC Educational Resources Information Center

    Butcher, P. G.

    1986-01-01

    Describes design and production of the award-winning software used to control Great Britain's Open University Materials Science videodisc, the Teddy Bear Disc, which is used to teach undergraduate students about materials engineering. The disc is designed for use in one-week sessions, which students attend in July or August. (MBR)

  8. Numerical Stochastic Homogenization Method and Multiscale Stochastic Finite Element Method - A Paradigm for Multiscale Computation of Stochastic PDEs

    SciTech Connect

    X. Frank Xu

    2010-03-30

    Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is becoming an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, and geo-systems. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work had been done on the integration of multiscale and stochastic methods, and there was no method formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM) (Fig 1). The theory of MSFEM basically decomposes a boundary value problem of random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one. The slow-scale problem corresponds to common engineering modeling practices where fine-scale microstructure is approximated by certain effective constitutive constants, and it can be solved using standard numerical solvers. The fast-scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly met in conventional approaches by proposing a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way to deliver the first computational tool/software on uncertainty quantification of multiscale systems. The applications of MSFEM to engineering problems will directly enhance our modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquake), and biological systems (biological tissues, bones, protein folding). Continuous development of MSFEM will…

  9. Modeling of Interior Ballistic Gas-Solid Flow Using a Coupled Computational Fluid Dynamics-Discrete Element Method.

    PubMed

    Cheng, Cheng; Zhang, Xiaobing

    2013-05-01

    In conventional models for two-phase reactive flow in interior ballistics, the dynamic collision phenomenon of particles is neglected or empirically simplified. However, collisions between particles may play an important role in dilute two-phase flow because the distribution of particles is extremely nonuniform. The collision force may be one of the key factors influencing particle movement. This paper presents a CFD-DEM approach for simulation of interior ballistic two-phase flow that considers the dynamic collision process. The gas phase is treated as an Eulerian continuum and described by a computational fluid dynamics (CFD) method. The solid phase is modeled by the discrete element method (DEM) using a soft-sphere approach for the particle collision dynamics. The model takes into account grain combustion, particle-particle collisions, particle-wall collisions, interphase drag, and heat transfer between the gas and solid phases. The continuous gas phase equations are discretized in finite volume form and solved by the AUSM+-up scheme with a higher-order accurate reconstruction method. Translational and rotational motions of discrete particles are solved by explicit time integration. The direct mapping contact detection algorithm is used. The multigrid method is applied in the void fraction calculation, the contact detection procedure, and the CFD solving procedure. Several verification tests demonstrate the accuracy and reliability of this approach. The simulation of an experimental igniter device in open air shows good agreement between the model and experimental measurements. This paper has implications for improving the ability to capture the complex physics of two-phase flow during the interior ballistic cycle and to predict dynamic collision phenomena at the individual particle scale. PMID:24891728
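
    The soft-sphere closure mentioned above is commonly realized as a linear spring-dashpot normal contact force. A minimal sketch, with illustrative stiffness and damping values rather than those calibrated in the paper:

      import numpy as np

      def soft_sphere_force(x1, x2, v1, v2, r1, r2, kn=1.0e5, cn=5.0):
          """Normal contact force on particle 1 from particle 2 using a
          linear spring-dashpot soft-sphere model (kn, cn illustrative)."""
          d = x2 - x1
          dist = np.linalg.norm(d)
          overlap = (r1 + r2) - dist
          if overlap <= 0.0:
              return np.zeros(3)                       # no contact
          n = d / dist                                 # unit normal, 1 -> 2
          vn = np.dot(v1 - v2, n)                      # normal approach speed
          return -(kn * overlap + cn * vn) * n         # repulsion plus damping

      # Two 1 mm particles approaching head-on along x (values illustrative)
      f = soft_sphere_force(np.zeros(3), np.array([0.0019, 0.0, 0.0]),
                            np.array([0.1, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0]),
                            r1=0.001, r2=0.001)
      print(f)                                         # pushes particle 1 in -x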

  10. Hydropower and Environmental Resource Assessment (HERA): a computational tool for the assessment of the hydropower potential of watersheds considering engineering and socio-environmental aspects.

    NASA Astrophysics Data System (ADS)

    Martins, T. M.; Kelman, R.; Metello, M.; Ciarlini, A.; Granville, A. C.; Hespanhol, P.; Castro, T. L.; Gottin, V. M.; Pereira, M. V. F.

    2015-12-01

    The hydroelectric potential of a river is proportional to its head and water flows. Selecting the best development alternative for greenfield watershed projects is a difficult task, since it must balance demands for infrastructure, especially in the developing world where a large potential remains unexplored, with environmental conservation. Discussions usually diverge into antagonistic views, as in recent projects in the Amazon forest, for example. This motivates the construction of a computational tool that will support a more qualified debate regarding development/conservation options. HERA provides the optimal head-division partition of a river considering technical, economic, and environmental aspects. HERA has three main components: (i) pre-processing GIS of topographic and hydrologic data; (ii) automatic engineering and equipment design and budget estimation for candidate projects; and (iii) translation of the head-partition problem into a mathematical programming model. By integrating automatic calculation with geoprocessing tools, cloud computation, and optimization techniques, HERA makes it possible for countless head-partition alternatives to be compared intrinsically - a great advantage with respect to traditional field surveys followed by engineering design methods. Based on optimization techniques, HERA determines which hydro plants should be built, including location, design, technical data (e.g., water head, reservoir area and volume), engineering design (dam, spillways, etc.), and costs. The results can be visualized in the HERA interface, exported to GIS software, Google Earth, or CAD systems. HERA has a global scope of application since the main input data are a Digital Terrain Model and water inflows at gauging stations. The objective is to contribute to an increased rationality of decisions by presenting to the stakeholders a clear and quantitative view of the alternatives, their opportunities and threats.

  11. Highly recurring sequence elements identified in eukaryotic DNAs by computer analysis are often homologous to regulatory sequences or protein binding sites.

    PubMed Central

    Bodnar, J W; Ward, D C

    1987-01-01

    We have used computer assisted dot matrix and oligonucleotide frequency analyses to identify highly recurring sequence elements of 7-11 base pairs in eukaryotic genes and viral DNAs. Such elements are found much more frequently than expected, often with an average spacing of a few hundred base pairs. Furthermore, the most abundant repetitive elements observed in the ovalbumin locus, the beta-globin gene cluster, the metallothionein gene and the viral genomes of SV40, polyoma, Herpes simplex-1 and Mouse Mammary Tumor Virus were sequences shown previously to be protein binding sites or sequences important for regulating gene expression. These sequences were present in both exons and introns as well as promoter regions. These observations suggest that such sequences are often highly overrepresented within the specific gene segments with which they are associated. Computer analysis of other genetic units, including viral genomes and oncogenes, has identified a number of highly recurring sequence elements that could serve similar regulatory or protein-binding functions. A model for the role of such reiterated sequence elements in DNA organization and function is presented. PMID:3822840
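
    The oligonucleotide frequency analysis can be approximated by simple k-mer counting. A toy sketch (ours; the sequence and threshold are illustrative, and the statistical comparison against expected frequencies is omitted):

      from collections import Counter

      def recurring_elements(seq, k, min_count=3):
          """Count all k-mers in a DNA sequence and return those occurring
          at least `min_count` times, a crude stand-in for the 7-11 bp
          recurring-element search described above."""
          counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
          return {kmer: n for kmer, n in counts.items() if n >= min_count}

      # Toy sequence with a planted 7-mer "TATAAAT"
      seq = "GCGTATAAATCCGTATAAATATGCTATAAATGGC"
      print(recurring_elements(seq, k=7))              # {'TATAAAT': 3}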

  12. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    ERIC Educational Resources Information Center

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer-assisted design/computer-assisted manufacturing (CAD/CAM). Defines, evaluates, reviews, and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  13. Energy Finite Element Analysis for Computing the High Frequency Vibration of the Aluminum Testbed Cylinder and Correlating the Results to Test Data

    NASA Technical Reports Server (NTRS)

    Vlahopoulos, Nickolas

    2005-01-01

    The Energy Finite Element Analysis (EFEA) is a finite element based computational method for high frequency vibration and acoustic analysis. The EFEA solves with finite elements governing differential equations for energy variables. These equations are developed from wave equations. Recently, an EFEA method for computing high frequency vibration of structures either in vacuum or in contact with a dense fluid has been presented. The presence of fluid loading has been considered through added mass and radiation damping. The EFEA developments were validated by comparing EFEA results to solutions obtained by very dense conventional finite element models and solutions from classical techniques such as statistical energy analysis (SEA) and the modal decomposition method for bodies of revolution. EFEA results have also been compared favorably with test data for the vibration and the radiated noise generated by a large scale submersible vehicle. The primary variable in EFEA is the energy density, time-averaged over a period and space-averaged over a wavelength. A joint matrix computed from the power transmission coefficients is utilized for coupling the energy density variables across any discontinuities, such as changes of plate thickness, plate/stiffener junctions, etc. When considering the high frequency vibration of a periodically stiffened plate or cylinder, the flexural wavelength is smaller than the interval length between two periodic stiffeners; therefore, the stiffener stiffness cannot be smeared by computing an equivalent rigidity for the plate or cylinder. The periodic stiffeners must be regarded as coupling components between periodic units. In this paper, Periodic Structure (PS) theory is utilized for computing the coupling joint matrix and for accounting for the periodicity characteristics.

  14. An artificial-neural-network method for the identification of saturated turbogenerator parameters based on a coupled finite-element/state-space computational algorithm

    SciTech Connect

    Chaudhry, S.R.; Ahmed-Zaid, S.; Demerdash, N.A.

    1995-12-01

    An artificial neural network (ANN) is used in the identification of saturated synchronous machine parameters under diverse operating conditions. The training data base for the ANN is generated by a time-stepping coupled finite-element/state-space (CFE-SS) modeling technique which is used in the computation of the saturated parameters of a 20-kV, 733-MVA, 0.85 pf (lagging) turbogenerator at discrete load points in the P-Q capability plane for three different levels of terminal voltage. These computed parameters constitute a learning data base for a multilayer ANN structure which is successfully trained using the back-propagation algorithm. Results indicate that the trained ANN can identify saturated machine reactances for arbitrary load points in the P-Q plane with an error less than 2% of those values obtained directly from the CFE-SS algorithm. Thus, significant savings in computational time are obtained in such parameter computation tasks.
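
    The identification step amounts to fitting a multilayer network to the CFE-SS-generated database by back-propagation. A minimal numpy sketch under assumed data; the (P, Q, V) inputs and the synthetic reactance target below are placeholders for the actual machine database:

      import numpy as np

      rng = np.random.default_rng(0)

      # Placeholder training set: normalized (P, Q, V) -> saturated reactance
      X = rng.uniform(0.0, 1.0, size=(200, 3))
      y = (1.8 - 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

      W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)   # one hidden layer
      W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
      lr = 0.1

      for epoch in range(2000):                        # plain back-propagation
          h = np.tanh(X @ W1 + b1)                     # hidden activations
          err = (h @ W2 + b2) - y                      # output error
          gW2 = h.T @ err / len(X); gb2 = err.mean(0)
          dh = (err @ W2.T) * (1.0 - h**2)             # back-propagated delta
          gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
          W2 -= lr * gW2; b2 -= lr * gb2
          W1 -= lr * gW1; b1 -= lr * gb1

      print("RMS training error:", float(np.sqrt((err**2).mean())))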

  15. The poro-viscoelastic properties of trabecular bone: a micro computed tomography-based finite element study.

    PubMed

    Sandino, Clara; McErlain, David D; Schipilow, John; Boyd, Steven K

    2015-04-01

    Bone is a porous structure with a solid phase that contains hydroxyapatite and collagen. Due to its composition, bone is often represented either as a poroelastic or as a viscoelastic material; however, the poro-viscoelastic formulation that allows integrating the effect of both the fluid flow and the collagen on the mechanical response of the tissue has not been applied yet. The objective of this study was to develop a micro computed tomography (µCT)-based finite element (FE) model of trabecular bone that includes both the poroelastic and the viscoelastic nature of the tissue. Cubes of trabecular bone (N=25) from human distal tibia were scanned with µCT and stress relaxation experiments were conducted. The µCT images were the basis for sample-specific FE models, and the stress relaxation experiments were simulated applying a poro-viscoelastic formulation. The model considers two scales of the tissue: the intertrabecular pore and the lacunar-canalicular pore scales. Independent viscoelastic and poroelastic models were also developed to determine their contribution to the poro-viscoelastic model. All the experiments exhibited a similar relaxation trend. The average reaction force before relaxation was 9.28 × 10² N (SD ± 5.11 × 10² N), and after relaxation was 4.69 × 10² N (SD ± 2.88 × 10² N). The slope of the regression line between the force before and after relaxation was 1.92 (R² = 0.96). The poro-viscoelastic models captured 49% of the variability of the experimental data before relaxation and 33% after relaxation. The relaxation predicted with viscoelastic models was similar to the poro-viscoelastic ones; however, the poroelastic formulation underestimated the reaction force before relaxation. These data suggest that the contribution of viscoelasticity (fluid flow-independent mechanism) to the mechanical response of the tissue is significantly greater than the contribution of the poroelasticity (fluid flow-dependent mechanism). PMID:25591049

  16. NASTRAN variance analysis and plotting of HBDY elements. [analysis of uncertainties of the computer results as a function of uncertainties in the input data

    NASA Technical Reports Server (NTRS)

    Harder, R. L.

    1974-01-01

    The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary (HBDY) elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size, and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties of the computed results as a function of uncertainties of the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives, and then can determine the expected deviations of output quantities.
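
    The procedure, one solution at the expected inputs plus one perturbed solution per uncertain variable, differenced to form derivatives, can be sketched generically. The thermal model below is a hypothetical stand-in for a NASTRAN solution:

      import numpy as np

      def variance_analysis(model, x_nominal, x_sigma):
          """First-order variance propagation: perturb each uncertain input
          by one standard deviation, difference against the nominal solution
          to form derivatives, and sum the squared contributions."""
          x0 = np.asarray(x_nominal, dtype=float)
          t0 = model(x0)
          var = 0.0
          for i, sig in enumerate(x_sigma):
              xp = x0.copy()
              xp[i] += sig                             # perturb one input
              dTdx = (model(xp) - t0) / sig            # finite-difference slope
              var += (dTdx * sig) ** 2                 # first-order term
          return t0, np.sqrt(var)

      # Hypothetical 1-D conduction: T = Q*L/(k*A) + T_inf
      model = lambda x: x[0] * x[1] / (x[2] * x[3]) + x[4]
      T, sigma_T = variance_analysis(model,
                                     [100.0, 0.05, 15.0, 0.01, 300.0],  # Q, L, k, A, T_inf
                                     [5.0, 0.001, 1.5, 0.0005, 2.0])    # their sigmas
      print(T, sigma_T)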

  17. Generic element processor (application to nonlinear analysis)

    NASA Technical Reports Server (NTRS)

    Stanley, Gary

    1989-01-01

    The focus here is on one aspect of the Computational Structural Mechanics (CSM) Testbed: finite element technology. The approach involves a Generic Element Processor: a command-driven, database-oriented software shell that facilitates introduction of new elements into the testbed. This shell features an element-independent corotational capability that upgrades linear elements to geometrically nonlinear analysis, and corrects the rigid-body errors that plague many contemporary plate and shell elements. Specific elements that have been implemented in the Testbed via this mechanism include the Assumed Natural-Coordinate Strain (ANS) shell elements, developed with Professor K. C. Park (University of Colorado, Boulder), a new class of curved hybrid shell elements, developed by Dr. David Kang of LPARL (formerly a student of Professor T. Pian), other shell and solid hybrid elements developed by NASA personnel, and recently a repackaged version of the workhorse shell element used in the traditional STAGS nonlinear shell analysis code. The presentation covers: (1) user and developer interfaces to the generic element processor, (2) an explanation of the built-in corotational option, (3) a description of some of the shell-elements currently implemented, and (4) application to sample nonlinear shell postbuckling problems.

  18. Verification of a non-hydrostatic dynamical core using the horizontal spectral element method and vertical finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-11-01

    The non-hydrostatic (NH) compressible Euler equations for dry atmosphere were solved in a simplified two-dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. By using horizontal SEM, which decomposes the physical domain into smaller pieces with a small communication stencil, a high level of scalability can be achieved. By using vertical FDM, an easy method for coupling the dynamics and existing physics packages can be provided. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind-biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative and integral terms. For temporal integration, a time-split, third-order Runge-Kutta (RK3) integration technique was applied. The Euler equations that were used here are in flux form based on the hydrostatic pressure vertical coordinate. The equations are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate was implemented in this model. We validated the model by conducting the widely used standard tests: linear hydrostatic mountain wave, tracer advection, and gravity wave over the Schär-type mountain, as well as density current, inertia-gravity wave, and rising thermal bubble. The results from these tests demonstrated that the model using the horizontal SEM and the vertical FDM is accurate and robust provided sufficient diffusion is applied. The results with various horizontal resolutions also showed convergence of second-order accuracy due to the accuracy of the time integration scheme and that of the vertical direction, although high-order basis functions were used in the horizontal. By using the 2-D slice model, we effectively showed that the combined spatial…
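
    The horizontal SEM rests on nodal bases at Gauss-Lobatto-Legendre points. As a minimal sketch of the standard construction (ours, not code from the model above), the GLL points of order n are the endpoints plus the roots of the derivative of the Legendre polynomial P_n:

      import numpy as np
      from numpy.polynomial import legendre

      def gll_points(n):
          """Gauss-Lobatto-Legendre points on [-1, 1] for polynomial order n:
          the endpoints together with the roots of P_n'(x)."""
          interior = legendre.Legendre.basis(n).deriv().roots()
          return np.concatenate(([-1.0], interior, [1.0]))

      print(gll_points(4))   # 5 nodal points per 1-D element edge at order 4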

  19. Aspects of Language.

    ERIC Educational Resources Information Center

    Bolinger, Dwight

    A survey of the substance of linguistics and of the activities of linguists is presented in an attempt to acquaint ordinary readers with the various aspects of language. A discussion of the human tendency toward speech, of the traits of language, and of phonetic elements prepares the way for an analysis of the structure of language in terms of…

  20. Biomechanical aspects of segmented arch mechanics combined with power arm for controlled anterior tooth movement: A three-dimensional finite element study

    PubMed Central

    Ozaki, Hiroya; Tominaga, Jun-ya; Hamanaka, Ryo; Sumi, Mayumi; Chiang, Pao-Chang; Tanaka, Motohiro; Koga, Yoshiyuki

    2015-01-01

    The purpose of this study was to determine the optimal length of power arms for achieving controlled anterior tooth movement in segmented arch mechanics combined with a power arm. A three-dimensional finite element method was applied for the simulation of en masse anterior tooth retraction in segmented power arm mechanics. The type of tooth movement, namely, the location of the center of rotation of the maxillary central incisor in association with power arm length, was calculated after the retraction force was applied. When a 0.017 × 0.022-in archwire was inserted into the 0.018-in slot bracket, bodily movement was obtained at a power arm length of 9.1 mm, namely, at the level of 1.8 mm above the center of resistance. When a 0.018 × 0.025-in full-size archwire was used, bodily movement of the tooth was produced at a power arm length of 7.0 mm, namely, at the level of 0.3 mm below the center of resistance. Segmented arch mechanics required a shorter power arm length for achieving any type of controlled anterior tooth movement as compared to sliding mechanics. Therefore, this space-closing mechanics could be widely applied even for patients whose gingivobuccal fold is shallow. The segmented arch mechanics combined with a power arm could provide a moment-to-force ratio sufficient for controlled anterior tooth movement without generating friction or vertical forces when the retraction force is applied parallel to the occlusal plane. It is, therefore, considered that the segmented power arm mechanics has a simple appliance design and allows more efficient and controllable tooth movement. PMID:25610497

  1. Legal aspects.

    PubMed

    Escher, A

    1975-01-01

    The manufacture, application, use and disposal of fluorescent whitening agents (FWAs) may give rise to legal questions relating mainly to environmental protection and the effects on man and animals. In addition to legal aspects, certain commercial aspects such as the law of competition and the obligations of industry, including compensation for damage caused by FWAs, are discussed. PMID:1064546

  2. Effect of damper on overall and blade-element performance of a compressor rotor having a tip speed of 1151 feet per second and an aspect ratio of 3.6

    NASA Technical Reports Server (NTRS)

    Lewis, G. W.; Hager, R. D.

    1974-01-01

    The overall and blade-element performance of two configurations of a moderately high aspect ratio transonic compressor rotor are presented. The subject rotor has conventional blade dampers; its performance is compared with that of a rotor utilizing dual wire friction dampers. At design speed the subject rotor achieved a pressure ratio of 1.52 and an efficiency of 0.89 at a near-design weight flow of 72.1 pounds per second. The rotor with wire dampers gave consistently higher pressure ratios at each speed, but efficiencies for the two rotors were about the same. Stall margin for the subject rotor was 20.4 percent, but for the wire-damped rotor only 4.0 percent.

  3. Analysis of Vertebral Bone Strength, Fracture Pattern, and Fracture Location: A Validation Study Using a Computed Tomography-Based Nonlinear Finite Element Analysis

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is an advanced computer technique of structural stress analysis developed in engineering mechanics. Because the compressive behavior of vertebral bone shows nonlinear behavior, a nonlinear FEA should be utilized to analyze the clinical vertebral fracture. In this article, a computed tomography-based nonlinear FEA (CT/FEA) to analyze the vertebral bone strength, fracture pattern, and fracture location is introduced. The accuracy of the CT/FEA was validated by performing experimental mechanical testing with human cadaveric specimens. Vertebral bone strength and the minimum principal strain at the vertebral surface were accurately analyzed using the CT/FEA. The experimental fracture pattern and fracture location were also accurately simulated. Optimization of the element size was performed by assessing the accuracy of the CT/FEA, and the optimum element size was assumed to be 2 mm. It is expected that the CT/FEA will be valuable in analyzing vertebral fracture risk and assessing therapeutic effects on osteoporosis. PMID:26029476

  4. Analysis of vertebral bone strength, fracture pattern, and fracture location: a validation study using a computed tomography-based nonlinear finite element analysis.

    PubMed

    Imai, Kazuhiro

    2015-06-01

    Finite element analysis (FEA) is an advanced computer technique of structural stress analysis developed in engineering mechanics. Because the compressive behavior of vertebral bone shows nonlinear behavior, a nonlinear FEA should be utilized to analyze the clinical vertebral fracture. In this article, a computed tomography-based nonlinear FEA (CT/FEA) to analyze the vertebral bone strength, fracture pattern, and fracture location is introduced. The accuracy of the CT/FEA was validated by performing experimental mechanical testing with human cadaveric specimens. Vertebral bone strength and the minimum principal strain at the vertebral surface were accurately analyzed using the CT/FEA. The experimental fracture pattern and fracture location were also accurately simulated. Optimization of the element size was performed by assessing the accuracy of the CT/FEA, and the optimum element size was assumed to be 2 mm. It is expected that the CT/FEA will be valuable in analyzing vertebral fracture risk and assessing therapeutic effects on osteoporosis. PMID:26029476

  5. Coronary arterial dynamics computation with medical-image-based time-dependent anatomical models and element-based zero-stress state estimates

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Torii, Ryo; Takagi, Hirokazu; Tezduyar, Tayfun E.; Xu, Xiao Y.

    2014-10-01

    We propose a method for coronary arterial dynamics computation with medical-image-based time-dependent anatomical models. The objective is to improve the computational analysis of coronary arteries for better understanding of the links between the atherosclerosis development and mechanical stimuli such as endothelial wall shear stress and structural stress in the arterial wall. The method has two components. The first one is element-based zero-stress (ZS) state estimation, which is an alternative to prestress calculation. The second one is a "mixed ZS state" approach, where the ZS states for different elements in the structural mechanics mesh are estimated with reference configurations based on medical images coming from different instants within the cardiac cycle. We demonstrate the robustness of the method in a patient-specific coronary arterial dynamics computation where the motion of a thin strip along the arterial surface and two cut surfaces at the arterial ends is specified to match the motion extracted from the medical images.

  6. Administrative Aspects of Human Experimentation.

    ERIC Educational Resources Information Center

    Irvine, George W.

    1992-01-01

    The following administrative aspects of scientific experimentation with human subjects are discussed: the definition of human experimentation; the distinction between experimentation and treatment; investigator responsibility; documentation; the elements and principles of informed consent; and the administrator's role in establishing and…

  7. Field, model, and computer simulation study of some aspects of the origin and distribution of Colorado Plateau-type uranium deposits

    USGS Publications Warehouse

    Ethridge, F.G.; Sunada, D.K.; Tyler, Noel; Andrews, Sarah

    1982-01-01

    Numerous hypotheses have been proposed to account for the nature and distribution of tabular uranium and vanadium-uranium deposits of the Colorado Plateau. In one of these hypotheses it is suggested that the deposits resulted from geochemical reactions at the interface between a relatively stagnant groundwater solution and a dynamic, ore-carrying groundwater solution which permeated the host sandstones (Shawe, 1956; Granger, et al., 1961; Granger, 1968, 1976; and Granger and Warren, 1979). The study described here was designed to investigate some aspects of this hypothesis, particularly the nature of fluid flow in sands and sandstones, the nature and distribution of deposits, and the relations between the deposits and the host sandstones. The investigation, which was divided into three phases, involved physical model, field, and computer simulation studies. During the initial phase of the investigation, physical model studies were conducted in porous-media flumes. These studies verified the fact that humic acid precipitates could form at the interface between a humic acid solution and a potassium aluminum sulfate solution and that the nature and distribution of these precipitates were related to flow phenomena and to the nature and distribution of the host porous media. During the second phase of the investigation, field studies of permeability and porosity patterns in Holocene stream deposits were conducted and the data obtained were used to design more realistic porous-media models. These model studies, which simulated actual stream deposits, demonstrated that precipitates possess many characteristics, in terms of their nature and relation to host sandstones, that are similar to ore deposits of the Colorado Plateau. The final phase of the investigation involved field studies of actual deposits, additional model studies in a large indoor flume, and computer simulation studies. The field investigations provided an up-to-date interpretation of the depositional…

  8. Finite elements: Theory and application

    NASA Technical Reports Server (NTRS)

    Dwoyer, D. L. (Editor); Hussaini, M. Y. (Editor); Voigt, R. G. (Editor)

    1988-01-01

    Recent advances in FEM techniques and applications are discussed in reviews and reports presented at the ICASE/LaRC workshop held in Hampton, VA in July 1986. Topics addressed include FEM approaches for partial differential equations, mixed FEMs, singular FEMs, FEMs for hyperbolic systems, iterative methods for elliptic finite-element equations on general meshes, mathematical aspects of FEMs for incompressible viscous flows, and gradient-weighted moving finite elements in two dimensions. Consideration is given to adaptive flux-corrected FEM transport techniques for CFD, mixed and singular finite elements and the field BEM, p and h-p versions of the FEM, transient analysis methods in computational dynamics, and FEMs for integrated flow/thermal/structural analysis.

  9. 3-D magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on SMP computers - Part I: forward problem and parameter Jacobians

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    We have developed an algorithm, which we call HexMT, for 3-D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permit incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used throughout, including the forward solution, parameter Jacobians and model parameter update. In Part I, the forward simulator and Jacobian calculations are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequencies or small material admittivities, the E-field requires divergence correction. With the help of Hodge decomposition, the correction may be applied in one step after the forward solution is calculated. This allows accurate E-field solutions in dielectric air. The system matrix factorization and source vector solutions are computed using the MKL PARDISO library, which shows good scalability through 24 processor cores. The factorized matrix is used to calculate the forward response as well as the Jacobians of electromagnetic (EM) field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure, several synthetic topographic models and the natural topography of Mount Erebus in Antarctica. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of EM waves normal to the slopes at high frequencies. Run-time tests of the parallelized algorithm indicate that for meshes as large as 176 × 176 × 70 elements, MT forward responses and Jacobians can be calculated in ~1.5 hr per frequency. Together with an efficient inversion parameter step described in Part II, MT inversion problems of 200-300 stations are computable with total run times…

  10. Finite-element nonlinear transient response computer programs PLATE 1 and CIVM-PLATE 1 for the analysis of panels subjected to impulse or impact loads

    NASA Technical Reports Server (NTRS)

    Spilker, R. L.; Witmer, E. A.; French, S. E.; Rodal, J. J. A.

    1980-01-01

    Two computer programs are described for predicting the transient large-deflection elastic-viscoplastic responses of thin, single-layer, initially flat, unstiffened or integrally stiffened, Kirchhoff-Love ductile metal panels. The PLATE 1 program pertains to structural responses produced by prescribed externally applied transient loading or prescribed initial velocity distributions. The collision-imparted velocity method program, CIVM-PLATE 1, concerns structural responses produced by impact of an idealized nondeformable fragment. Finite elements are used to represent the structure in both programs. Strain hardening and strain rate effects of initially isotropic material are considered.

  11. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  12. Survey of Unsteady Computational Aerodynamics for Horizontal Axis Wind Turbines

    NASA Astrophysics Data System (ADS)

    Frunzulicǎ, F.; Dumitrescu, H.; Cardoş, V.

    2010-09-01

    We present a short review of aerodynamic computational models for horizontal axis wind turbines (HAWT). The models presented have various levels of complexity for calculating the aerodynamic loads on the rotor of a HAWT, starting with the simplest blade element momentum (BEM) model and ending with the complex model based on the Navier-Stokes equations. We also present some computational aspects of these models.
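
    For the simplest of these models, the core BEM calculation for a single blade element can be sketched as a textbook fixed-point iteration (ours; no tip-loss or high-induction corrections, and all input values are illustrative):

      import math

      def bem_induction(tsr_local, sigma, cl, cd, tol=1.0e-6):
          """Iterate the axial (a) and tangential (ap) induction factors of
          one blade element from local speed ratio, solidity and airfoil
          coefficients: the classical blade element momentum balance."""
          a, ap = 0.3, 0.0
          for _ in range(500):
              phi = math.atan2(1.0 - a, tsr_local * (1.0 + ap))   # inflow angle
              cn = cl * math.cos(phi) + cd * math.sin(phi)        # normal coeff.
              ct = cl * math.sin(phi) - cd * math.cos(phi)        # tangential coeff.
              a_new = 1.0 / (4.0 * math.sin(phi)**2 / (sigma * cn) + 1.0)
              ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
              if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
                  return a_new, ap_new
              a, ap = a_new, ap_new
          return a, ap

      # Hypothetical mid-span element: local speed ratio 5, solidity 0.03
      print(bem_induction(tsr_local=5.0, sigma=0.03, cl=1.0, cd=0.01))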

  13. Recommended data elements for the descriptive cataloging of computer-based educational materials in the health sciences.

    PubMed

    Lyon-Hartmann, B; Goldstein, C M

    1978-01-01

    A large part of the mission of the National Library of Medicine is to collect, index, and disseminate the world's biomedical literature. Until recently, this related only to serial and monographic material, but as new forms of information appear responsibility for bibliographic control of these also must be assumed by the National Library of Medicine. This paper briefly describes the type of information that will be necessary before descriptive cataloging of computer-based educational materials can be attempted. PMID:10306980

  14. Robust and portable capacity computing method for many finite element analyses of a high-fidelity crustal structure model aimed for coseismic slip estimation

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hirahara, Kazuro; Hyodo, Mamoru; Hori, Takane; Hori, Muneo

    2016-09-01

    Computation of many Green's functions (GFs) in finite element (FE) analyses of crustal deformation is an essential technique in inverse analyses of coseismic slip estimations. In particular, analysis based on a high-resolution FE model (high-fidelity model) is expected to contribute to the construction of a community standard FE model and benchmark solution. Here, we propose a naive but robust and portable capacity computing method to compute many GFs using a high-fidelity model, assuming that various types of PC clusters are used. The method is based on the master-worker model, implemented using the Message Passing Interface (MPI), to perform robust and efficient input/output operations. The method was applied to numerical experiments of coseismic slip estimation in the Tohoku region of Japan; comparison of the estimated results with those generated using lower-fidelity models revealed the benefits of using a high-fidelity FE model in coseismic slip distribution estimation. Additionally, the proposed method computes several hundred GFs more robustly and efficiently than methods without the master-worker model and MPI.
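
    The master-worker pattern described above can be sketched with mpi4py. This is a generic skeleton, not the authors' code: the task list and the stand-in "solve" are placeholders for Green's function computations, and the master centralizes all I/O. It assumes 1 < size <= number of tasks + 1 (run e.g. with mpiexec -n 4).

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TAG_WORK, TAG_DONE = 1, 2
      tasks = list(range(100))                 # e.g. 100 Green's function sources

      if rank == 0:                            # master: hand out tasks on demand
          n_sent = 0
          for worker in range(1, size):        # seed every worker once
              comm.send(tasks[n_sent], dest=worker, tag=TAG_WORK); n_sent += 1
          for _ in range(len(tasks)):
              status = MPI.Status()
              result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
              # ... centralized output: write `result` to disk here ...
              if n_sent < len(tasks):
                  comm.send(tasks[n_sent], dest=status.Get_source(), tag=TAG_WORK)
                  n_sent += 1
              else:
                  comm.send(None, dest=status.Get_source(), tag=TAG_DONE)
      else:                                    # worker: solve until told to stop
          while True:
              status = MPI.Status()
              task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
              if status.Get_tag() == TAG_DONE:
                  break
              comm.send(task * task, dest=0)   # stand-in for one FE solve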

  15. Object-oriented design and implementation of CFDLab: a computer-assisted learning tool for fluid dynamics using dual reciprocity boundary element methodology

    NASA Astrophysics Data System (ADS)

    Friedrich, J.

    1999-08-01

    As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising to help both teachers and learners in their difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their concepts, flexibility, maintainability, and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, an educational PC program for university level called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme whose simple pre- and postprocessing allow it to handle a variety of relevant governing equations in two dimensions (2D Laplace, Poisson, diffusion, and transient convection-diffusion) on personal computers.

  16. Computationally Efficient Finite Element Analysis Method Incorporating Virtual Equivalent Projected Model For Metallic Sandwich Panels With Pyramidal Truss Cores

    SciTech Connect

    Seong, Dae-Yong; Jung, Chang Gyun; Yang, Dong-Yol

    2007-05-17

    Metallic sandwich panels composed of two face sheets and cores with low relative density have lightweight characteristics and various static and dynamic load-bearing functions. To predict the forming characteristics, performance, and formability of these structured materials, full 3D modeling and analysis involving tremendous computational time and memory are required. Some constitutive continuum models, including homogenization approaches to solve these problems, have limitations with respect to the prediction of local buckling of face sheets and inner structures. In this work, a computationally efficient FE-analysis method incorporating a virtual equivalent projected model that enables the simulation of local buckling modes is newly introduced for the analysis of metallic sandwich panels. Two-dimensional models using the projected shapes of 3D structures have the same equivalent elastic-plastic properties as the original geometries, which have anisotropic stiffness, yield strength, and hardening function. The sizes and isotropic properties of the virtual equivalent projected model have been estimated analytically to match the equivalent properties and face buckling strength of the full model. Three-point bending processes with quasi-two-dimensional loads and boundary conditions are simulated to establish the validity of the proposed method. The deformed shapes and load-displacement curves of the virtual equivalent projected model are found to be almost the same as those of a full three-dimensional FE-analysis while reducing computational time drastically.

  17. Wall shear stress calculations in space-time finite element computation of arterial fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Moorman, Creighton; Wright, Samuel; Christopher, Jason; Tezduyar, Tayfun E.

    2009-10-01

    The stabilized space-time fluid-structure interaction (SSTFSI) technique was applied to arterial FSI problems soon after its development by the Team for Advanced Flow Simulation and Modeling. The SSTFSI technique is based on the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) formulation and is supplemented with a number of special techniques developed for arterial FSI. The special techniques developed in the recent past include a recipe for pre-FSI computations that improve the convergence of the FSI computations, using an estimated zero-pressure arterial geometry, Sequentially Coupled Arterial FSI technique, using layers of refined fluid mechanics mesh near the arterial walls, and a special mapping technique for specifying the velocity profile at inflow boundaries with non-circular shape. In this paper we introduce some additional special techniques, related to the projection of fluid-structure interface stresses, calculation of the wall shear stress (WSS), and calculation of the oscillatory shear index. In the test computations reported here, we focus on WSS calculations in FSI modeling of a patient-specific middle cerebral artery segment with aneurysm. Two different structural mechanics meshes and three different fluid mechanics meshes are tested to investigate the influence of mesh refinement on the WSS calculations.

  18. Optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system

    SciTech Connect

    Shen, L.; Levine, S.H.; Catchen, G.L.

    1987-07-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration.

  19. Prediction of acoustic radiation from axisymmetric surfaces with arbitrary boundary conditions using the boundary element method on a distributed computing system.

    PubMed

    Wright, Louise; Robinson, Stephen P; Humphrey, Victor F

    2009-03-01

    This paper presents a computational technique using the boundary element method for prediction of radiated acoustic waves from axisymmetric surfaces with nonaxisymmetric boundary conditions. The aim is to predict the far-field behavior of underwater acoustic transducers based on their measured behavior in the near-field. The technique is valid for all wavenumbers and uses a volume integral method to calculate the singular integrals required by the boundary element formulation. The technique has been implemented on a distributed computing system to take advantage of its parallel nature, which has led to significant reductions in the time required to generate results. Measurement data generated by a pair of free-flooding underwater acoustic transducers encapsulated in a polyurethane polymer have been used to validate the technique against experiment. The dimensions of the outer surface of the transducers (including the polymer coating) were an outer diameter of 98 mm with an 18 mm wall thickness and a length of 92 mm. The transducers were mounted coaxially, giving an overall length of 185 mm. The cylinders had resonance frequencies at 13.9 and 27.5 kHz, and the data were gathered at these frequencies. PMID:19275294

  20. Computational simulation of the bone remodeling using the finite element method: an elastic-damage theory for small displacements

    PubMed Central

    2013-01-01

    Background: The ability of bone to resist damage by repairing itself and adapting to environmental conditions is its most important property. These adaptive changes are regulated by a physiological process commonly called bone remodeling. A better understanding of this process requires applying the theory of elastic damage, under the hypothesis of small displacements, to a bone structure and examining its mechanical behavior. Results: The purpose of the present study is to simulate a two-dimensional model of a proximal femur by taking into consideration elastic damage and mechanical stimulus. Here, we present a mathematical model based on a system of nonlinear ordinary differential equations and we develop the variational formulation for the mechanical problem. Then, we implement our mathematical model into the finite element method algorithm to investigate the effect of the damage. Conclusion: The results are consistent with the existing literature, which shows that the bone stiffness drops in damaged bone structure under mechanical loading. PMID:23663260

  1. TURTLE with MAD input (Trace Unlimited Rays Through Lumped Elements) -- A computer program for simulating charged particle beam transport systems and DECAY TURTLE including decay calculations

    SciTech Connect

    Carey, D.C.

    1999-12-09

    TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT. For the convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped element approximation, described below, precludes the inclusion of the effect of conventional local geometric aberrations (due to large phase space) of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.
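
    The lumped-element picture underlying this kind of tracking is first-order transfer-matrix multiplication. A minimal sketch in the (x, x') sub-space (ours, not TURTLE's input format; the element parameters are illustrative):

      import numpy as np

      def drift(L):
          """Transfer matrix of a field-free drift of length L (m)."""
          return np.array([[1.0, L], [0.0, 1.0]])

      def thin_quad(f):
          """Thin-lens quadrupole of focal length f (m); f > 0 focuses in x."""
          return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

      # Rays traverse drift-quad-drift; the rightmost matrix acts first.
      system = drift(2.0) @ thin_quad(1.5) @ drift(1.0)
      rays = np.array([[0.001, 0.0],            # x (m), x' (rad) per ray
                       [0.0, 0.0005],
                       [-0.002, 0.001]]).T
      print(system @ rays)                      # transported ray coordinates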

  2. A 2-D spectral-element method for computing spherical-earth seismograms-II. Waves in solid-fluid media

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, Tarje; Fournier, Alexandre; Dahlen, F. A.

    2008-09-01

    We portray a dedicated spectral-element method to solve the elastodynamic wave equation upon spherically symmetric earth models at the expense of a 2-D domain. Using this method, 3-D wavefields of arbitrary resolution may be computed to obtain Fréchet sensitivity kernels, especially for diffracted arrivals. The meshing process is presented for varying frequencies in terms of its efficiency as measured by the total number of elements, their spacing variations and stability criteria. We assess the mesh quantitatively by defining these numerical parameters in a general non-dimensionalized form such that comparisons to other grid-based methods are straightforward. Efficient-mesh generation for the PREM example and a minimum-messaging domain decomposition and parallelization strategy lay foundations for waveforms up to frequencies of 1 Hz on moderate PC clusters. The discretization of fluid, solid and respective boundary regions is similar to previous spectral-element implementations, save for a fluid potential formulation that incorporates the density, thereby yielding identical boundary terms on fluid and solid sides. We compare the second-order Newmark time extrapolation scheme with a newly implemented fourth-order symplectic scheme and argue in favour of the latter in cases of propagation over many wavelengths due to drastic accuracy improvements. Various validation examples such as full moment-tensor seismograms, wavefield snapshots, and energy conservation illustrate the favourable behaviour and potential of the method.

  3. A finite-element approach to the direct computation of relative cardiovascular pressure from time-resolved MR velocity data

    PubMed Central

    Krittian, Sebastian B.S.; Lamata, Pablo; Michler, Christian; Nordsletten, David A.; Bock, Jelena; Bradley, Chris P.; Pitcher, Alex; Kilner, Philip J.; Markl, Michael; Smith, Nic P.

    2012-01-01

    The evaluation of cardiovascular velocities, their changes through the cardiac cycle and the consequent pressure gradients has the capacity to improve understanding of subject-specific blood flow in relation to adjacent soft tissue movements. Magnetic resonance time-resolved 3D phase contrast velocity acquisitions (4D flow) represent an emerging technology capable of measuring the cyclic changes of large scale, multi-directional, subject-specific blood flow. A subsequent evaluation of pressure differences in enclosed vascular compartments is a further step which is currently not directly available from such data. The focus of this work is to address this deficiency through the development of a novel simulation workflow for the direct computation of relative cardiovascular pressure fields. Input information is provided by enhanced 4D flow data and derived MR domain masking. The underlying methodology shows numerical advantages in terms of robustness, global domain composition, the isolation of local fluid compartments and a treatment of boundary conditions. This approach is demonstrated across a range of validation examples which are compared with analytic solutions. Four subject-specific test cases are subsequently run, showing good agreement with previously published calculations of intra-vascular pressure differences. The computational engine presented in this work contributes to non-invasive access to relative pressure fields, incorporates the effects of both blood flow acceleration and viscous dissipation, and enables enhanced evaluation of cardiovascular blood flow. PMID:22626833
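
    The core of such a computation is recovering the pressure gradient from the measured velocity field through the Navier-Stokes momentum balance; schematically, in a standard form (not necessarily the exact discrete operator used in the paper),

      \nabla p = -\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) + \mu \nabla^{2} \mathbf{u},

    which the finite-element workflow then integrates over the masked fluid domain to obtain relative pressure fields, capturing both flow acceleration and viscous dissipation.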

  4. Discrete Element Modeling

    SciTech Connect

    Morris, J; Johnson, S

    2007-12-03

    The Distinct Element Method (also frequently referred to as the Discrete Element Method) (DEM) is a Lagrangian numerical technique where the computational domain consists of discrete solid elements which interact via compliant contacts. This can be contrasted with Finite Element Methods where the computational domain is assumed to represent a continuum (although many modern implementations of the FEM can accommodate some Distinct Element capabilities). Often the terms Discrete Element Method and Distinct Element Method are used interchangeably in the literature, although Cundall and Hart (1992) suggested that Discrete Element Methods should be a more inclusive term covering Distinct Element Methods, Displacement Discontinuity Analysis and Modal Methods. In this work, DEM specifically refers to the Distinct Element Method, where the discrete elements interact via compliant contacts, in contrast with Displacement Discontinuity Analysis where the contacts are rigid and all compliance is taken up by the adjacent intact material.

  5. Progressive Damage Analysis of Laminated Composite (PDALC)-A Computational Model Implemented in the NASA COMET Finite Element Code

    NASA Technical Reports Server (NTRS)

    Lo, David C.; Coats, Timothy W.; Harris, Charles E.; Allen, David H.

    1996-01-01

    A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete list of all testbed processors and inputs that are required for this analysis is included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occur during the load history. Residual strength predictions made with this information compared favorably with experimental measurements.

  6. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
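
    The variogram analogy at the heart of VARS is easy to state: for a response y sampled across a factor's range, gamma(h) = 0.5 * E[(y(x+h) - y(x))^2] summarizes variability as a function of perturbation scale h. The Python sketch below (a toy one-factor response, not the STAR-VARS sampling strategy) computes such a directional variogram; its small-h behaviour carries Morris-like derivative information while its large-h plateau relates to Sobol-like variance information.

      import numpy as np

      def response(x):
          # toy model response surface over one factor on [0, 1]
          return np.sin(6.0 * np.pi * x) + 0.5 * x

      def variogram(y, dx, lags):
          # gamma(h) = 0.5 * mean[(y(x+h) - y(x))^2] on a regular sample
          gam = [0.5 * np.mean((y[l:] - y[:-l]) ** 2) for l in lags]
          return [l * dx for l in lags], gam

      n = 1001
      dx = 1.0 / (n - 1)
      y = response(np.arange(n) * dx)
      h, gam = variogram(y, dx, lags=range(1, 101))
      print("gamma at h=%.3f: %.4f" % (h[0], gam[0]))
      print("gamma at h=%.3f: %.4f" % (h[-1], gam[-1]))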

  7. Combined magnetic vector-scalar potential finite element computation of 3D magnetic field and performance of modified Lundell alternators in Space Station applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Ren H.

    1991-01-01

    A method of combined use of magnetic vector potential (MVP) based finite element (FE) formulations and magnetic scalar potential (MSP) based FE formulations for computation of three-dimensional (3D) magnetostatic fields is developed. This combined MVP-MSP 3D-FE method leads to a considerable reduction, by nearly a factor of 3, in the number of unknowns in comparison to global MVP based FE solutions. This method allows one to incorporate portions of iron cores sandwiched in between coils (conductors) in current-carrying regions. Thus, it greatly simplifies the geometries of current-carrying regions (in comparison with exclusive MSP based methods) in electric machinery applications. A unique feature of this approach is that the global MSP solution is single-valued in nature, that is, no branch cut is needed. This is another advantage over exclusive MSP based methods. A Newton-Raphson procedure with a concept of an adaptive relaxation factor was developed and successfully used in solving the 3D-FE problem with magnetic material anisotropy and nonlinearity. Accordingly, this combined MVP-MSP 3D-FE method is most suited for the solution of large-scale, global-type magnetic field computations in rotating electric machinery with very complex magnetic circuit geometries, as well as nonlinear and anisotropic material properties.

  8. Advanced computer technology - An aspect of the Terminal Configured Vehicle program. [air transportation capacity, productivity, all-weather reliability and noise reduction improvements

    NASA Technical Reports Server (NTRS)

    Berkstresser, B. K.

    1975-01-01

    NASA is conducting a Terminal Configured Vehicle program to provide improvements in the air transportation system such as increased system capacity and productivity, increased all-weather reliability, and reduced noise. A typical jet transport has been equipped with highly flexible digital display and automatic control equipment to study operational techniques for conventional takeoff and landing aircraft. The present airborne computer capability of this aircraft employs a multiple computer simple redundancy concept. The next step is to proceed from this concept to a reconfigurable computer system which can degrade gracefully in the event of a failure, adjust critical computations to remaining capacity, and reorder itself, in the case of transients, to the highest order of redundancy and reliability.

  9. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    SciTech Connect

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite differences (FD) and finite elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is
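
    The partition-of-unity mechanism can be demonstrated in one dimension: coarse-mesh hat functions sum to one everywhere, so multiplying each hat by a known local "atomic" function adds that physics to the basis without breaking conformity. In the Python sketch below (all functions and mesh sizes invented for illustration, with the enrichment deliberately chosen equal to the target, the idealized limit of building known physics into the basis), a cusp-shaped target that plain linear elements resolve poorly is captured to near machine precision once the basis is enriched, echoing the order-of-magnitude basis reductions reported above.

      import numpy as np

      target = lambda x: np.exp(-5.0 * np.abs(x - 0.5))   # cusp-like "orbital"

      def hat_basis(x, nodes):
          # piecewise-linear hats on a uniform mesh; they sum to 1 (a PU)
          h = nodes[1] - nodes[0]
          return np.maximum(0.0, 1.0 - np.abs(x[:, None] - nodes[None, :]) / h)

      x = np.linspace(0.0, 1.0, 2001)
      N = hat_basis(x, np.linspace(0.0, 1.0, 11))   # coarse 10-element mesh

      # enriched basis: every hat times the known local function psi
      psi = target(x)
      A_fem = N
      A_pu = np.hstack([N, N * psi[:, None]])

      for name, A in (("standard FE", A_fem), ("PU-enriched", A_pu)):
          c, *_ = np.linalg.lstsq(A, target(x), rcond=None)
          err = np.max(np.abs(A @ c - target(x)))
          print("%-12s dofs: %2d   max error: %.2e" % (name, A.shape[1], err))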

  10. NORIA-SP: A finite element computer program for analyzing liquid water transport in porous media; Yucca Mountain Site Characterization Project

    SciTech Connect

    Hopkins, P.L.; Eaton, R.R.; Bixler, N.E.

    1991-12-01

    A family of finite element computer programs has been developed at Sandia National Laboratories (SNL), most recently NORIA-SP. The original NORIA code solves a total of four transport equations simultaneously: liquid water, water vapor, air, and energy. Consequently, use of NORIA is computer-intensive. Since many of the applications for which NORIA is used are isothermal, we decided to "strip" the original four-equation version, leaving only the liquid water equation. This single-phase version is NORIA-SP. The primary intent of this document is to provide the user of NORIA-SP an accurate user's manual. Consequently, the reader should refer to the NORIA manual if additional detail is required regarding the equation development and finite element methods used. The single-equation version of the NORIA code (NORIA-SP) has been used most frequently for analyzing various hydrological scenarios for the potential underground nuclear waste repository at Yucca Mountain in western Nevada. These analyses are generally performed assuming a composite model to represent the fractured geologic media. In this model the material characteristics of the matrix and the fractures are area-weighted to obtain equivalent material properties. Pressure equilibrium between the matrix and fractures is assumed so a single conservation equation can be solved. NORIA-SP is structured to accommodate the composite model. The equations for water velocities in both the rock matrix and the fractures are presented. To use the code for problems involving a single, nonfractured porous material, the user can simply set the area of the fractures to zero.
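
    The composite model described above amounts to a simple area weighting, sketched below in Python with invented property values: matrix and fracture conductivities are combined into an equivalent conductivity, pressure equilibrium supplies a single shared head gradient, and separate Darcy velocities are recovered for each continuum. Setting the fracture area fraction to zero recovers the single-material case, as the abstract notes.

      A_f = 1e-4                            # fracture area fraction (0 = no fractures)
      K_matrix, K_fracture = 1e-11, 1e-6    # hydraulic conductivities (m/s)
      K_equiv = (1.0 - A_f) * K_matrix + A_f * K_fracture

      grad_h = 0.1                          # head gradient shared by both continua
      v_matrix = -K_matrix * grad_h         # Darcy velocity in the rock matrix
      v_fracture = -K_fracture * grad_h     # Darcy velocity in the fractures
      v_bulk = -K_equiv * grad_h
      print(K_equiv, v_matrix, v_fracture, v_bulk)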

  11. Material Characterization and Geometric Segmentation of a Composite Structure Using Microfocus X-Ray Computed Tomography Image-Based Finite Element Modeling

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Roth, D. J.; Cotton, R.; Studor, George F.; Christiansen, Eric; Young, P. C.

    2011-01-01

    This study utilizes microfocus x-ray computed tomography (CT) slice sets to model and characterize the damage locations and sizes in thermal protection system materials that underwent impact testing. ScanIP/FE software is used to visualize and process the slice sets, followed by mesh generation on the segmented volumetric rendering. Then, the local stress fields around several of the damaged regions are calculated for realistic mission profiles that subject the sample to extreme temperature and other severe environmental conditions. The resulting stress fields are used to quantify damage severity and make an assessment as to whether damage that did not penetrate to the base material can still result in catastrophic failure of the structure. It is expected that this study will demonstrate that finite element modeling based on an accurate three-dimensional rendered model from a series of CT slices is an essential tool to quantify the internal macroscopic defects and damage of a complex system made out of thermal protection material. Results showing details of the segmented images, three-dimensional volume-rendered models, the finite element meshes generated, and the resulting thermomechanical stress state due to impact loading are presented and discussed. Further, this study is conducted to exhibit certain high-caliber capabilities that the nondestructive evaluation (NDE) group at NASA Glenn Research Center can offer to assist in assessing the structural durability of such highly specialized materials so improvements in their performance and capacities to handle harsh operating conditions can be made.

  12. Enhancement of photoacoustic tomography by ultrasonic computed tomography based on optical excitation of elements of a full-ring transducer array

    PubMed Central

    Xia, Jun; Huang, Chao; Maslov, Konstantin; Anastasio, Mark A.; Wang, Lihong V.

    2014-01-01

    Photoacoustic computed tomography (PACT) is a hybrid technique that combines optical excitation and ultrasonic detection to provide high resolution images in deep tissues. In the image reconstruction, a constant speed of sound (SOS) is normally assumed. This assumption, however, is often not strictly satisfied in deep tissue imaging, due to acoustic heterogeneities within the object and between the object and coupling medium. If these heterogeneities are not accounted for, they will cause distortions and artifacts in the reconstructed images. In this paper, we incorporated ultrasonic computed tomography (USCT), which measures the SOS distribution within the object, into our full-ring array PACT system. Without the need for ultrasonic transmitting electronics, USCT was performed using the same laser beam as for PACT measurement. By scanning the laser beam on the array surface, we can sequentially fire different elements. As a first demonstration of the system, we studied the effect of acoustic heterogeneities on photoacoustic vascular imaging. We verified that constant SOS is a reasonable approximation when the SOS variation is small. When the variation is large, distortion will be observed in the periphery of the object, especially in the tangential direction. PMID:24104670

  13. Language Arts Scenarios Using "Aspects."

    ERIC Educational Resources Information Center

    Hamstra, Diane

    1993-01-01

    Outlines how the computer groupware program for Macintosh computers called "Aspects" was used at every level of a K-12 school system to enhance collaborative writing and writing skills. Imagines possible future uses of the program in linking classrooms from different areas of the world. (HB)

  14. Computationally efficient magnetic resonance imaging based surface contact modeling as a tool to evaluate joint injuries and outcomes of surgical interventions compared to finite element modeling.

    PubMed

    Johnson, Joshua E; Lee, Phil; McIff, Terence E; Toby, E Bruce; Fischer, Kenneth J

    2014-04-01

    Joint injuries and the resulting posttraumatic osteoarthritis (OA) are a significant problem. There is still a need for tools to evaluate joint injuries, their effect on joint mechanics, and the relationship between altered mechanics and OA. Better understanding of injuries and their relationship to OA may aid in the development or refinement of treatment methods. This may be partially achieved by monitoring changes in joint mechanics that are a direct consequence of injury. Techniques such as image-based finite element modeling can provide in vivo joint mechanics data but can also be laborious and computationally expensive. Alternate modeling techniques that can provide similar results in a computationally efficient manner are an attractive prospect. It is likely possible to estimate risk of OA due to injury from surface contact mechanics data alone. The objective of this study was to compare joint contact mechanics from image-based surface contact modeling (SCM) and finite element modeling (FEM) in normal, injured (scapholunate ligament tear), and surgically repaired radiocarpal joints. Since FEM is accepted as the gold standard to evaluate joint contact stresses, our assumption was that results obtained using this method would accurately represent the true value. Magnetic resonance images (MRI) of the normal, injured, and postoperative wrists of three subjects were acquired when relaxed and during functional grasp. Surface and volumetric models of the radiolunate and radioscaphoid articulations were constructed from the relaxed images for SCM and FEM analyses, respectively. Kinematic boundary conditions were acquired from image registration between the relaxed and grasp images. For the SCM technique, a linear contact relationship was used to estimate contact outcomes based on interactions of the rigid articular surfaces in contact. For FEM, a pressure-overclosure relationship was used to estimate outcomes based on deformable body contact interactions. The SCM
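
    The "linear contact relationship" for rigid articular surfaces reduces to pressure proportional to overclosure. The Python sketch below is a schematic 1-D analogue with invented geometry and stiffness, not the authors' wrist models: wherever the registered surfaces interpenetrate, the penetration depth is scaled by a contact factor to estimate pressure, which is what makes SCM so much cheaper than a deformable-body FEM solve.

      import numpy as np

      k_contact = 20.0                      # contact factor (MPa per mm overclosure)
      x = np.linspace(-5.0, 5.0, 201)       # position across the articulation (mm)

      # signed gap between registered surfaces; negative values = overclosure
      gap = 0.03 * x**2 - 0.1
      pressure = k_contact * np.maximum(-gap, 0.0)

      print("peak contact pressure: %.2f MPa" % pressure.max())
      print("fraction of surface in contact: %.2f" % np.mean(pressure > 0.0))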

  15. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    NASA Astrophysics Data System (ADS)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

    We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permit incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator and Jacobian calculations, as well as synthetic and real data inversions, are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated, which allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double-brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, MT forward response and Jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers a reduction in memory requirements and a significant speedup of the parameter step versus the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic
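
    The saving from the data-space formulation follows from a standard identity: with Tikhonov damping, (J^T J + lambda*I)^(-1) J^T r = J^T (J J^T + lambda*I)^(-1) r, so the model update can be obtained from an N x N solve (N data) rather than an M x M solve (M model cells), a large win when M >> N. The Python sketch below (random small Jacobian, identity regularization for simplicity) verifies the equivalence numerically.

      import numpy as np

      rng = np.random.default_rng(0)
      N, M, lam = 50, 2000, 0.1
      J = rng.standard_normal((N, M))     # Jacobian: N responses, M model cells
      r = rng.standard_normal(N)          # data residual

      # model-space update: one M x M linear solve
      dm_model = np.linalg.solve(J.T @ J + lam * np.eye(M), J.T @ r)
      # data-space update: one N x N linear solve
      dm_data = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(N), r)

      print("max difference between updates:", np.max(np.abs(dm_model - dm_data)))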

  16. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
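
    The same exact-symbolic pipeline can be reproduced today in a few lines. The SymPy sketch below (a modern stand-in for the symbolic system the report describes) derives the strain-displacement matrix and stiffness matrix of a two-node bar element by exact integration, then emits Fortran for one entry, mirroring the derive-then-generate-FORTRAN workflow.

      import sympy as sp

      x, L, E, A = sp.symbols('x L E A', positive=True)
      N = sp.Matrix([[1 - x / L, x / L]])           # linear shape functions
      B = N.diff(x)                                 # strain-displacement matrix
      k = (E * A * B.T * B).integrate((x, 0, L))    # exact element stiffness
      sp.pprint(k)                                  # -> E*A/L * [[1, -1], [-1, 1]]
      print(sp.fcode(k[0, 0]))                      # automatic Fortran generation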

  17. Synthesis, spectroscopic, cytotoxic aspects and computational study of N-(pyridine-2-ylmethylene)benzo[d]thiazol-2-amine Schiff base and some of its transition metal complexes

    NASA Astrophysics Data System (ADS)

    Abd El-Aziz, Dina M.; Etaiw, Safaa Eldin H.; Ali, Elham A.

    2013-09-01

    N-(pyridine-2-ylmethylene)benzo[d]thiazol-2-amine Schiff base (L) and its Cu(II), Fe(III), Co(II), Ni(II) and Zn(II) complexes were synthesized and characterized by a set of chemical and spectroscopic measurements using elemental analysis, electrical conductance, mass spectra, magnetic susceptibility and spectral techniques (IR, UV-Vis, 1H NMR). Elemental and mass spectrometric data are consistent with the proposed formula. IR spectra confirm the bidentate nature of the Schiff base ligand. Octahedral geometry around Cu(II), Fe(III), Ni(II) and Zn(II), as well as tetrahedral geometry around Co(II), was suggested by UV-Vis spectra and magnetic moment data. The thermal degradation behavior of the Schiff base and its complexes was investigated by thermogravimetric analysis. The structure of the Schiff base and its transition metal complexes was also studied theoretically using molecular mechanics (MM+). The obtained structures were minimized with a semi-empirical (PM3) method. The in vitro antitumor activity of the synthesized compounds was studied. The Zn complex exhibits a significant decrease in the surviving fraction of the breast carcinoma (MCF 7), liver carcinoma (HEPG2), colon carcinoma (HCT116) and larynx carcinoma (HEP2) human cancer cell lines.

  18. POTHMF: A program for computing potential curves and matrix elements of the coupled adiabatic radial equations for a hydrogen-like atom in a homogeneous magnetic field

    NASA Astrophysics Data System (ADS)

    Chuluunbaatar, O.; Gusev, A. A.; Gerdt, V. P.; Rostovtsev, V. A.; Vinitsky, S. I.; Abrashkevich, A. G.; Kaschiev, M. S.; Serov, V. V.

    2008-02-01

    A FORTRAN 77 program is presented which calculates, with relative machine precision, potential curves and matrix elements of the coupled adiabatic radial equations for a hydrogen-like atom in a homogeneous magnetic field. The potential curves are eigenvalues corresponding to the angular oblate spheroidal functions that compose the adiabatic basis, which depends on the radial variable as a parameter. The matrix elements of radial coupling are integrals in angular variables of the following two types: product of angular functions and the first derivative of angular functions in parameter, and product of the first derivatives of angular functions in parameter, respectively. The program also calculates the angular part of the dipole transition matrix elements (in the length form) expressed as integrals in angular variables involving the product of a dipole operator and angular functions. Moreover, the program calculates asymptotic regular and irregular matrix solutions of the coupled adiabatic radial equations at the end of the interval in the radial variable, needed for solving a multi-channel scattering problem by the generalized R-matrix method. Potential curves and radial matrix elements computed by the POTHMF program can be used for solving the bound state and multi-channel scattering problems. As a test deck, the program is applied to the calculation of the energy values, a short-range reaction matrix and corresponding wave functions with the help of the KANTBP program. Benchmark calculations for the known photoionization cross-sections are presented. Program summary: Program title: POTHMF. Catalogue identifier: AEAA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 8123. No. of bytes in distributed program, including test data

  19. Design of an Electrostatic Comb Actuator Based on Finite Element Method

    NASA Astrophysics Data System (ADS)

    Mon, Thet Thet; Ghazalli, Zakri; Ahmad, Asnul Hadi; Ismail, Mohd Fazli; Muhamad, Khairul Fikri

    2011-05-01

    Electrostatic comb actuators were designed using finite element modeling and analysis, the so-called finite element method (FEM). The design objective was to generate maximum actuating force within the constraints. 2D and 3D FE models of the comb structures were developed in a general-purpose FE code. The element geometries were 4-node plate elements for the 2D model and 8-node brick elements for the 3D models. Electrostatic field strength and voltage analyses of the FE models were performed to compute the generated voltage and electrostatic force in the structure. A structural analysis was then performed to examine the structural response to the electrostatic force. The initial finite element model was verified against published experimental results. Based on the amount of force generated and the lateral deflection of the comb fingers, the best possible design was determined. The finite element computations show that a comb structure having a high aspect ratio with smaller gaps can provide higher actuation force.
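
    The concluding trend, more force from higher aspect ratio and smaller gaps, is already visible in the first-order parallel-plate estimate of lateral comb-drive force, F = n*eps0*t*V^2/g, which grows with structure thickness t and shrinks with gap g. A quick numerical check in Python (all values assumed for illustration):

      eps0 = 8.854e-12            # permittivity of free space (F/m)
      n = 50                      # number of comb finger pairs
      t, g = 50e-6, 2e-6          # structure thickness and finger gap (m)
      V = 40.0                    # drive voltage (V)

      F = n * eps0 * t * V**2 / g
      print("lateral actuation force: %.1f uN" % (F * 1e6))   # ~17.7 uN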

  20. Recent advances in the application of computer-controlled optical finishing to produce very high-quality transmissive optical elements and windows

    NASA Astrophysics Data System (ADS)

    Askinazi, Joel; Estrin, Aleksandr; Green, Alan; Turner, Aaron N.

    2003-09-01

    Large aperture (20-inch diameter) sapphire optical windows have been identified as a key element of new and/or upgraded airborne electro-optical systems. These windows typically require a transmitted wave front error of much less than 0.1 waves rms @ 0.63 microns over 7-inch diameter sub-apertures. Large aperture (14-inch diameter by 4-inch thick) sapphire substrates have also been identified as a key optical element of the Laser Interferometer Gravitational Wave Observatory (LIGO). This project is under joint development by the California Institute of Technology (Caltech) and the Massachusetts Institute of Technology under cooperative agreement with the National Science Foundation (NSF). These substrates are required to have a transmitted wave front error of 20 nm (0.032 waves) rms @ 0.63 microns over 6-inch sub-apertures, with a desired error of 10 nm (0.016 waves) rms. Owing to the spatial variations in the optical index of refraction potentially anticipated within 20-inch diameter, thin (0.25 - 0.5-inch) sapphire window substrates, as well as within the 14-inch diameter by 4-inch thick substrates for the LIGO application, our experience tells us that the required transmitted wave front errors cannot be achieved with standard optical finishing techniques, as these cannot readily compensate for errors introduced by inherent material characteristics. Computer-controlled optical finishing has been identified as a key technology likely required to enable achievement of the required transmitted wave front errors. Goodrich has developed this technology and has previously applied it to finish high quality sapphire optical windows with a range of aperture sizes from 4-inch to 13-inch, achieving transmitted wavefront errors comparable to these new requirements. This paper addresses successful recent developments and accomplishments in the application of this optical finishing technology to sequentially larger aperture and thicker sapphire windows to achieve the

  1. Regulatory aspects

    NASA Astrophysics Data System (ADS)

    Stern, Arthur M.

    1986-07-01

    At this time, there is no US legislation that is specifically aimed at regulating the environmental release of genetically engineered organisms or their modified components, either during the research and development stage or during application. There are some statutes, administered by several federal agencies, whose language is broad enough to allow the extension of intended coverage to include certain aspects of biotechnology. The one possible exception is FIFRA, which has already brought about the registration of several natural microbial pesticides but which also has provision for requiring the registration of “strain improved” microbial pesticides. Nevertheless, there may be gaps in coverage even if all pertinent statutes were to be actively applied to the control of environmental release of genetically modified substances. The decision to regulate biotechnology under TSCA was justified, in part, on the basis of its intended role as a gap-filling piece of environmental legislation. The advantage of regulating biotechnology under TSCA is that this statute, unlike others, is concerned with all media of exposure (air, water, soil, sediment, biota) that may pose health and environmental hazards. Experience may show that extending existing legislation to regulate biotechnology is a poor compromise compared to the promulgation of new legislation specifically designed for this purpose. It appears that many other countries are ultimately going to take the latter course to regulate biotechnology.

  2. Development and validation of a computational finite element model of the rabbit upper airway: simulations of mandibular advancement and tracheal displacement.

    PubMed

    Amatoury, Jason; Cheng, Shaokoon; Kairaitis, Kristina; Wheatley, John R; Amis, Terence C; Bilston, Lynne E

    2016-04-01

    The mechanisms leading to upper airway (UA) collapse during sleep are complex and poorly understood. We previously developed an anesthetized rabbit model for studying UA physiology. On the basis of this body of physiological data, we aimed to develop and validate a two-dimensional (2D) computational finite element model (FEM) of the passive rabbit UA and peripharyngeal tissues. Model geometry was reconstructed from a midsagittal computed tomographic image of a representative New Zealand White rabbit, which included major soft (tongue, soft palate, constrictor muscles), cartilaginous (epiglottis, thyroid cartilage), and bony pharyngeal tissues (mandible, hard palate, hyoid bone). Other UA muscles were modeled as linear elastic connections. Initial boundary and contact definitions were defined from anatomy and material properties derived from the literature. Model parameters were optimized to physiological data sets associated with mandibular advancement (MA) and caudal tracheal displacement (TD), including hyoid displacement, which featured with both applied loads. The model was then validated against independent data sets involving combined MA and TD. Model outputs included UA lumen geometry, peripharyngeal tissue displacement, and stress and strain distributions. Simulated MA and TD resulted in UA enlargement and nonuniform increases in tissue displacement, and stress and strain. Model predictions closely agreed with experimental data for individually applied MA, TD, and their combination. We have developed and validated an FEM of the rabbit UA that predicts UA geometry and peripharyngeal tissue mechanical changes associated with interventions known to improve UA patency. The model has the potential to advance our understanding of UA physiology and peripharyngeal tissue mechanics. PMID:26769952

  3. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.

  4. Computer Lab Configuration.

    ERIC Educational Resources Information Center

    Wodarz, Nan

    2003-01-01

    Describes the layout and elements of an effective school computer lab. Includes configuration, storage spaces, cabling and electrical requirements, lighting, furniture, and computer hardware and peripherals. (PKP)

  5. Curved Beam Computed Tomography based Structural Rigidity Analysis of Bones with Simulated Lytic Defect: A Comparative Study with Finite Element Analysis.

    PubMed

    Oftadeh, R; Karimi, Z; Villa-Camacho, J; Tanck, E; Verdonschot, N; Goebel, R; Snyder, B D; Hashemi, H N; Vaziri, A; Nazarian, A

    2016-01-01

    In this paper, a CT based structural rigidity analysis (CTRA) method that incorporates bone intrinsic local curvature is introduced to assess the compressive failure load of human femur with simulated lytic defects. The proposed CTRA is based on a three dimensional curved beam theory to obtain critical stresses within the human femur model. To test the proposed method, ten human cadaveric femurs with and without simulated defects were mechanically tested under axial compression to failure. Quantitative computed tomography images were acquired from the samples, and CTRA and finite element analysis were performed to obtain the failure load as well as rigidities in both straight and curved cross sections. Experimental results were compared to the results obtained from FEA and CTRA. The failure loads predicted by curved beam CTRA and FEA are in agreement with experimental results. The results also show that the proposed method is an efficient and reliable method to find both the location and magnitude of failure load. Moreover, the results show that the proposed curved CTRA outperforms the regular straight beam CTRA, which ignores the bone intrinsic curvature, and can be used as a useful tool in clinical practice. PMID:27585495

  6. Three-dimensional finite element analysis of unilateral mastication in malocclusion cases using cone-beam computed tomography and a motion capture system

    PubMed Central

    2016-01-01

    Purpose Stress distribution and mandible distortion during lateral movements are known to be closely linked to bruxism, dental implant placement, and temporomandibular joint disorder. The present study was performed to determine stress distribution and distortion patterns of the mandible during lateral movements in Class I, II, and III relationships. Methods Five Korean volunteers (one normal, two Class II, and two Class III occlusion cases) were selected. Finite element (FE) modeling was performed using information from cone-beam computed tomographic (CBCT) scans of the subjects’ skulls, scanned images of dental casts, and incisor movement captured by an optical motion-capture system. Results In the Class I and II cases, maximum stress load occurred at the condyle of the balancing side, but, in the Class III cases, the maximum stress was loaded on the condyle of the working side. Maximum distortion was observed on the menton at the midline in every case, regardless of loading force. The distortion was greatest in Class III cases and smallest in Class II cases. Conclusions The stress distribution along and accompanying distortion of a mandible seems to be affected by the anteroposterior position of the mandible. Additionally, 3-D modeling of the craniofacial skeleton using CBCT and an optical laser scanner and reproduction of mandibular movement by way of the optical motion-capture technique used in this study are reliable techniques for investigating the masticatory system. PMID:27127690

  7. VIBA-Lab 3.0: Computer program for simulation and semi-quantitative analysis of PIXE and RBS spectra and 2D elemental maps

    NASA Astrophysics Data System (ADS)

    Orlić, Ivica; Mekterović, Darko; Mekterović, Igor; Ivošević, Tatjana

    2015-11-01

    VIBA-Lab is a computer program originally developed by the author and co-workers at the National University of Singapore (NUS) as an interactive software package for simulation of Particle Induced X-ray Emission and Rutherford Backscattering Spectra. The original program has been redeveloped into VIBA-Lab 3.0, in which the user can perform semi-quantitative analysis by comparing simulated and measured spectra as well as simulate 2D elemental maps for a given 3D sample composition. The latest version has a new and more versatile user interface. It also has the latest data set of fundamental parameters such as Coster-Kronig transition rates, fluorescence yields, mass absorption coefficients and ionization cross sections for K and L lines in a wider energy range than the original program. Our short-term plan is to introduce a routine for quantitative analysis of multiple PIXE and XRF excitations. VIBA-Lab is an excellent teaching tool for students and researchers in using PIXE and RBS techniques. At the same time the program helps when planning an experiment and when optimizing experimental parameters such as incident ions, their energy, detector specifications, filters, geometry, etc. By "running" a virtual experiment the user can test various scenarios until the optimal PIXE and BS spectra are obtained, in this way saving a lot of expensive machine time.

  8. Investigation and optimization of a finite element simulation of transducer array systems for 3D ultrasound computer tomography with respect to electrical impedance characteristics

    NASA Astrophysics Data System (ADS)

    Kohout, B.; Pirinen, J.; Ruiter, N. V.

    2012-03-01

    The established standard screening method to detect breast cancer is X-ray mammography. However, X-ray mammography often has low contrast for tumors located within glandular tissue. A new approach is 3D Ultrasound Computer Tomography (USCT), which is expected to detect small tumors at an early stage. This paper describes the development, improvement and results of Finite Element Method (FEM) simulations of the Transducer Array System (TAS) used in our 3D USCT. The focus of this work is the influence of meshing and material parameters on the electrical impedance curves. These findings are then used to optimize the simulation model. The quality of the simulation was evaluated by comparing simulated impedance characteristics with measured data from the real TAS. The resulting FEM simulation model is a powerful tool to analyze and optimize transducer array systems applied in USCT. With this simulation model, the behavior of the TAS under different geometry modifications was investigated. It provides a means to understand the acoustic performance inside any ultrasound transducer represented by its electrical impedance characteristic.

  9. Characterization of the deformation behavior of intermediate porosity interconnected Ti foams using micro-computed tomography and direct finite element modeling.

    PubMed

    Singh, R; Lee, P D; Lindley, T C; Kohlhauser, C; Hellmich, C; Bram, M; Imwinkelried, T; Dashwood, R J

    2010-06-01

    Under load-bearing conditions metal-based foam scaffolds are currently the preferred choice as bone/cartilage implants. In this study X-ray micro-computed tomography was used to discretize the three-dimensional structure of a commercial titanium foam used in spinal fusion devices. Direct finite element modeling, continuum micromechanics and analytical models of the foam were employed to characterize the elasto-plastic deformation behavior. These results were validated against experimental measurements, including ultrasound and monotonic and interrupted compression testing. Interrupted compression tests demonstrated localized collapse of pores unfavorably oriented with respect to the loading direction at many isolated locations, unlike the Gibson-Ashby model, in which pores collapse row by row. A principal component analysis technique was developed to quantify the pore anisotropy, which was then related to the yield stress anisotropy, indicating which isolated pores will collapse first. The Gibson-Ashby model was extended to incorporate this anisotropy by considering an orthorhombic, rather than a tetragonal, unit cell. It is worth noting that natural bone is highly anisotropic and there is a need to develop and characterize anisotropic implants that mimic bone characteristics. PMID:19961958
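
    The pore-anisotropy quantification via principal component analysis can be sketched directly: eigen-decomposition of the covariance of pore voxel coordinates yields principal axis scales whose ratio measures elongation. The Python fragment below uses a synthetic ellipsoidal voxel cloud in place of the segmented micro-CT data.

      import numpy as np

      rng = np.random.default_rng(1)
      # synthetic pore: voxel cloud elongated along the first axis
      voxels = rng.standard_normal((5000, 3)) * np.array([4.0, 1.5, 1.0])

      cov = np.cov(voxels.T)                 # 3x3 covariance of coordinates
      evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
      scales = np.sqrt(evals[::-1])          # principal semi-axis scales
      print("principal axis scales:", scales)
      print("anisotropy ratio (major/minor): %.2f" % (scales[0] / scales[-1]))
      print("major axis direction:", evecs[:, -1])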

  10. Theoretical, experimental, and computational aspects of optical property determination of turbid media by using frequency-domain laser infrared photothermal radiometry.

    PubMed

    Nicolaides, L; Chen, Y; Mandelis, A; Vitkin, I A

    2001-10-01

    In this work, the optical and thermal properties of tissuelike materials are measured by using frequency-domain infrared photothermal radiometry. This technique is better suited for quantitative multiparameter optical measurements than the widely used pulsed-laser photothermal radiometry (PPTR) because of the availability of two independent signal channels, amplitude and phase, and the superior signal-to-noise ratio provided by synchronous lock-in detection. A rigorous three-dimensional (3-D) thermal-wave formulation with a 3-D diffuse and coherent photon-density-wave source is applied to data from model phantoms. The combined theoretical, experimental, and computational methodology shows good promise with regard to its analytical ability to measure optical properties of turbid media uniquely, as compared with PPTR, which exhibits uniqueness problems. From data sets obtained by using calibrated test phantoms, the reduced optical scattering and absorption coefficients were found to be within 20% and 10%, respectively, of the values independently derived by using Mie theory and spectrophotometric measurements. PMID:11583272

  11. The individual element test revisited

    NASA Technical Reports Server (NTRS)

    Militello, Carmelo; Felippa, Carlos A.

    1991-01-01

    The subject of the patch test for finite elements retains several unsettled aspects. In particular, the issue of one-element versus multielement tests needs clarification. Following a brief historical review, we present the individual element test (IET) of Bergan and Hanssen in an expanded context that encompasses several important classes of new elements. The relationship of the IET to the multielement forms A, B, and C of the patch test and to the single element test are clarified.

  12. Aspects of emergent symmetries

    NASA Astrophysics Data System (ADS)

    Gomes, Pedro R. S.

    2016-03-01

    These are intended to be review notes on emergent symmetries, i.e. symmetries which manifest themselves in specific sectors of energy in many systems. The emphasis is on the physical aspects rather than computation methods. We include some background material and go through more recent problems in field theory, statistical mechanics and condensed matter. These problems illustrate how some important symmetries, such as Lorentz invariance and supersymmetry, usually believed to be fundamental, can arise naturally in low-energy regimes of systems involving a large number of degrees of freedom. The aim is to discuss how these examples could help us to face other complex and fundamental problems.

  13. Revolution in Orthodontics: Finite element analysis

    PubMed Central

    Singh, Johar Rajvinder; Kambalyal, Prabhuraj; Jain, Megha; Khandelwal, Piyush

    2016-01-01

    Engineering has not only developed in the field of medicine but has also become quite established in the field of dentistry, especially Orthodontics. Finite element analysis (FEA) is a computational procedure to calculate the stress in an element, which performs a model solution. This structural analysis allows the determination of stress resulting from external force, pressure, thermal change, and other factors. This method is extremely useful for indicating mechanical aspects of biomaterials and human tissues that can hardly be measured in vivo. The results obtained can then be studied using visualization software within the finite element method (FEM) to view a variety of parameters, and to fully identify implications of the analysis. This is a review to show the applications of FEM in Orthodontics. It is extremely important to verify what the purpose of the study is in order to correctly apply FEM. PMID:27114948

  14. Computer Applications in Geotechnical Engineering (CAGE) and Geotechnical aspects of the Computer-Aided Structural Engineering (G-CASE) projects. User's guide. UTEXAS2 slope-stability package. Volume 2: Theory

    NASA Astrophysics Data System (ADS)

    Edris, Earl V., Jr.; Wright, Stephen G.

    1989-02-01

    This report describes the theory of a two-dimensional slope-stability analysis and covers the mechanics of limit-equilibrium procedures that utilize the wedge and slices methods. The mechanics of the four different procedures contained in the computer program UTEXAS2 (University of Texas Analysis of Slopes - version 2) are discussed as well as sources of potential errors. The limit-equilibrium equations and the calculation procedures used in the program are described in an appendix.

  15. Aspects of Plant Intelligence

    PubMed Central

    TREWAVAS, ANTHONY

    2003-01-01

    Intelligence is not a term commonly used when plants are discussed. However, I believe that this is an omission based not on a true assessment of the ability of plants to compute complex aspects of their environment, but solely a reflection of a sessile lifestyle. This article, which is admittedly controversial, attempts to raise many issues that surround this area. To commence use of the term intelligence with regard to plant behaviour will lead to a better understanding of the complexity of plant signal transduction and the discrimination and sensitivity with which plants construct images of their environment, and raises critical questions concerning how plants compute responses at the whole‐plant level. Approaches to investigating learning and memory in plants will also be considered. PMID:12740212

  16. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.
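
    Synthesis routines of this kind typically rest on closed forms such as the Wheeler/Hammerstad equations, which invert the characteristic impedance for the strip width-to-height ratio. The Python sketch below implements one common textbook version (narrow- and wide-strip branches); it is illustrative, not a transcription of the FORTRAN programs in the report.

      import math

      def microstrip_w_over_h(z0, er):
          # narrow-strip branch (valid where the result is < 2)
          A = (z0 / 60.0) * math.sqrt((er + 1.0) / 2.0) \
              + (er - 1.0) / (er + 1.0) * (0.23 + 0.11 / er)
          wh = 8.0 * math.exp(A) / (math.exp(2.0 * A) - 2.0)
          if wh > 2.0:
              # wide-strip branch
              B = 377.0 * math.pi / (2.0 * z0 * math.sqrt(er))
              wh = (2.0 / math.pi) * (B - 1.0 - math.log(2.0 * B - 1.0)
                    + (er - 1.0) / (2.0 * er)
                    * (math.log(B - 1.0) + 0.39 - 0.61 / er))
          return wh

      # 50-ohm line on an er = 4.3 substrate -> W/h of roughly 1.9
      print("W/h = %.2f" % microstrip_w_over_h(50.0, 4.3))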

  17. Biomechanical effects of teriparatide in women with osteoporosis treated previously with alendronate and risedronate: results from quantitative computed tomography-based finite element analysis of the vertebral body.

    PubMed

    Chevalier, Yan; Quek, Evelyn; Borah, Babul; Gross, Gary; Stewart, John; Lang, Thomas; Zysset, Philippe

    2010-01-01

    Previous antiresorptive treatment may influence the anabolic response to teriparatide. The OPTAMISE (Open-label Study to Determine How Prior Therapy with Alendronate or Risedronate in Postmenopausal Women with Osteoporosis Influences the Clinical Effectiveness of Teriparatide) study reported greater increases in biochemical markers of bone turnover and volumetric bone mineral density (BMD) when 12 months of teriparatide treatment was preceded by 2 years or more of risedronate versus alendronate treatment. The objective of this study was to use quantitative computed tomography (CT)-based nonlinear finite element modeling to evaluate how prior therapy with alendronate or risedronate in postmenopausal women with osteoporosis influences the biomechanical effectiveness of teriparatide. Finite element models of the L1 vertebra were created from quantitative CT scans, acquired before and after 12 months of therapy with teriparatide, from 171 patients from the OPTAMISE study. These models were subjected to uniaxial compression. Total BMD-derived bone volume fraction (BV/TV(d), i.e., bone volume [BV]/total volume [TV]), estimated from quantitative CT-based volumetric BMD, vertebral stiffness, and failure load (strength) were calculated for each time measurement point. The results of this study demonstrated that 12 months of treatment with teriparatide following prior treatment with either risedronate or alendronate increased BMD-derived BV/TV(d), the predicted vertebral stiffness, and failure load. However, the effects of teriparatide were more pronounced in patients treated previously with risedronate, which is consistent with the findings of the OPTAMISE study. The mean (+/-standard error) increase in stiffness was greater in the prior risedronate group than the prior alendronate group (24.6+/-3.2% versus 14.4+/-2.8%, respectively; p=0.0073). Similarly, vertebral failure load increased by 27.2+/-3.5% in the prior risedronate group versus 15.3+/-3.1% in the prior

  18. Discordance between Prevalent Vertebral Fracture and Vertebral Strength Estimated by the Finite Element Method Based on Quantitative Computed Tomography in Patients with Type 2 Diabetes Mellitus

    PubMed Central

    2015-01-01

    Background Bone fragility is increased in patients with type 2 diabetes mellitus (T2DM), but a useful method to estimate bone fragility in T2DM patients is lacking because bone mineral density alone is not sufficient to assess the risk of fracture. This study investigated the association between prevalent vertebral fractures (VFs) and the vertebral strength index estimated by the quantitative computed tomography-based nonlinear finite element method (QCT-based nonlinear FEM) using multi-detector computed tomography (MDCT) for clinical practice use. Research Design and Methods A cross-sectional observational study was conducted on 54 postmenopausal women and 92 men over 50 years of age, all of whom had T2DM. The vertebral strength index was compared in patients with and without VFs confirmed by spinal radiographs. A standard FEM procedure was performed with the application of known parameters for the bone material properties obtained from nondiabetic subjects. Results A total of 20 women (37.0%) and 39 men (42.4%) with VFs were identified. The vertebral strength index was significantly higher in the men than in the women (P<0.01). Multiple regression analysis demonstrated that the vertebral strength index was significantly and positively correlated with the spinal bone mineral density (BMD) and inversely associated with age in both genders. There were no significant differences in the parameters, including the vertebral strength index, between patients with and without VFs. Logistic regression analysis adjusted for age, spine BMD, BMI, HbA1c, and duration of T2DM did not indicate a significant relationship between the vertebral strength index and the presence of VFs. Conclusion The vertebral strength index calculated by QCT-based nonlinear FEM using material property parameters obtained from nondiabetic subjects, whose risk of fracture is lower than that of T2DM patients, was not significantly associated with bone fragility in patients with T2DM. This discordance

  19. RegSEM: a versatile code based on the spectral element method to compute seismic wave propagation at the regional scale

    NASA Astrophysics Data System (ADS)

    Cupillard, Paul; Delavaud, Elise; Burgos, Gaël; Festa, Gaetano; Vilotte, Jean-Pierre; Capdeville, Yann; Montagner, Jean-Paul

    2012-03-01

    The spectral element method, which provides an accurate solution of the elastodynamic problem in heterogeneous media, is implemented in a code, called RegSEM, to compute seismic wave propagation at the regional scale. By regional scale we here mean distances ranging from about 1 km (local scale) to 90° (continental scale). The advantage of RegSEM resides in its ability to accurately take into account 3-D discontinuities such as the sediment-rock interface and the Moho. For this purpose, one version of the code handles local unstructured meshes and another version manages continental structured meshes. The wave equation can be solved in any velocity model, including anisotropy and intrinsic attenuation in the continental version. To validate the code, results from RegSEM are compared to analytical and semi-analytical solutions available in simple cases (e.g. explosion in PREM, plane wave in a hemispherical basin). In addition, realistic simulations of an earthquake in different tomographic models of Europe are performed. All these simulations show the great flexibility of the code and point out the large influence of the shallow layers on the propagation of seismic waves at the regional scale. RegSEM is written in Fortran 90 but it also contains a couple of C routines. It is an open-source software which runs on distributed memory architectures. It can give rise to interesting applications, such as testing regional tomographic models, developing tomography using either passive (i.e. noise correlations) or active (i.e. earthquakes) data, or improving our knowledge on effects linked with sedimentary basins.

  1. RETSCP: A computer program for analysis of rocket engine thermal strains with cyclic plasticity

    NASA Technical Reports Server (NTRS)

    Miller, R. W.

    1974-01-01

    A computer program, designated RETSCP, for the analysis of Rocket Engine Thermal Strain with Cyclic Plasticity is described. RETSCP is a finite element program which employs a three dimensional isoparametric element. The program treats elasto-plastic strain cycling including the effects of thermal and pressure loads and temperature dependent material properties. Theoretical aspects of the finite element method are discussed and the program logic is described. A RETSCP User's Manual is presented including sample case results.

  2. A direct element resequencing procedure

    NASA Technical Reports Server (NTRS)

    Akin, J. E.; Fulford, R. E.

    1978-01-01

    Element-by-element frontal solution algorithms are utilized in many of the existing finite element codes. The overall computational efficiency of this type of procedure is directly related to the element data input sequence. Thus, it is important to have a pre-processor which will resequence these data so as to reduce the element wavefronts encountered in the solution algorithm. A direct element resequencing algorithm for reducing element wavefronts is detailed. It also generates computational by-products that can be utilized in pre-front calculations and in various post-processors. Sample problems are presented and compared with other algorithms.
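
    The payoff of resequencing can be demonstrated with the closely related reverse Cuthill-McKee reordering available in SciPy (used here as an illustrative stand-in for the paper's algorithm): renumbering the nodes of a random sparse connectivity pattern shrinks the matrix bandwidth, and hence the wavefront a frontal solver must carry.

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import reverse_cuthill_mckee

      rng = np.random.default_rng(2)
      n, m = 200, 600
      i, j = rng.integers(0, n, m), rng.integers(0, n, m)
      A = csr_matrix((np.ones(m), (i, j)), shape=(n, n))
      A = A + A.T                              # symmetric connectivity pattern

      def bandwidth(M, order):
          r, c = M[order][:, order].nonzero()  # reordered sparsity pattern
          return int(np.max(np.abs(r - c)))

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)
      print("bandwidth before:", bandwidth(A, np.arange(n)))
      print("bandwidth after :", bandwidth(A, perm))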

  3. Exercises in Molecular Computing

    PubMed Central

    2014-01-01

    Conspectus The successes of electronic digital logic have transformed every aspect of human life over the last half-century. The word “computer” now signifies a ubiquitous electronic device, rather than a human occupation. Yet evidently humans, large assemblies of molecules, can compute, and it has been a thrilling challenge to develop smaller, simpler, synthetic assemblies of molecules that can do useful computation. When we say that molecules compute, what we usually mean is that such molecules respond to certain inputs, for example, the presence or absence of other molecules, in a precisely defined but potentially complex fashion. The simplest way for a chemist to think about computing molecules is as sensors that can integrate the presence or absence of multiple analytes into a change in a single reporting property. Here we review several forms of molecular computing developed in our laboratories. When we began our work, combinatorial approaches to using DNA for computing were used to search for solutions to constraint satisfaction problems. We chose to work instead on logic circuits, building bottom-up from units based on catalytic nucleic acids, focusing on DNA secondary structures in the design of individual circuit elements, and reserving the combinatorial opportunities of DNA for the representation of multiple signals propagating in a large circuit. Such circuit design directly corresponds to the intuition about sensors transforming the detection of analytes into reporting properties. While this approach was unusual at the time, it has been adopted since by other groups working on biomolecular computing with different nucleic acid chemistries. We created logic gates by modularly combining deoxyribozymes (DNA-based enzymes cleaving or combining other oligonucleotides), in the role of reporting elements, with stem–loops as input detection elements. For instance, a deoxyribozyme that normally exhibits an oligonucleotide substrate recognition region is

  4. Computational results for parallel unstructured mesh computations

    SciTech Connect

    Jones, M.T.; Plassmann, P.E.

    1994-12-31

    The majority of finite element models in structural engineering are composed of unstructured meshes. These unstructured meshes are often very large and require significant computational resources; hence they are excellent candidates for massively parallel computation. Parallel solution of the sparse matrices that arise from such meshes has been studied heavily, and many good algorithms have been developed. Unfortunately, many of the other aspects of parallel unstructured mesh computation have gone largely ignored. The authors present a set of algorithms that allow the entire unstructured mesh computation process to execute in parallel -- including adaptive mesh refinement, equation reordering, mesh partitioning, and sparse linear system solution. They briefly describe these algorithms and state results regarding their running-time and performance. They then give results from the 512-processor Intel DELTA for a large-scale structural analysis problem. These results demonstrate that the new algorithms are scalable and efficient. The algorithms are able to achieve up to 2.2 gigaflops for this unstructured mesh problem.

  5. Connectivity Measures in EEG Microstructural Sleep Elements

    PubMed Central

    Sakellariou, Dimitris; Koupparis, Andreas M.; Kokkinos, Vasileios; Koutroumanidis, Michalis; Kostopoulos, George K.

    2016-01-01

    During Non-Rapid Eye Movement sleep (NREM) the brain is relatively disconnected from the environment, while connectedness between brain areas is also decreased. Evidence indicates that these dynamic connectivity changes are delivered by microstructural elements of sleep: short periods of environmental stimuli evaluation followed by sleep-promoting procedures. The connectivity patterns of the latter, among other aspects of sleep microstructure, are still to be fully elucidated. We suggest here a methodology for the assessment and investigation of the connectivity patterns of EEG microstructural elements, such as sleep spindles. The methodology combines techniques at the preprocessing, estimation, error-assessment, and results-visualization levels in order to allow the detailed examination of the connectivity aspects (levels and directionality of information flow) over frequency and time with notable resolution, while dealing with volume conduction and EEG reference assessment. The high temporal and frequency resolution of the methodology will allow the association between the microelements and the dynamically forming networks that characterize them, and may consequently reveal aspects of the EEG microstructure. The proposed methodology is initially tested on artificially generated signals for proof of concept and subsequently applied to real EEG recordings via a custom-built MATLAB-based tool developed for such studies. Preliminary results from 843 fast sleep spindles recorded in whole-night sleep of 5 healthy volunteers indicate a prevailing pattern of interactions between centroparietal and frontal regions. We hereby demonstrate a first, to our knowledge, attempt to estimate the scalp EEG connectivity that characterizes fast sleep spindles via an “EEG-element connectivity” methodology we propose. The application of the latter, via a computational tool we developed, suggests it is able to investigate the connectivity patterns related to the

  6. Custom-designed orthopedic implants evaluated using finite element analysis of patient-specific computed tomography data: femoral-component case study

    PubMed Central

    Harrysson, Ola LA; Hosni, Yasser A; Nayfeh, Jamal F

    2007-01-01

    Background Conventional knee and hip implant systems have been in use for many years with good success. However, the custom design of implant components based on patient-specific anatomy has been attempted to overcome existing shortcomings of current designs. The longevity of cementless implant components is highly dependent on the initial fit between the bone surface and the implant. The bone-implant interface design has historically been limited by the surgical tools and cutting guides available; and the cost of fabricating custom-designed implant components has been prohibitive. Methods This paper describes an approach where the custom design is based on a Computed Tomography scan of the patient's joint. The proposed design will customize both the articulating surface and the bone-implant interface to address the most common problems found with conventional knee-implant components. Finite Element Analysis is used to evaluate and compare the proposed design of a custom femoral component with a conventional design. Results The proposed design shows a more even stress distribution on the bone-implant interface surface, which will reduce the uneven bone remodeling that can lead to premature loosening. Conclusion The proposed custom femoral component design has the following advantages compared with a conventional femoral component. (i) Since the articulating surface closely mimics the shape of the distal femur, there is no need for resurfacing of the patella or gait change. (ii) Owing to the resulting stress distribution, bone remodeling is even and the risk of premature loosening might be reduced. (iii) Because the bone-implant interface can accommodate anatomical abnormalities at the distal femur, the need for surgical interventions and fitting of filler components is reduced. (iv) Given that the bone-implant interface is customized, about 40% less bone must be removed. The primary disadvantages are the time and cost required for the design and the possible need

  7. Optimal mapping of irregular finite element domains to parallel processors

    NASA Technical Reports Server (NTRS)

    Flower, J.; Otto, S.; Salama, M.

    1987-01-01

    Mapping the solution domain of n finite elements into N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
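
    A minimal sketch of such an annealing loop follows, under an assumed cost model (cut edges between subdomains plus a load-imbalance penalty); the paper's actual cost function and cooling schedule are not reproduced here.

    ```python
    # Sketch of simulated-annealing mapping of elements to processors. The
    # cost (cut edges + imbalance penalty) is an illustrative stand-in for a
    # real communication/computation model.
    import math, random

    def anneal_mapping(n_elems, edges, n_procs, alpha=1.0, steps=20000, t0=2.0):
        """edges: element-adjacency pairs (i, j). Returns element->processor map."""
        assign = [random.randrange(n_procs) for _ in range(n_elems)]

        def cost(a):
            cut = sum(a[i] != a[j] for i, j in edges)        # communication cost
            loads = [a.count(p) for p in range(n_procs)]
            return cut + alpha * (max(loads) - min(loads))   # + imbalance penalty

        current = cost(assign)
        for step in range(steps):
            temp = t0 * (1.0 - step / steps) + 1e-12         # linear cooling
            e = random.randrange(n_elems)
            old = assign[e]
            assign[e] = random.randrange(n_procs)            # propose a move
            new = cost(assign)
            # Metropolis rule: always accept downhill, uphill with prob e^(-d/T).
            if new > current and random.random() >= math.exp((current - new) / temp):
                assign[e] = old                              # reject the move
            else:
                current = new                                # accept the move
        return assign

    # Toy 2x3 element grid mapped onto 2 processors.
    adj = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
    print(anneal_mapping(6, adj, 2))
    ```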

  8. Aspects, Wrappers and Events

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.

    2003-01-01

    This viewgraph presentation provides information on Object Infrastructure Framework (OIF), an Aspect-Oriented Programming (AOP) system. The presentation begins with an introduction to the difficulties and requirements of distributed computing, including functional and non-functional requirements (ilities). The architecture of Distributed Object Technology includes stubs, proxies for implementation objects, and skeletons, proxies for client applications. The key OIF ideas (injecting behavior, annotated communications, thread contexts, and pragma) are discussed. OIF is an AOP mechanism; AOP is centered on: 1) Separate expression of crosscutting concerns; 2) Mechanisms to weave the separate expressions into a unified system. AOP is software engineering technology for separately expressing systematic properties while nevertheless producing running systems that embody these properties.
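
    The central AOP idea, expressing a crosscutting concern once and weaving it into existing code, can be illustrated compactly outside Java. The sketch below is a Python analogy using decorator-based wrapping; it is not OIF's distributed-object machinery, nor AspectJ itself.

    ```python
    # Python analogy of AOP "weaving" (not OIF, not AspectJ): a crosscutting
    # concern (call tracing) is written once as a wrapper and injected into
    # existing methods without editing their bodies.
    import functools, time

    def traced(fn):
        """The 'aspect': time and log a call, then delegate to the original."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            result = fn(*args, **kwargs)
            print(f"{fn.__qualname__} took {time.perf_counter() - t0:.6f} s")
            return result
        return wrapper

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

    # "Weave" the aspect into the class after the fact.
    Account.deposit = traced(Account.deposit)
    Account(100).deposit(50)
    ```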

  9. Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Thomas, James

    2008-01-01

    Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
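
    The baseline reconstruction studied in such schemes fits a linear variation to neighboring values in the least-squares sense. A minimal unweighted sketch follows; the weighted and approximate-mapped variants discussed in the study are not reproduced.

    ```python
    # Minimal unweighted least-squares gradient reconstruction at a cell:
    # fit grad(q) so that q_nb ~ q_c + grad(q) . (x_nb - x_c) over neighbors.
    # On high-aspect-ratio grids the disparate row scales of A are what
    # degrade accuracy, which is the issue the study quantifies.
    import numpy as np

    def ls_gradient(xc, qc, x_nbrs, q_nbrs):
        A = np.asarray(x_nbrs) - np.asarray(xc)    # displacement to each neighbor
        b = np.asarray(q_nbrs) - qc                # value differences
        grad, *_ = np.linalg.lstsq(A, b, rcond=None)
        return grad

    # Aspect-ratio-100 stencil; the linear field q = 2x + 3y is recovered exactly.
    nbrs = [(0.01, 0.0), (-0.01, 0.0), (0.0, 1.0), (0.0, -1.0)]
    q = [2 * x + 3 * y for x, y in nbrs]
    print(ls_gradient((0.0, 0.0), 0.0, nbrs, q))   # ~ [2. 3.]
    ```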

  10. On the influence of the surface and body tides on the motion of a satellite. [earth geophysical aspects of orbit perturbations

    NASA Technical Reports Server (NTRS)

    Musen, P.

    1973-01-01

    Some geophysical aspects of the tidal perturbations in the motion of artificial satellites are investigated and a system of formulas is developed that is convenient for computation of the tidal effects in the elements using a step-by-step numerical integration.

  11. Aspect-Oriented Subprogram Synthesizes UML Sequence Diagrams

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2006-01-01

    The Rational Sequence computer program described elsewhere includes a subprogram that utilizes the capability for aspect-oriented programming when that capability is present. This subprogram is denoted the Rational Sequence (AspectJ) component because it uses AspectJ, which is an extension of the Java programming language that introduces aspect-oriented programming techniques into the language.

  12. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.

  13. Dedicated finite elements for electrode thin films on quartz resonators.

    PubMed

    Srivastava, Sonal A; Yong, Yook-Kong; Tanaka, Masako; Imai, Tsutomu

    2008-08-01

    The accuracy of the finite element analysis for thickness shear quartz resonators is a function of the mesh resolution; the finer the mesh resolution, the more accurate the finite element solution. A certain minimum number of elements is required in each direction for the solution to converge. This places a high demand on memory for computation, and often the available memory is insufficient. Typically, the thickness of the electrode films is very small compared with the thickness of the resonator itself; as a result, electrode elements have very poor aspect ratios, and this is detrimental to the accuracy of the result. In this paper, we propose special methods to model the electrodes at the crystal interface of an AT cut crystal. This reduces the overall problem size and eliminates electrode elements having poor aspect ratios. First, experimental data are presented to demonstrate the effects of electrode film boundary conditions on the frequency-temperature curves of an AT cut plate. Finite element analysis is performed on a mesh representing the resonator, and the results are compared for testing the accuracy of the analysis itself and thus validating the results of analysis. Approximations such as lumping and Guyan reduction are then used to model the electrode thin films at the electrode interface, and their results are studied. In addition, a new approximation called merging is proposed to model electrodes at the electrode interface. PMID:18986913
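
    Guyan reduction, one of the approximations named above, is static condensation: slave degrees of freedom are eliminated by assuming they follow the retained master set statically. A generic sketch with an illustrative stiffness matrix and partition (a mass matrix would be reduced with the congruence transform built from the same T):

    ```python
    # Standard Guyan (static) condensation: with DOFs split into masters m
    # (retained) and slaves s (condensed),
    #   K_red = K_mm - K_ms K_ss^{-1} K_sm.
    import numpy as np

    def guyan(K, master_idx):
        n = K.shape[0]
        m = np.asarray(master_idx)
        s = np.setdiff1d(np.arange(n), m)
        Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
        Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]
        T = -np.linalg.solve(Kss, Ksm)   # slave DOFs follow masters statically
        return Kmm + Kms @ T

    # 3-spring chain; condensing the middle DOF recovers the series stiffness.
    K = np.array([[ 2., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  2.]])
    print(guyan(K, [0, 2]))              # [[1.5 -0.5] [-0.5 1.5]]
    ```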

  14. Computer Conferences: Success or Failure?

    ERIC Educational Resources Information Center

    Phillips, Amy Friedman

    This examination of the aspects of computers and computer conferencing that can lead to their successful design and utilization focuses on task-related functions and emotional interactions in human communication and human-computer interactions. Such aspects of computer conferences as procedures, problems, advantages, and suggestions for future…

  15. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  16. It's elemental

    NASA Astrophysics Data System (ADS)

    The Periodic Table of the elements will now have to be updated. An international team of researchers has added element 110 to the Earth's armory of elements. Though short-lived, surviving only on the order of microseconds, element 110 now ranks as the heaviest known element on the planet. Scientists at the Heavy Ion Research Center in Darmstadt, Germany, made the 110-proton element by colliding a lead isotope with nickel atoms. The element, which is yet to be named, has an atomic mass of 269.

  17. Cohesive Elements for Shells

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; Turon, Albert

    2007-01-01

    A cohesive element for shell analysis is presented. The element can be used to simulate the initiation and growth of delaminations between stacked, non-coincident layers of shell elements. The procedure to construct the element accounts for the thickness offset by applying the kinematic relations of shell deformation to transform the stiffness and internal force of a zero-thickness cohesive element such that interfacial continuity between the layers is enforced. The procedure is demonstrated by simulating the response and failure of the Mixed Mode Bending test and a skin-stiffener debond specimen. In addition, it is shown that stacks of shell elements can be used to create effective models to predict the inplane and delamination failure modes of thick components. The results indicate that simple shell models can retain many of the necessary predictive attributes of much more complex 3D models while providing the computational efficiency that is necessary for design.

  18. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 2, User's manual

    SciTech Connect

    Gartling, D.K.

    1996-05-01

    User instructions are given for the finite element electromagnetics program TORO II. The theoretical background and numerical methods used in the program are documented in SAND95-2472. The present document also describes a number of example problems that have been analyzed with the code and provides sample input files for typical simulations. 20 refs., 34 figs., 3 tabs.

  19. Lubrication of rolling element bearings

    NASA Technical Reports Server (NTRS)

    Parker, R. J.

    1980-01-01

    This paper is a broad survey of the lubrication of rolling-element bearings. Emphasis is on the critical design aspects related to speed, temperature, and ambient pressure environment. Types of lubrication including grease, jets, mist, wick, and through-the-race are discussed. The paper covers the historical development, present state of technology, and the future problems of rolling-element bearing lubrication.

  20. Quantum Computing

    NASA Astrophysics Data System (ADS)

    Steffen, Matthias

    2013-03-01

    Quantum mechanics plays a crucial role in many day-to-day products, and has been successfully used to explain a wide variety of observations in Physics. While some quantum effects such as tunneling limit the degree to which modern CMOS devices can be scaled to ever-smaller dimensions, others may potentially be exploited to build an entirely new computing architecture: the quantum computer. In this talk I will review several basic concepts of a quantum computer. Why quantum computing, and how do we do it? What is the status of several (but not all) approaches towards building a quantum computer, including IBM's approach using superconducting qubits? And what will it take to build a functional machine? The promise is that a quantum computer could solve certain interesting computational problems, such as factoring, using exponentially fewer computational steps than classical systems. Although the most sophisticated modern quantum computing experiments to date do not outperform simple classical computations, it is increasingly becoming clear that small-scale demonstrations with as many as 100 qubits are beginning to be within reach over the next several years. Such a demonstration would undoubtedly be a thrilling feat, and usher in a new era of controlled tests of quantum mechanics and of quantum computing itself. At the minimum, future demonstrations will shed much light on what lies ahead.

  1. Legal aspects of satellite teleconferencing

    NASA Technical Reports Server (NTRS)

    Smith, D. D.

    1971-01-01

    The application of satellite communications for teleconferencing purposes is discussed. The legal framework within which such a system or series of systems could be developed is considered. The analysis is based on: (1) satellite teleconferencing regulation, (2) the options available for such a system, (3) regulatory alternatives, and (4) ownership and management aspects. The system is designed to provide a capability for professional education, remote medical diagnosis, business conferences, and computer techniques.

  2. Proceedings of transuranium elements

    SciTech Connect

    Not Available

    1992-01-01

    The identification of the first synthetic elements was established by chemical evidence. Conclusive proof of the synthesis of the first artificial element, technetium, was published in 1937 by Perrier and Segre. An essential aspect of their achievement was the prediction of the chemical properties of element 43, which had been missing from the periodic table and which was expected to have properties similar to those of manganese and rhenium. The discovery of other artificial elements, astatine and francium, was facilitated in 1939-1940 by the prediction of their chemical properties. A little more than 50 years ago, in the spring of 1940, Edwin McMillan and Philip Abelson synthesized element 93, neptunium, and confirmed its uniqueness by chemical means. On August 30, 1940, Glenn Seaborg, Arthur Wahl, and the late Joseph Kennedy began their neutron irradiations of uranium nitrate hexahydrate. A few months later they synthesized element 94, later named plutonium, by observing the alpha particles emitted from uranium oxide targets that had been bombarded with deuterons. Shortly thereafter they proved that it was the second transuranium element by establishing its unique oxidation-reduction behavior. The symposium honored the scientists and engineers whose vision and dedication led to the discovery of the transuranium elements and to the understanding of the influence of 5f electrons on their electronic structure and bonding. This volume represents a record of papers presented at the symposium.

  3. Application of a finite element method for computing grazing incidence wave structure in an impedance tube - Comparison with experiment. [for duct liner aeroacoustic design

    NASA Technical Reports Server (NTRS)

    Lester, H. C.; Parrott, T. L.

    1979-01-01

    The acoustic performance of a liner specimen, in a grazing incidence impedance tube, is analyzed using a finite element method. The liner specimen was designed to be a locally reacting, two-degree-of-freedom type with the resistance and reactance provided by perforated facesheets and compartmented cavities. Measured and calculated wave structures are compared for both normal and grazing incidence from 0.3 to 1.2 kHz. A finite element algorithm was incorporated into an optimization loop in order to predict liner grazing incidence impedance from measured SWR and null position data. Results suggest that extended reaction effects may have been responsible for differences between normal and grazing incidence impedance estimates.

  4. Thoracic response targets for a computational model: a hierarchical approach to assess the biofidelity of a 50th-percentile occupant male finite element model.

    PubMed

    Poulard, David; Kent, Richard W; Kindig, Matthew; Li, Zuoping; Subit, Damien

    2015-05-01

    Current finite element human thoracic models are typically evaluated against a limited set of loading conditions; this is believed to limit their capability to predict accurate responses. In this study, a 50th-percentile male finite element model (GHBMC v4.1) was assessed under various loading environments (antero-posterior rib bending, point loading of the denuded ribcage, omnidirectional pendulum impact and table top) through a correlation metric tool (CORA) based on linearly independent signals. The load cases were simulated with the GHBMC model and response corridors were developed from published experimental data. The model was found to be in close agreement with the experimental data both qualitatively and quantitatively (CORA ratings above 0.75) and the response of the thorax was overall deemed biofidelic. This study also provides relevant corridors and an objective rating framework that can be used for future evaluation of thoracic models. PMID:25681717
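
    As a rough illustration of what a correlation rating does, the sketch below combines a shape sub-score (normalized cross-correlation) and a peak-magnitude sub-score into a single 0-to-1 value. It is deliberately simplified and is not the CORA implementation used in the study.

    ```python
    # Simplified, illustrative signal-comparison score in the spirit of a
    # CORA rating (NOT the actual CORA tool): blend a shape term and a
    # magnitude term into one 0..1 value.
    import numpy as np

    def simple_rating(test, sim, w_shape=0.5, w_mag=0.5):
        test, sim = np.asarray(test, float), np.asarray(sim, float)
        shape = 0.5 * (1.0 + np.corrcoef(test, sim)[0, 1])   # 1 = same shape
        peak_t, peak_s = np.abs(test).max(), np.abs(sim).max()
        mag = 1.0 - abs(peak_t - peak_s) / peak_t            # 1 = same peak
        return w_shape * shape + w_mag * max(mag, 0.0)

    t = np.linspace(0, 1, 200)
    experiment = np.sin(2 * np.pi * t)
    model = 0.9 * np.sin(2 * np.pi * t)
    print(simple_rating(experiment, model))   # near 1 => close agreement
    ```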

  5. JAC2D: A two-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

    SciTech Connect

    Biffle, J.H.; Blanford, M.L.

    1994-05-01

    JAC2D is a two-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equations. The method is implemented in a two-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. A four-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic/plastic material model. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.
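
    A generic nonlinear conjugate gradient iteration of the kind such solvers are built on is sketched below (a Polak-Ribiere variant with a crude backtracking line search on the residual norm); JAC2D's specific convergence accelerators are not reproduced.

    ```python
    # Generic nonlinear conjugate gradient sketch: drive the residual of the
    # quasi-static equations toward zero. Illustrative, not JAC2D's Fortran.
    import numpy as np

    def nonlinear_cg(grad, x, iters=50, tol=1e-10):
        g = grad(x)
        d = -g
        for _ in range(iters):
            alpha = 1.0                         # backtrack on the residual norm
            while np.linalg.norm(grad(x + alpha * d)) > np.linalg.norm(g) and alpha > 1e-12:
                alpha *= 0.5
            x = x + alpha * d
            g_new = grad(x)
            if np.linalg.norm(g_new) < tol:
                break
            # Polak-Ribiere(+) direction update.
            beta = max(g_new @ (g_new - g) / (g @ g), 0.0)
            d = -g_new + beta * d
            g = g_new
        return x

    # Residual of a linear test problem: r(u) = K u - f.
    K = np.array([[4.0, 1.0], [1.0, 3.0]])
    f = np.array([1.0, 2.0])
    print(nonlinear_cg(lambda u: K @ u - f, np.zeros(2)))   # ~ K^{-1} f
    ```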

  6. JAC3D -- A three-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

    SciTech Connect

    Biffle, J.H.

    1993-02-01

    JAC3D is a three-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equations. The method is implemented in a three-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. An eight-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic-plastic material model. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.

  7. Verification and benchmarking of MAGNUM-2D: a finite element computer code for flow and heat transfer in fractured porous media

    SciTech Connect

    Eyler, L.L.; Budden, M.J.

    1985-03-01

    The objective of this work is to assess prediction capabilities and features of the MAGNUM-2D computer code in relation to its intended use in the Basalt Waste Isolation Project (BWIP). This objective is accomplished through a code verification and benchmarking task. Results are documented which support correctness of prediction capabilities in areas of intended model application. 10 references, 43 figures, 11 tables.

  8. Cohesive Zone Model User Element

    Energy Science and Technology Software Center (ESTSC)

    2007-04-17

    Cohesive Zone Model User Element (CZM UEL) is an implementation of a Cohesive Zone Model as an element for use in finite element simulations. CZM UEL computes a nodal force vector and stiffness matrix from a vector of nodal displacements. It is designed for structural analysts using finite element software to predict crack initiation, crack propagation, and the effect of a crack on the rest of a structure.
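
    The contract of such a user element is simple: given nodal separations, return the internal force (traction) and tangent stiffness. A minimal one-dimensional bilinear traction-separation law illustrates this; the parameters are illustrative, and unloading/reloading is ignored.

    ```python
    # Minimal 1-D bilinear cohesive law of the kind a CZM user element
    # evaluates: opening displacement in, traction and tangent stiffness out.
    # Parameters (initial stiffness, critical openings) are illustrative only.
    def bilinear_cohesive(delta, k0=1000.0, delta0=0.01, delta_f=0.1):
        """k0: initial stiffness; delta0: opening at peak; delta_f: full failure."""
        if delta <= delta0:                       # elastic (undamaged) branch
            return k0 * delta, k0
        if delta >= delta_f:                      # fully debonded, no traction
            return 0.0, 0.0
        t_peak = k0 * delta0
        frac = (delta_f - delta) / (delta_f - delta0)
        traction = t_peak * frac                  # linear softening branch
        tangent = -t_peak / (delta_f - delta0)    # negative softening modulus
        return traction, tangent

    for d in (0.005, 0.05, 0.2):                  # before, during, after failure
        print(d, bilinear_cohesive(d))
    ```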

  9. Injector element characterization methodology

    NASA Technical Reports Server (NTRS)

    Cox, George B., Jr.

    1988-01-01

    Characterization of liquid rocket engine injector elements is an important part of the development process for rocket engine combustion devices. Modern nonintrusive instrumentation for flow velocity and spray droplet size measurement, and automated, computer-controlled test facilities allow rapid, low-cost evaluation of injector element performance and behavior. Application of these methods in rocket engine development, paralleling their use in gas turbine engine development, will reduce rocket engine development cost and risk. The Alternate Turbopump (ATP) Hot Gas Systems (HGS) preburner injector elements were characterized using such methods, and the methodology and some of the results obtained will be shown.

  10. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1993-01-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using the Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on the stochastic boundary element method (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that the initial defect is a critical parameter.
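
    The flavor of the perturbation approach can be conveyed with a mean-value first-order second-moment (FOSM) estimate of the reliability index: linearize the limit state about the mean and divide its mean by its standard deviation. The limit state below is illustrative, not the paper's Hu-Washizu-based formulation.

    ```python
    # Mean-value FOSM sketch of the perturbation idea: beta = mean(g)/std(g)
    # after linearizing the limit state g(X) about the mean of X.
    import numpy as np

    def fosm_beta(g, mu, sigma, h=1e-6):
        mu = np.asarray(mu, float)
        g0 = g(mu)
        grad = np.array([(g(mu + h * np.eye(len(mu))[i]) - g0) / h
                         for i in range(len(mu))])        # numerical partials
        var = np.sum((grad * np.asarray(sigma)) ** 2)     # independent variables
        return g0 / np.sqrt(var)

    # Illustrative limit state: critical crack size minus grown crack size,
    # with x = (a_crit, a_0, growth factor); all names are hypothetical.
    g = lambda x: x[0] - x[1] * (1.0 + x[2])
    beta = fosm_beta(g, mu=[10.0, 2.0, 1.5], sigma=[1.0, 0.4, 0.3])
    print(beta)    # larger beta => lower probability of failure
    ```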

  11. ALGEBRA: a computer program that algebraically manipulates finite element output data. [In extended FORTRAN for CDC 7600 or CYBER 76 only

    SciTech Connect

    Richgels, M A; Biffle, J H

    1980-09-01

    ALGEBRA is a program that allows the user to process output data from finite-element analysis codes before they are sent to plotting routines. These data take the form of variable values (stress, strain, and velocity components, etc.) on a tape that is both the output tape from the analysis code and the input tape to ALGEBRA. The ALGEBRA code evaluates functions of these data and writes the function values on an output tape that can be used as input to plotting routines. Convenient input format and error detection capabilities aid the user in providing ALGEBRA with the functions to be evaluated. 1 figure.

  12. NON-CONFORMING FINITE ELEMENTS; MESH GENERATION, ADAPTIVITY AND RELATED ALGEBRAIC MULTIGRID AND DOMAIN DECOMPOSITION METHODS IN MASSIVELY PARALLEL COMPUTING ENVIRONMENT

    SciTech Connect

    Lazarov, R; Pasciak, J; Jones, J

    2002-02-01

    Construction, analysis and numerical testing of efficient solution techniques for solving elliptic PDEs that allow for parallel implementation have been the focus of the research. A number of discretization and solution methods for solving second order elliptic problems that include mortar and penalty approximations and domain decomposition methods for finite elements and finite volumes have been investigated and analyzed. Techniques for parallel domain decomposition algorithms in the framework of PETSc and HYPRE have been studied and tested. Hierarchical parallel grid refinement and adaptive solution methods have been implemented and tested on various model problems. A parallel code implementing the mortar method with algebraically constructed multiplier spaces was developed.

  13. From Finite Element Meshes to Clouds of Points: A Review of Methods for Generation of Computational Biomechanics Models for Patient-Specific Applications.

    PubMed

    Wittek, Adam; Grosland, Nicole M; Joldes, Grand Roman; Magnotta, Vincent; Miller, Karol

    2016-01-01

    It has been envisaged that advances in computing and engineering technologies could extend surgeons' ability to plan and carry out surgical interventions more accurately and with less trauma. The progress in this area depends crucially on the ability to create robustly and rapidly patient-specific biomechanical models. We focus on methods for generation of patient-specific computational grids used for solving partial differential equations governing the mechanics of the body organs. We review state-of-the-art in this area and provide suggestions for future research. To provide a complete picture of the field of patient-specific model generation, we also discuss methods for identifying and assigning patient-specific material properties of tissues and boundary conditions. PMID:26424475

  14. Tracking and computing

    SciTech Connect

    Niederer, J.

    1983-01-01

    This note outlines several ways in which large scale simulation computing and programming support may be provided to the SSC design community. One aspect of the problem is getting supercomputer power without the high cost and long lead times of large scale institutional computing. Another aspect is the blending of modern programming practices with more conventional accelerator design programs in ways that do not also swamp designers with the details of complicated computer technology.

  15. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  16. Mobile genetic elements: in silico, in vitro, in vivo.

    PubMed

    Arkhipova, Irina R; Rice, Phoebe A

    2016-03-01

    Mobile genetic elements (MGEs), also called transposable elements (TEs), represent universal components of most genomes and are intimately involved in nearly all aspects of genome organization, function and evolution. However, there is currently a gap between the fast pace of TE discovery in silico, driven by the exponential growth of comparative genomic studies, and a limited number of experimental models amenable to more traditional in vitro and in vivo studies of structural, mechanistic and regulatory properties of diverse MGEs. Experimental and computational scientists came together to bridge this gap at a recent conference, 'Mobile Genetic Elements: in silico, in vitro, in vivo', held at the Marine Biological Laboratory (MBL) in Woods Hole, MA, USA. PMID:26822117

  17. Product Aspect Clustering by Incorporating Background Knowledge for Opinion Mining

    PubMed Central

    Chen, Yiheng; Zhao, Yanyan; Qin, Bing; Liu, Ting

    2016-01-01

    Product aspect recognition is a key task in fine-grained opinion mining. Current methods primarily focus on the extraction of aspects from the product reviews. However, it is also important to cluster synonymous extracted aspects into the same category. In this paper, we focus on the problem of product aspect clustering. The primary challenge is to properly cluster and generalize aspects that have similar meanings but different representations. To address this problem, we learn two types of background knowledge for each extracted aspect based on two types of effective aspect relations: relevant aspect relations and irrelevant aspect relations, which describe two different types of relationships between two aspects. Based on these two types of relationships, we can assign many relevant and irrelevant aspects into two different sets as the background knowledge to describe each product aspect. To obtain abundant background knowledge for each product aspect, we can enrich the available information with background knowledge from the Web. Then, we design a hierarchical clustering algorithm to cluster these aspects into different groups, in which aspect similarity is computed using the relevant and irrelevant aspect sets for each product aspect. Experimental results obtained in both camera and mobile phone domains demonstrate that the proposed product aspect clustering method based on two types of background knowledge performs better than the baseline approach without the use of background knowledge. Moreover, the experimental results also indicate that expanding the available background knowledge using the Web is feasible. PMID:27561001
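
    A simplified sketch of the core idea follows: score the similarity of two aspects from their relevant and irrelevant background sets, then merge clusters agglomeratively while the best score exceeds a threshold. The Jaccard-based score and the threshold are illustrative assumptions, not the paper's exact measure.

    ```python
    # Simplified sketch: aspect similarity from "relevant"/"irrelevant"
    # background sets, followed by greedy agglomerative clustering.
    def similarity(a, b, relevant, irrelevant):
        def jaccard(s, t):
            return len(s & t) / len(s | t) if s | t else 0.0
        # Reward shared relevant context, penalize overlap with b's irrelevant set.
        return jaccard(relevant[a], relevant[b]) - jaccard(relevant[a], irrelevant[b])

    def cluster(aspects, relevant, irrelevant, threshold=0.3):
        clusters = [[a] for a in aspects]
        while True:
            best, pair = threshold, None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    # Single-link: best similarity between any two members.
                    s = max(similarity(a, b, relevant, irrelevant)
                            for a in clusters[i] for b in clusters[j])
                    if s > best:
                        best, pair = s, (i, j)
            if pair is None:
                return clusters
            i, j = pair
            clusters[i] += clusters.pop(j)      # merge the closest pair

    relevant = {"screen": {"display", "lcd"}, "display": {"screen", "lcd"},
                "battery": {"power"}}
    irrelevant = {"screen": {"power"}, "display": {"power"}, "battery": {"lcd"}}
    print(cluster(["screen", "display", "battery"], relevant, irrelevant))
    ```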

  18. Trace element emissions

    SciTech Connect

    Benson, S.A.; Erickson, T.A.; Steadman, E.N.; Zygarlicke, C.J.; Hauserman, W.B.; Hassett, D.J.

    1994-10-01

    The Energy & Environmental Research Center (EERC) is carrying out an investigation that will provide methods to predict the fate of selected trace elements in integrated gasification combined cycle (IGCC) and integrated gasification fuel cell (IGFC) systems to aid in the development of methods to control the emission of trace elements determined to be air toxics. The goal of this project is to identify the effects of critical chemical and physical transformations associated with trace element behavior in IGCC and IGFC systems. The trace elements included in this project are arsenic, chromium, cadmium, mercury, nickel, selenium, and lead. The research seeks to identify and fill, experimentally and/or theoretically, data gaps that currently exist on the fate and composition of trace elements. The specific objectives are to (1) review the existing literature to identify the type and quantity of trace elements from coal gasification systems, (2) perform laboratory-scale experimentation and computer modeling to enable prediction of trace element emissions, and (3) identify methods to control trace element emissions.

  19. The Coupled Spectral Element/Normal Mode Method: Application to the Testing of Several Approximations Based on Normal Mode Theory for the Computation of Seismograms in a Realistic 3D Earth.

    NASA Astrophysics Data System (ADS)

    Capdeville, Y.; Gung, Y.; Romanowicz, B.

    2002-12-01

    The spectral element method (SEM) has recently been adapted successfully for global spherical earth wave propagation applications. Its advantage is that it provides a way to compute exact seismograms in a 3D earth, without restrictions on the size or wavelength of lateral heterogeneity at any depth, and can handle diffraction and other interactions with major structural boundaries. Its disadvantage is that it is computationally heavy. In order to partly address this drawback, a coupled SEM/normal mode method was developed (Capdeville et al., 2000). This enables us to more efficiently compute body-wave seismograms to realistically short periods (10 s or less). In particular, the coupled SEM/normal mode method is a powerful tool to test the validity of some analytical approximations that are currently used in global waveform tomography, and that are considerably faster computationally. Here, we focus on several approximations based on normal mode perturbation theory: the classical "path-average approximation" (PAVA) introduced by Woodhouse and Dziewonski (1984) and well suited for fundamental mode surface waves (1D sensitivity kernels); the non-linear asymptotic coupling theory (NACT), which introduces coupling between mode branches and 2D kernels in the vertical plane containing the source and the receiver (Li and Tanimoto, 1993; Li and Romanowicz, 1995); an extension of NACT which includes out-of-plane focusing terms computed asymptotically (e.g. Romanowicz, 1987) and introduces 3D kernels; we also consider first-order perturbation theory without asymptotic approximations, such as developed for example by Dahlen et al. (2000). We present the results of comparisons of realistic seismograms for different models of heterogeneity, varying the strength and sharpness of the heterogeneity and its location in depth in the mantle. We discuss the consequences of different levels of approximations on our ability to resolve 3D heterogeneity in the earth's mantle.

  1. Finite-element-based design tool for smart composite structures

    NASA Astrophysics Data System (ADS)

    Koko, Tamunoiyala S.; Orisamolu, Irewole R.; Smith, Malcolm J.; Akpan, Unyime O.

    1997-06-01

    This paper presents an integrated finite element-control methodology for the design/analysis of smart composite structures. The method forms part of an effort to develop an integrated computational tool that includes finite element modeling; control algorithms; and deterministic, fuzzy and probabilistic optimization and integrity assessment of the structures and control systems. The finite element analysis is based on a 20-node thermopiezoelectric composite element for modeling the composite structure with surface-bonded piezoelectric sensors and actuators; and control is based on the linear quadratic regulator and the independent modal space control methods. The method has been implemented in a computer code called SMARTCOM. Several example problems have been used to verify various aspects of the formulations, and the analysis results from the present study compare well against other numerical or experimental results. Being based on the finite element method, the present formulation can be conveniently used for the analysis and design of smart composite structures with complex geometrical configurations and loadings.
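
    For the control side, the linear quadratic regulator gain follows from the continuous algebraic Riccati equation. The sketch below computes it for a single structural mode; the plant, weights, and actuator model are illustrative assumptions, not SMARTCOM's 20-node element formulation.

    ```python
    # LQR sketch for vibration control of one structural mode: the gain comes
    # from the continuous algebraic Riccati equation. Plant and weights are
    # illustrative placeholders.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # State x = [modal displacement, modal velocity] for a 10 Hz, 2%-damped mode.
    omega, zeta = 2.0 * np.pi * 10.0, 0.02
    A = np.array([[0.0, 1.0], [-omega**2, -2.0 * zeta * omega]])
    B = np.array([[0.0], [1.0]])            # (assumed) piezo actuator input
    Q = np.diag([omega**2, 1.0])            # weight on modal energy
    R = np.array([[1e-3]])                  # weight on actuation effort

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)         # optimal feedback u = -K x
    print(K)
    print(np.linalg.eigvals(A - B @ K))     # closed-loop poles: better damped
    ```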

  2. Progressive Damage Analysis of Laminated Composite (PDALC) (A Computational Model Implemented in the NASA COMET Finite Element Code). 2.0

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.; Lo, David C.; Allen, David H.

    1998-01-01

    A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged damage variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete listing of all testbed processors and inputs that are required for this analysis are included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occurs during the loading history. Residual strength predictions made with this information compared favorably with experimental measurements.

  3. How to determine spiral bevel gear tooth geometry for finite element analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Litvin, Faydor L.

    1991-01-01

    An analytical method was developed to determine gear tooth surface coordinates of face milled spiral bevel gears. The method combines the basic gear design parameters with the kinematical aspects for spiral bevel gear manufacturing. A computer program was developed to calculate the surface coordinates. From this data a 3-D model for finite element analysis can be determined. Development of the modeling method and an example case are presented.

  4. In silico selection of an aptamer to estrogen receptor alpha using computational docking employing estrogen response elements as aptamer-alike molecules

    PubMed Central

    Ahirwar, Rajesh; Nahar, Smita; Aggarwal, Shikha; Ramachandran, Srinivasan; Maiti, Souvik; Nahar, Pradip

    2016-01-01

    Aptamers, the chemical-antibody substitute to conventional antibodies, are primarily discovered through SELEX technology involving multi-round selections and enrichment. Circumventing conventional methodology, here we report an in silico selection of aptamers to estrogen receptor alpha (ERα) using RNA analogs of human estrogen response elements (EREs). The inverted repeat nature of ERE and the ability to form stable hairpins were used as criteria to obtain aptamer-alike sequences. Near-native RNA analogs of selected single stranded EREs were modelled and their likelihood to emerge as ERα aptamer was examined using AutoDock Vina, HADDOCK and PatchDock docking. These in silico predictions were validated by measuring the thermodynamic parameters of ERα -RNA interactions using isothermal titration calorimetry. Based on the in silico and in vitro results, we selected a candidate RNA (ERaptR4; 5′-GGGGUCAAGGUGACCCC-3′) having a binding constant (Ka) of 1.02 ± 0.1 × 108 M−1 as an ERα-aptamer. Target-specificity of the selected ERaptR4 aptamer was confirmed through cytochemistry and solid-phase immunoassays. Furthermore, stability analyses identified ERaptR4 resistant to serum and RNase A degradation in presence of ERα. Taken together, an efficient ERα-RNA aptamer is identified using a non-SELEX procedure of aptamer selection. The high-affinity and specificity can be utilized in detection of ERα in breast cancer and related diseases. PMID:26899418

  5. Elemental health

    SciTech Connect

    Tonneson, L.C.

    1997-01-01

    Trace elements used in nutritional supplements and vitamins are discussed in the article. Relevant studies are briefly cited regarding the health effects of selenium, chromium, germanium, silicon, zinc, magnesium, silver, manganese, ruthenium, lithium, and vanadium. The toxicity and food sources are listed for some of the elements. A brief summary is also provided of the nutritional supplements market.

  6. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  7. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  8. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  9. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  10. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  11. A Vectorial Model to Compute Terrain Parameters, Local and Remote Sheltering, Scattering and Albedo using TIN Domains for Hydrologic Modeling.

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Ogden, F. L.; Steinke, R. C.; Alvarez, L. V.

    2015-12-01

    Triangulated Irregular Networks (TINs) are increasingly popular for terrain representation in high performance surface and hydrologic modeling, owing to their ability to capture significant changes in surface forms such as topographical summits, slope breaks, ridges, valley floors, pits and cols. This work presents a methodology for estimating slope, aspect and the components of the incoming solar radiation using a vectorial approach within a topocentric coordinate system, establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and DOY. Thus, a dot product determines the radiation flux at each TIN element. Remote shading is computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector. Sky view fractions are computed by a simplified scanning algorithm in prescribed directions and are useful to determine diffuse radiation. Finally, remote radiation scattering is computed from the sky view factor complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. This methodology represents an improvement on the current algorithms to compute terrain and radiation parameters on TINs in an efficient manner. All terrain features (e.g. slope, aspect, sky view factors and remote sheltering) can be pre-computed and stored for easy access for a subsequent ground surface or hydrologic simulation.
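
    The geometric core of the method fits in a few lines: slope and aspect come from the facet normal, and direct-beam flux from the dot product of that normal with a unit sun vector. The sketch below assumes east-north-up coordinates and an illustrative flux constant.

    ```python
    # Sketch of the vectorial terrain calculation on one TIN facet: slope and
    # aspect from the facet normal, direct-beam insolation from a dot product
    # with the sun unit vector. East-north-up coordinates assumed.
    import numpy as np

    def facet_geometry(p0, p1, p2):
        n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
        n = n / np.linalg.norm(n)
        if n[2] < 0:                                  # make the normal point up
            n = -n
        slope = np.degrees(np.arccos(n[2]))           # angle from horizontal
        aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # clockwise from N
        return n, slope, aspect

    def direct_flux(n, sun_dir, s0=1000.0):
        """Direct-beam flux (W/m^2, s0 illustrative); self-shaded facets get 0."""
        return s0 * max(float(np.dot(n, sun_dir)), 0.0)

    tri = [(0, 0, 0), (10, 0, 0), (0, 10, 5)]         # a tilted facet
    sun = np.array([0.3, 0.3, 0.9]); sun /= np.linalg.norm(sun)
    n, slope, aspect = facet_geometry(*tri)
    print(round(slope, 1), round(aspect, 1), round(direct_flux(n, sun), 1))
    ```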

  12. Psychosomatic Aspects of Cancer: An Overview.

    ERIC Educational Resources Information Center

    Murray, John B.

    1980-01-01

    It is suggested in this literature review on the psychosomatic aspects of cancer that psychoanalytic interpretations which focused on intrapsychic elements have given way to considerations of rehabilitation and assistance with the complex emotional reactions of patients and their families to terminal illness and death. (Author/DB)

  13. Extreme Low Aspect Ratio Stellarators

    NASA Astrophysics Data System (ADS)

    Moroz, Paul

    1997-11-01

    The recently proposed Spherical Stellarator (SS) concept [1] includes devices with stellarator features and low aspect ratio, A <= 3.5, which is very unusual for stellarators (typical stellarators have A ≈ 7-10 or above). Strong bootstrap current and high-β equilibria are two distinguishing elements of the SS concept, leading to a compact, steady-state, and efficient fusion reactor. Different coil configurations advantageous for the SS have been identified and analyzed [1-6]. In this report, we will present results on novel stellarator configurations which are unusual even for the SS approach. These are the extreme-low-aspect-ratio stellarators (ELARS), with the aspect ratio A ≈ 1. We succeeded in finding ELARS configurations with extremely compact, modular, and simple design compatible with significant rotational transform (ι ≈ 0.1 - 0.15), large plasma volume, and good particle transport characteristics. [1] P.E. Moroz, Phys. Rev. Lett. 77, 651 (1996); [2] P.E. Moroz, Phys. Plasmas 3, 3055 (1996); [3] P.E. Moroz, D.B. Batchelor et al., Fusion Tech. 30, 1347 (1996); [4] P.E. Moroz, Stellarator News 48, 2 (1996); [5] P.E. Moroz, Plasma Phys. Reports 23, 502 (1997); [6] P.E. Moroz, Nucl. Fusion 37, No. 8 (1997). *Supported by DOE Grant No. DE-FG02-97ER54395.

  14. COBRA-IV PC: A personal computer version of COBRA-IV-I for thermal-hydraulic analysis of rod bundle nuclear fuel elements and cores

    SciTech Connect

    Webb, B.J.

    1988-01-01

    COBRA-IV PC is a modified version of COBRA-IV-I, adapted for use with most IBM PC and PC-compatible desktop computers. Like COBRA-IV-I, COBRA-IV PC uses the subchannel analysis approach to determine the enthalpy and flow distribution in rod bundles for both steady-state and transient conditions. The steady-state and transient solution schemes used in COBRA-IIIC are still available in COBRA-IV PC as the implicit solution scheme option. An explicit solution scheme is also available, allowing the calculation of severe transients involving flow reversals, recirculations, expulsions, and reentry flows, with a pressure or flow boundary condition specified. In addition, several modifications have been incorporated into COBRA-IV PC to allow the code to run on the PC. These include a reduction in the array dimensions, the removal of the dump and restart options, and the inclusion of several code modifications by Oregon State University, most notably, a critical heat flux correlation for boiling water reactor fuel and a new solution scheme for cross-flow distribution calculations. 7 refs., 8 figs., 1 tab.

  15. Computing and Digital Media: A Subject-Based Aspect Report by Education Scotland on Provision in Scotland's Colleges on Behalf of the Scottish Funding Council. Transforming Lives through Learning

    ERIC Educational Resources Information Center

    Education Scotland, 2014

    2014-01-01

    This report evaluates college programmes which deliver education and training in computer and digital media technology, rather than in computer usage. The report evaluates current practice and identifies important areas for further development amongst practitioners. It provides case studies of effective practice and sets out recommendations for…

  16. Elemental Education.

    ERIC Educational Resources Information Center

    Daniel, Esther Gnanamalar Sarojini; Saat, Rohaida Mohd.

    2001-01-01

    Introduces a learning module integrating three disciplines--physics, chemistry, and biology--and based on four elements: carbon, oxygen, hydrogen, and silicon. Includes atomic model and silicon-based life activities. (YDS)

  17. Superheavy Elements

    ERIC Educational Resources Information Center

    Tsang, Chin Fu

    1975-01-01

    Discusses the possibility of creating elements with an atomic number of around 114. Describes the underlying physics responsible for the limited extent of the periodic table and enumerates problems that must be overcome in creating a superheavy nucleus. (GS)

  18. A deflation based parallel algorithm for spectral element solution of the incompressible Navier-Stokes equations

    SciTech Connect

    Fischer, P.F.

    1996-12-31

    Efficient solution of the Navier-Stokes equations in complex domains is dependent upon the availability of fast solvers for sparse linear systems. For unsteady incompressible flows, the pressure operator is the leading contributor to stiffness, as the characteristic propagation speed is infinite. In the context of operator splitting formulations, it is the pressure solve which is the most computationally challenging, despite its elliptic origins. We seek to improve existing spectral element iterative methods for the pressure solve in order to overcome the slow convergence frequently observed in the presence of highly refined grids or high-aspect ratio elements.

  19. An efficient Mindlin finite strip plate element based on assumed strain distribution

    NASA Technical Reports Server (NTRS)

    Chulya, Abhisak; Thompson, Robert L.

    1988-01-01

    A simple two-node, linear finite strip plate bending element based on Mindlin-Reissner plate theory for the analysis of very thin to thick bridges, plates, and axisymmetric shells is presented. The transverse shear strains are assumed to have a constant distribution over the two-node linear strip. The important aspect is the choice of the points that relate the nodal displacements and rotations through the transverse shear strains, a choice that controls shear locking. The element stiffness matrix is explicitly formulated for efficient computation and ease in computer implementation. Numerical results showing the efficiency and predictive capability of the element for analyzing plates with different supports, loading conditions, and a wide range of thicknesses are given. The results show no sign of the shear locking phenomenon.

  20. Conversion of Osculating Orbital Elements to Mean Orbital Elements

    NASA Technical Reports Server (NTRS)

    Der, Gim J.; Danchick, Roy

    1996-01-01

    Orbit determination and ephemeris generation or prediction over relatively long elapsed times can be accomplished with mean elements. The simplest and most efficient method for orbit determination, which is also known as epoch point conversion, performs the conversion of osculating elements to mean elements by iterative procedures. Previous epoch point conversion methods are restricted to shorter elapsed times with linear convergence. The new method presented in this paper calculates an analytic initial guess of the unknown mean elements from a first-order theory of secular perturbations and computes a transition matrix with accurate numerical partials. It thereby eliminates the problem of an inaccurate initial guess and an identity transition matrix employed by previous methods. With a good initial guess of the unknown mean elements and an accurate transition matrix, the conversion of osculating elements to mean elements can be accomplished over long elapsed times with quadratic convergence.
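
    The iteration can be sketched as a Newton solve: seek mean elements whose propagated osculating elements match the observed ones, with a transition (Jacobian) matrix built from numerical partials. In the sketch below, `osc_from_mean` is a hypothetical placeholder for the analytic secular theory; the toy perturbation merely demonstrates the quadratic convergence.

    ```python
    # Sketch of the epoch-point-conversion iteration: find mean elements m
    # such that osc_from_mean(m) reproduces the observed osculating elements.
    # `osc_from_mean` stands in for an analytic mean-element propagator.
    import numpy as np

    def mean_from_osc(osc, osc_from_mean, m0, tol=1e-12, max_iter=20, h=1e-7):
        m = np.asarray(m0, float)
        for _ in range(max_iter):
            r = osc_from_mean(m) - osc                 # residual in element space
            if np.max(np.abs(r)) < tol:
                break
            J = np.column_stack([                      # transition matrix from
                (osc_from_mean(m + h * e) - osc_from_mean(m)) / h
                for e in np.eye(len(m))])              # numerical partials
            m = m - np.linalg.solve(J, r)              # Newton update
        return m

    # Toy "theory": osculating = mean + small nonlinear perturbation; the
    # observed osculating elements double as the analytic initial guess.
    toy = lambda m: m + 1e-3 * np.sin(m)
    osc = np.array([7000.0, 0.01, 0.9, 1.0, 2.0, 3.0])
    print(mean_from_osc(osc, toy, osc))
    ```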