Science.gov

Sample records for element computational aspects

  1. On current aspects of finite element computational fluid mechanics for turbulent flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1982-01-01

    A set of nonlinear partial differential equations suitable for the description of a class of turbulent three-dimensional flow fields in select geometries is identified. On the basis of the concept of enforcing a penalty constraint to ensure accurate accounting of ordering effects, a finite element numerical solution algorithm is established for the equation set, and the theoretical aspects of accuracy, convergence and stability are identified and quantified. Hypermatrix constructions are used to formulate the reduction of the computational aspects of the theory to practice. The robustness of the algorithm, and of its computer program embodiment, has been verified for pertinent flow configurations.

  2. Terminological aspects of data elements

    SciTech Connect

    Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.

    1991-01-01

    The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts, followed by the specification of other needed data element concept attributes. The attributes and values of a data element concept remain associated with it from its birth as a concept through its use as a generic data element that serves as a template for final application. Terminology is therefore centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database design, and consequently it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered, and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schemata showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.
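
    To make the four terminological components concrete, the following minimal sketch (ours, not from the paper; all field names and the example are hypothetical) encodes a data element concept whose attributes stay attached to it as it becomes a template for application:

        # Illustrative sketch only: a hypothetical data element concept carrying
        # the four terminological components named in the abstract.
        from dataclasses import dataclass, field

        @dataclass
        class DataElementConcept:
            name: str                           # standardized term for the concept
            definition: str                     # definition statement
            hierarchical_type: str              # broader concept in the hierarchy
            functional_dependencies: list = field(default_factory=list)
            conceptual_schema: str = ""         # schema showing conceptual structure

        # A generic data element derived from the concept acts as a template:
        birth_date = DataElementConcept(
            name="date of birth",
            definition="The calendar date on which a person was born.",
            hierarchical_type="date",
            functional_dependencies=["person identifier -> date of birth"],
            conceptual_schema="Person(id, date_of_birth, ...)",
        )
        print(birth_date.name, "is a kind of", birth_date.hierarchical_type)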

  3. Computational aspects of multibody dynamics

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1989-01-01

    Computational aspects are addressed which impact the requirements for developing a next-generation software system for flexible multibody dynamics simulation. These include: criteria for selecting candidate formulations, the pairing of formulations with appropriate solution procedures, the need for concurrent algorithms that exploit advances in computer hardware, and provisions for open-ended yet modular analysis modules.

  4. The technical aspects of computers.

    PubMed

    Richards, B

    1990-12-01

    This chapter is concerned with the technical aspects of computers. It is therefore concerned with how computers came about in the way they did, and who the people were who pioneered their development--what they were like in the early years, what they are like now, and what are likely to be the future developments. The emphasis is always on giving information to the readers so that they may know what questions to ask of the experts and, equally important, which experts to spend time with. In consequence of this last statement it becomes necessary to present a panorama showing the range of computers both size-wise and cost-wise; such a scenario will therefore cover the vista from large main-frames (which must inevitably be needed in District Health Authorities and District General Hospitals) to the desk-top personal computers which all clinicians of the future will find essential. Because readers will be experiencing the impact and, hopefully, the benefits of the computer at the lower end of the size and price scale, considerable space has been devoted to explaining the various items (disc drives, monitors, printers) that pervade the microcomputer scene. New terminology must be introduced to readers if they are to discuss their computer needs intelligently with the providers of such facilities. Just as an automobile is no use without oil, petrol, water and a competent user, so the computer hardware needs computer software and a competent user. The chapter therefore continues with considerable space devoted to software (operating systems, programming languages, utilities and expert systems) so that the user will have clear guidance as to which path to follow in order to become a competent user of the present and future technology. Because of the rapid advances in data storage, in networking and in computer programs, the clinicians of tomorrow will have vast sources of information at their disposal. This latter will include not only patient records, but also

  5. Finite element computational fluid mechanics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1983-01-01

    Finite element analysis as applied to the broad spectrum of computational fluid mechanics is examined. The finite element solution methodology is derived, developed, and applied directly to the differential equation systems governing classes of problems in fluid mechanics. The heat conduction equation is used to reveal the essence and elegance of finite element theory, including higher order accuracy and convergence. The algorithm is extended to the pervasive nonlinearity of the Navier-Stokes equations. A specific fluid mechanics problem class is analyzed with an even mix of theory and applications, including turbulence closure and the solution of turbulent flows.
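
    As a concrete illustration of the methodology summarized above, here is a minimal sketch (ours, not taken from the monograph; dense NumPy arrays for brevity) that assembles and solves the steady one-dimensional heat conduction problem -u'' = f with linear elements and homogeneous Dirichlet conditions:

        # Minimal 1D linear-element FEM for -u'' = f on (0,1), u(0) = u(1) = 0.
        # Illustrative sketch only; production codes would use sparse matrices.
        import numpy as np

        n = 20                              # number of elements
        x = np.linspace(0.0, 1.0, n + 1)    # node coordinates
        h = np.diff(x)                      # element lengths
        K = np.zeros((n + 1, n + 1))        # global stiffness matrix
        F = np.zeros(n + 1)                 # global load vector
        f = lambda s: 1.0                   # source term

        for e in range(n):                  # assemble element contributions
            ke = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])
            fe = 0.5 * h[e] * f(0.5 * (x[e] + x[e + 1])) * np.ones(2)
            K[e:e + 2, e:e + 2] += ke
            F[e:e + 2] += fe

        K[0, :], K[0, 0], F[0] = 0.0, 1.0, 0.0      # enforce u(0) = 0
        K[-1, :], K[-1, -1], F[-1] = 0.0, 1.0, 0.0  # enforce u(1) = 0
        u = np.linalg.solve(K, F)
        print(u.max())   # ~0.125, matching the exact solution x(1-x)/2 at x = 0.5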

  6. Element-topology-independent preconditioners for parallel finite element computations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alexander, Scott

    1992-01-01

    A family of preconditioners for the solution of finite element equations is presented; the preconditioners are element-topology independent and are thus applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.
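
    As a rough illustration of building a preconditioner from element arrays through the connectivity map alone, without global assembly, here is a hypothetical sketch of a Jacobi (diagonal) preconditioner; this is only a stand-in for the idea and is not the element-topology-independent construction of the paper:

        # Hypothetical sketch: a Jacobi preconditioner accumulated from element
        # matrices through the element-to-node connectivity, with no global
        # assembly. NOT the paper's preconditioner; connectivity-driven idea only.
        import numpy as np

        def jacobi_from_elements(n_dof, elem_dofs, elem_mats):
            d = np.zeros(n_dof)
            for dofs, ke in zip(elem_dofs, elem_mats):
                d[dofs] += np.diag(ke)          # scatter element diagonals
            return lambda r: r / d              # apply M^{-1} to a residual

        # Two 1D bar elements sharing node 1 (3 dofs total):
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]])
        apply_Minv = jacobi_from_elements(3, [[0, 1], [1, 2]], [ke, ke])
        print(apply_Minv(np.ones(3)))           # [1. , 0.5, 1. ]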

  7. Conceptual aspects of geometric quantum computation

    NASA Astrophysics Data System (ADS)

    Sjöqvist, Erik; Azimi Mousolou, Vahid; Canali, Carlo M.

    2016-07-01

    Geometric quantum computation is the idea that geometric phases can be used to implement quantum gates, i.e., the basic elements of the Boolean network that forms a quantum computer. Although originally thought to be limited to adiabatic evolution, controlled by slowly changing parameters, this form of quantum computation can as well be realized at high speed by using nonadiabatic schemes. Recent advances in quantum gate technology have allowed for experimental demonstrations of different types of geometric gates in adiabatic and nonadiabatic evolution. Here, we address some conceptual issues that arise in the realizations of geometric gates. We examine the appearance of dynamical phases in quantum evolution and point out that not all dynamical phases need to be compensated for in geometric quantum computation. We delineate the relation between Abelian and non-Abelian geometric gates and find an explicit physical example where the two types of gates coincide. We identify differences and similarities between adiabatic and nonadiabatic realizations of quantum computation based on non-Abelian geometric phases.
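
    A minimal numerical illustration of the geometric phase that underlies such gates (a textbook spin-1/2 example of ours, not taken from the paper): the discrete Berry phase of the aligned eigenstate of H = n(theta, phi) . sigma, accumulated as the field direction circles the z axis, approaches -pi(1 - cos theta), half the enclosed solid angle.

        # Discrete Berry phase for the aligned eigenstate of H = n(theta,phi).sigma
        # as phi runs once around the z axis; expected value: -pi*(1 - cos(theta)).
        import numpy as np

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        def aligned_state(theta, phi):
            H = (np.sin(theta) * np.cos(phi) * sx
                 + np.sin(theta) * np.sin(phi) * sy + np.cos(theta) * sz)
            return np.linalg.eigh(H)[1][:, 1]      # eigenvector for eigenvalue +1

        theta = np.pi / 3
        states = [aligned_state(theta, p) for p in np.linspace(0.0, 2 * np.pi, 400)]
        states.append(states[0])                   # close the loop (gauge invariant)
        prod = np.prod([np.vdot(a, b) for a, b in zip(states[:-1], states[1:])])
        print(-np.angle(prod), -np.pi * (1 - np.cos(theta)))   # both ~ -pi/2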

  8. Computer Security: The Human Element.

    ERIC Educational Resources Information Center

    Guynes, Carl S.; Vanacek, Michael T.

    1981-01-01

    The security and effectiveness of a computer system are dependent on the personnel involved. Improved personnel and organizational procedures can significantly reduce the potential for computer fraud. (Author/MLF)

  9. Mathematical aspects of finite element methods for incompressible viscous flows

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.

    1986-01-01

    Mathematical aspects of finite element methods for incompressible viscous flows are surveyed, concentrating on the steady primitive variable formulation. The discretization of a weak formulation of the Navier-Stokes equations is addressed; the stability condition is then considered, the satisfaction of which ensures the stability of the approximation. Specific choices of finite element spaces for the velocity and pressure are then discussed. Finally, the connection between different weak formulations and a variety of boundary conditions is explored.
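
    The stability condition referred to is the discrete inf-sup (Ladyzhenskaya-Babuska-Brezzi) condition; in the standard notation for the Stokes problem (reproduced here for reference, not quoted from the survey) it reads:

        % Discrete inf-sup (LBB) condition for velocity space V_h, pressure space Q_h:
        \inf_{0 \neq q_h \in Q_h} \; \sup_{0 \neq v_h \in V_h}
        \frac{\int_\Omega q_h \, \nabla \cdot v_h \, dx}
             {\| v_h \|_{H^1(\Omega)} \, \| q_h \|_{L^2(\Omega)}}
        \;\geq\; \beta > 0, \qquad \beta \text{ independent of the mesh size } h.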

  10. Dedicated breast computed tomography: Basic aspects.

    PubMed

    Sarno, Antonio; Mettivier, Giovanni; Russo, Paolo

    2015-06-01

    X-ray mammography of the compressed breast is well recognized as the "gold standard" for early detection of breast cancer, but its performance is not ideal. One limitation of screening mammography is tissue superposition, particularly for dense breasts. Since 2001, several research groups in the USA and in the European Union have developed computed tomography (CT) systems with digital detector technology dedicated to x-ray imaging of the uncompressed breast (breast CT or BCT) for breast cancer screening and diagnosis. This CT technology--tracing back to initial studies in the 1970s--allows some of the limitations of mammography to be overcome, keeping the levels of radiation dose to the radiosensitive breast glandular tissue similar to that of two-view mammography for the same breast size and composition. This paper presents an evaluation of the research efforts carried out in the invention, development, and improvement of BCT with dedicated scanners with state-of-the-art technology, including initial steps toward commercialization, after more than a decade of R&D in the laboratory and/or in the clinic. The intended focus here is on the technological/engineering aspects of BCT and on outlining advantages and limitations as reported in the related literature. Prospects for future research in this field are discussed. PMID:26127031

  11. Computational and Practical Aspects of Drug Repositioning.

    PubMed

    Oprea, Tudor I; Overington, John P

    2015-01-01

    The concept of the hypothesis-driven or observational-based expansion of the therapeutic application of drugs is very seductive. This is due to a number of factors, such as lower cost of development, higher probability of success, near-term clinical potential, patient and societal benefit, and also the ability to apply the approach to rare, orphan, and underresearched diseases. Another highly attractive aspect is that the "barrier to entry" is low, at least in comparison to a full drug discovery operation. The availability of high-performance computing, and databases of various forms have also enhanced the ability to pose reasonable and testable hypotheses for drug repurposing, rescue, and repositioning. In this article we discuss several factors that are currently underdeveloped, or could benefit from clearer definition in articles presenting such work. We propose a classification scheme, the drug repositioning evidence level (DREL), for all drug repositioning projects, according to the level of scientific evidence. DREL ranges from zero, which refers to predictions that lack any experimental support, to four, which refers to drugs approved for the new indication. We also present a set of simple concepts that can allow rapid and effective filtering of hypotheses, leading to a focus on those that are most likely to lead to practical safe applications of an existing drug. Some promising repurposing leads for malaria (DREL-1) and amoebic dysentery (DREL-2) are discussed. PMID:26241209
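
    The DREL scale lends itself to a direct encoding; in the sketch below (ours) only the two endpoint labels come from the abstract, while the intermediate labels and the example compounds are illustrative guesses:

        # Sketch of the DREL scale; levels 0 and 4 are defined in the abstract,
        # levels 1-3 are illustrative guesses, and the compounds are invented.
        from enum import IntEnum

        class DREL(IntEnum):
            PREDICTION_ONLY = 0     # predictions lacking any experimental support
            IN_VITRO = 1            # e.g., in vitro activity evidence
            IN_VIVO = 2             # e.g., animal-model evidence
            CLINICAL = 3            # human clinical evidence
            APPROVED = 4            # approved for the new indication

        hypotheses = {"compound A": DREL.IN_VITRO, "compound B": DREL.PREDICTION_ONLY}
        actionable = [name for name, lvl in hypotheses.items() if lvl >= DREL.IN_VITRO]
        print(actionable)           # ['compound A']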

  12. Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Sawamiphakdi, K.

    1984-01-01

    A higher order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included. The post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not exhibit shear locking even when its aspect ratio is increased to the order of 10 to the 8th power. Two sample problems are given to illustrate the analysis capability of the shell element.

  13. Computational Aspects of Heat Transfer in Structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M. (Compiler)

    1982-01-01

    Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.

  14. Computational Aspects of Feedback in Neural Circuits

    PubMed Central

    Maass, Wolfgang; Joshi, Prashant; Sontag, Eduardo D

    2007-01-01

    It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also throw new light on the
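
    The computational claim, that a readout trained by simple regression and fed back into the circuit can maintain information beyond a fading memory, can be illustrated with a toy rate-based reservoir (a deliberately simplified stand-in of ours for the paper's detailed spiking microcircuit models; all sizes and gains are arbitrary):

        # Toy reservoir with a linear readout intended to be fed back into the
        # circuit; the readout is trained by least squares under teacher forcing.
        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 200, 500
        W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # recurrent weights
        w_fb = rng.normal(0.0, 1.0, N)                  # feedback weights

        target = np.sin(np.linspace(0.0, 8.0 * np.pi, T))  # signal to sustain
        x, states = np.zeros(N), []
        for t in range(T):
            x = np.tanh(W @ x + w_fb * target[t])       # teacher-forced feedback
            states.append(x.copy())
        states = np.array(states)

        # Trained readout; at run time z = w_out @ x would replace target[t] above.
        w_out, *_ = np.linalg.lstsq(states, target, rcond=None)
        print(np.abs(states @ w_out - target).max())    # small training error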

  15. Aspects of computer vision in surgical endoscopy

    NASA Astrophysics Data System (ADS)

    Rodin, Vincent; Ayache, Alain; Berreni, N.

    1993-09-01

    This work is related to a project in medical robotics applied to surgical endoscopy, led in collaboration with Doctor Berreni of the Saint Roch nursing home in Perpignan, France. Following Doctor Berreni's advice, two aspects of endoscopic color image processing have been singled out: (1) aiding diagnosis through the automatic detection of diseased areas after a learning phase, and (2) 3D reconstruction of the analyzed cavity by using a zoom.

  16. Computational aspects of Gaussian beam migration

    SciTech Connect

    Hale, D.

    1992-01-01

    The computational efficiency of Gaussian beam migration depends on the solution of two problems: (1) computation of complex-valued beam times and amplitudes in Cartesian (x,z) coordinates, and (2) limiting computations to only those (x,z) coordinates within a region where beam amplitudes are significant. The first problem can be reduced to a particular instance of a class of closest-point problems in computational geometry, for which efficient solutions, such as the Delaunay triangulation, are well known. Delaunay triangulation of sampled points along a ray enables the efficient location of that point on the raypath that is closest to any point (x,z) at which beam times and amplitudes are required. Although Delaunay triangulation provides an efficient solution to this closest-point problem, a simpler solution, also presented in this paper, may be sufficient and more easily extended for use in 3-D Gaussian beam migration. The second problem is easily solved by decomposing the subsurface image into a coarse grid of square cells. Within each cell, simple and efficient loops over (x,z) coordinates may be used. Because the region in which beam amplitudes are significant may be difficult to represent with simple loops over (x,z) coordinates, I use recursion to move from cell to cell, until the entire region defined by the beam has been covered. Benchmark tests of a computer program implementing these solutions suggest that the cost of Gaussian beam migration is comparable to that of migration via explicit depth extrapolation in the frequency-space domain. For the data sizes and computer programs tested here, the explicit method was faster. However, as data size was increased, the computation time for Gaussian beam migration grew more slowly than that for the explicit method.
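
    In the spirit of the "simpler solution" mentioned above (though not necessarily the one in the paper), the closest raypath sample for each image point can also be found with a k-d tree over the sampled ray; a minimal sketch of ours with invented coordinates:

        # Nearest sampled raypath point for image locations (x, z), using a k-d
        # tree as a simple stand-in for the closest-point structures in the paper.
        import numpy as np
        from scipy.spatial import cKDTree

        t = np.linspace(0.0, 1.0, 200)
        ray = np.column_stack([2000.0 * t, 1500.0 * np.sqrt(t)])  # curved raypath (m)
        tree = cKDTree(ray)

        xz = np.array([[500.0, 400.0], [1500.0, 1100.0]])  # image points to evaluate
        dist, idx = tree.query(xz)     # index of the nearest raypath sample
        print(idx, dist)               # selects which beam time/amplitude to use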

  1. Analytical and Computational Aspects of Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Bilevel problem formulations have received considerable attention as an approach to multidisciplinary optimization in engineering. We examine the analytical and computational properties of one such approach, collaborative optimization. The resulting system-level optimization problems suffer from inherent computational difficulties due to the bilevel nature of the method. Most notably, it is impossible to characterize and hence identify solutions of the system-level problems because the standard first-order conditions for solutions of constrained optimization problems do not hold. The analytical features of the system-level problem make it difficult to apply conventional nonlinear programming algorithms. Simple examples illustrate the analysis and the algorithmic consequences for optimization methods. We conclude with additional observations on the practical implications of the analytical and computational properties of collaborative optimization.
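
    For reference, the bilevel structure under discussion can be written schematically as follows (our notation, not copied from the paper): the system level drives interdisciplinary targets z, and each constraint value is itself the optimum of a subsystem problem.

        % Schematic collaborative optimization structure (notation illustrative):
        \min_{z} \; f(z)
        \quad \text{subject to} \quad J_i^{*}(z) = 0, \qquad i = 1, \dots, N,
        % where each discipline i solves its own matching problem
        J_i^{*}(z) \;=\; \min_{x_i} \; \tfrac{1}{2}\,\| x_i - z \|^2
        \quad \text{subject to} \quad c_i(x_i) \ge 0 .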

  2. Central control element expands computer capability

    NASA Technical Reports Server (NTRS)

    Easton, R. A.

    1975-01-01

    Redundant processing and multiprocessing modes can be obtained from one computer by using a logic configuration. This configuration serves as a central control element that can automatically alternate between a high-capacity multiprocessing mode and a high-reliability redundant mode, using dynamic mode switching in real time.

  3. Finite element computation with parallel VLSI

    NASA Technical Reports Server (NTRS)

    Mcgregor, J.; Salama, M.

    1983-01-01

    This paper describes a parallel processing computer consisting of a 16-bit microcomputer as a master processor which controls and coordinates the activities of 8086/8087 VLSI chip set slave processors working in parallel. The hardware is inexpensive and can be flexibly configured and programmed to perform various functions. This makes it a useful research tool for the development of, and experimentation with, parallel mathematical algorithms. Application of the hardware to computational tasks involved in the finite element analysis method is demonstrated by the generation and assembly of beam finite element stiffness matrices. A number of possible schemes for the implementation of N elements on N or n processors (N greater than n) are described, and their speedup factors are determined as a function of the number of available parallel processors.

  4. Solving finite element equations on concurrent computers

    NASA Technical Reports Server (NTRS)

    Nour-Omid, B.; Raefsky, A.; Lyzenga, G.

    1987-01-01

    This paper discusses the development of a concurrent algorithm for the solution of systems of equations arising in finite element applications. The approach is based on a hybrid of a direct elimination method and preconditioned conjugate gradient iteration. Two different preconditioners are used: diagonal scaling and a concurrent implementation of incomplete LU factorization. First, an automatic procedure is used to partition the finite element mesh into substructures. The particular mesh partition is chosen to minimize an estimate of the cost of evaluating the solution using this algorithm on a concurrent computer. These procedures are implemented in a finite element program on the JPL/Caltech MARK III hypercube computer. An overview of the structure of this program is presented. The performance of the solution method is demonstrated with the aid of a number of numerical test runs, and its advantages for concurrent implementations are discussed. Efficiency and speed-up factors over sequential machines are highlighted for the numerical examples.
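
    In serial form, the first of the two preconditioners, diagonal scaling, combined with conjugate gradient iteration looks as follows (a minimal sketch of ours; the paper's contribution is the concurrent, substructured version, which this does not show):

        # Minimal preconditioned conjugate gradient with diagonal (Jacobi) scaling.
        # Serial sketch only; the paper pairs this with mesh partitioning on a
        # hypercube, which is not shown here.
        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=200):
            Minv = 1.0 / np.diag(A)              # diagonal scaling preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = Minv * r
            p = z.copy()
            for _ in range(max_iter):
                Ap = A @ p
                alpha = (r @ z) / (p @ Ap)
                x += alpha * p
                r_new = r - alpha * Ap
                if np.linalg.norm(r_new) < tol:
                    break
                z_new = Minv * r_new
                beta = (r_new @ z_new) / (r @ z)
                p = z_new + beta * p
                r, z = r_new, z_new
            return x

        A = np.diag([4.0, 5.0, 6.0]) + np.ones((3, 3))   # SPD test matrix
        b = np.array([1.0, 2.0, 3.0])
        print(pcg(A, b), np.linalg.solve(A, b))          # the two should agree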

  5. Plane Smoothers for Multiblock Grids: Computational Aspects

    NASA Technical Reports Server (NTRS)

    Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane

    1999-01-01

    Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched to resolve a boundary layer. One of the most efficient approaches to yielding robust methods is the combination of standard coarsening with alternating-direction plane relaxation in three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.

  6. New developments in the CREAM Computing Element

    NASA Astrophysics Data System (ADS)

    Andreetto, Paolo; Bertocco, Sara; Capannini, Fabio; Cecchi, Marco; Dorigo, Alvise; Frizziero, Eric; Gianelle, Alessio; Mezzadri, Massimo; Monforte, Salvatore; Prelz, Francesco; Rebatto, David; Sgaravatto, Massimo; Zangrando, Luigi

    2012-12-01

    The EU-funded project EMI aims at providing unified, standardized, easy-to-install software for distributed computing infrastructures. CREAM is one of the middleware products in the EMI middleware distribution: it implements a Grid job management service which allows the submission, management and monitoring of computational jobs to local resource management systems. In this paper we discuss some new features being implemented in the CREAM Computing Element. The implementation of the EMI Execution Service (EMI-ES) specification (an agreement in the EMI consortium on the interfaces and protocols to be used to enable computational job submission and management across technologies) is one of the new functions being implemented. New developments also focus on the High Availability (HA) area, to improve performance, scalability, availability and fault tolerance.

  7. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension, we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  8. Benchmarking: More Aspects of High Performance Computing

    SciTech Connect

    Ravindrudu, Rahul

    2004-01-01

    ...pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking algorithm will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed as such from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the algorithms are BLAS algorithms, which assume all data to be in memory. This is the reason the out-of-core results and the OpenMP thread results were presented separately and no attempt was made to combine them. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.
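
    The left-looking versus right-looking distinction concerns the order in which block updates of the LU factorization are applied. Below is a compact right-looking blocked LU of ours, without the pivoting and distributed BLAS-3 machinery of HPL, purely to illustrate the update pattern:

        # Right-looking blocked LU without pivoting (illustration of the update
        # pattern only; HPL uses row pivoting, BLAS-3 kernels and MPI distribution).
        import numpy as np

        def blocked_lu_right_looking(A, nb):
            A = A.copy()
            n = A.shape[0]
            for k in range(0, n, nb):
                e = min(k + nb, n)
                # Factor the diagonal block in place (unblocked LU).
                for j in range(k, e - 1):
                    A[j + 1:e, j] /= A[j, j]
                    A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
                if e < n:
                    # Panel: L21 = A21 * U11^{-1}, swept column by column.
                    for j in range(k, e):
                        A[e:, j] /= A[j, j]
                        A[e:, j + 1:e] -= np.outer(A[e:, j], A[j, j + 1:e])
                    # U12 = L11^{-1} * A12, then the right-looking trailing update.
                    L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
                    A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])
                    A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
            return A  # packed unit-lower L and upper U

        rng = np.random.default_rng(1)
        M = rng.random((8, 8)) + 8 * np.eye(8)   # diagonally dominant, safe unpivoted
        LU = blocked_lu_right_looking(M, 3)
        L = np.tril(LU, -1) + np.eye(8)
        U = np.triu(LU)
        print(np.allclose(L @ U, M))             # True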

  9. Computational aspects in mechanical modeling of the articular cartilage tissue.

    PubMed

    Mohammadi, Hadi; Mequanint, Kibret; Herzog, Walter

    2013-04-01

    This review focuses on the modeling of articular cartilage (at the tissue level), chondrocyte mechanobiology (at the cell level) and a combination of both in a multiscale computation scheme. The primary objective is to evaluate the advantages and disadvantages of conventional models implemented to study the mechanics of the articular cartilage tissue and chondrocytes. From monophasic material models as the simplest form to more complicated multiscale theories, these approaches have been frequently used to model articular cartilage and have contributed significantly to modeling joint mechanics, addressing and resolving numerous issues regarding cartilage mechanics and function. It should be noted that attentiveness is important when using different modeling approaches, as the choice of the model limits the applications available. In this review, we discuss the conventional models applicable to some of the mechanical aspects of articular cartilage such as lubrication, swelling pressure and chondrocyte mechanics and address some of the issues associated with the current modeling approaches. We then suggest future pathways for a more realistic modeling strategy as applied for the simulation of the mechanics of the cartilage tissue using multiscale and parallelized finite element method.

  10. The case for biological quantum computer elements

    NASA Astrophysics Data System (ADS)

    Baer, Wolfgang; Pizzi, Rita

    2009-05-01

    An extension to von Neumann's analysis of quantum theory suggests self-measurement is a fundamental process of Nature. By mapping the quantum computer to the brain architecture we will argue that the cognitive experience results from a measurement of a quantum memory maintained by biological entities. The insight provided by this mapping suggests quantum effects are not restricted to small atomic and nuclear phenomena but are an integral part of our own cognitive experience, and further that the architecture of a quantum computer system parallels that of a conscious brain. We will then review the suggestions for biological quantum elements in basic neural structures and address the de-coherence objection by arguing for a self-measurement event model of Nature. We will argue that to first order approximation the universe is composed of isolated self-measurement events, which guarantees coherence. Controlled de-coherence is treated as the input/output interactions between quantum elements of a quantum computer and the quantum memory maintained by biological entities cognizant of the quantum calculation results. Lastly we will present stem-cell based neuron experiments conducted by one of us with the aim of demonstrating the occurrence of quantum effects in living neural networks, and discuss future research projects intended to reach this objective.

  11. Physical aspects of computing the flow of a viscous fluid

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1984-01-01

    One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.

  12. HYDRA, A finite element computational fluid dynamics code: User manual

    SciTech Connect

    Christon, M.A.

    1995-06-01

    HYDRA is a finite element code which has been developed specifically to attack the class of transient, incompressible, viscous, computational fluid dynamics problems which are predominant in the world that surrounds us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method which make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise constituted by DYNA3D and NIKE3D. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to try to achieve a high level of performance, rather than just performing a port.

  13. CREAM Computing Element: a status update

    NASA Astrophysics Data System (ADS)

    Andreetto, Paolo; Bertocco, Sara; Capannini, Fabio; Cecchi, Marco; Dorigo, Alvise; Frizziero, Eric; Gianelle, Alessio; Mezzadri, Massimo; Monforte, Salvatore; Prelz, Francesco; Rebatto, David; Sgaravatto, Massimo; Zangrando, Luigi

    2012-12-01

    The European Middleware Initiative (EMI) project aims to deliver a consolidated set of middleware products based on the four major middleware providers in Europe - ARC, dCache, gLite and UNICORE. The CREAM (Computing Resource Execution And Management) Service, a service for job management operation at the Computing Element (CE) level, is a software product which is part of the EMI middleware distribution. In this paper we discuss some new functionality in the CREAM CE introduced with the first EMI major release (EMI-1, codename Kebnekaise). The integration with the Argus authorization service is one of these implementations: the use of a unique authorization system, besides simplifying overall management, also avoids inconsistent authorization decisions. Improved support for complex deployment scenarios (e.g. for sites having multiple CE head nodes and/or heterogeneous resources) is another new achievement. Improved support for resource allocation in a multi-core environment and initial support for version 2.0 of the GLUE specification for resource publication are other new functionalities introduced with the first EMI release.

  14. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding, Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
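
    In control terms, the claimed apparatus amounts to a feedback loop between power sensors and a throttling actuator; here is a schematic sketch of ours (all interfaces are invented placeholders, not APIs from the patent or any real system):

        # Schematic power-capping loop; sensor and actuator interfaces are invented
        # placeholders, not taken from the patent or any real system.
        def throttle_loop(read_power_watts, set_frequency, freqs, budget_watts, steps):
            level = len(freqs) - 1                 # start at the highest frequency
            for _ in range(steps):
                usage = read_power_watts()
                if usage > budget_watts and level > 0:
                    level -= 1                     # over budget: back off one step
                elif usage < 0.9 * budget_watts and level < len(freqs) - 1:
                    level += 1                     # headroom: speed back up
                set_frequency(freqs[level])

        # Toy demonstration with a fake sensor whose power scales with frequency:
        freqs = [1.0, 1.5, 2.0, 2.5]               # GHz
        state = {"f": freqs[-1]}
        throttle_loop(lambda: 40.0 * state["f"],   # fake sensor: watts ~ frequency
                      lambda f: state.update(f=f),
                      freqs, budget_watts=80.0, steps=10)
        print(state["f"])                          # settles at the budget (2.0 GHz)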

  15. Computational aspects of growth-induced instabilities through eigenvalue analysis

    NASA Astrophysics Data System (ADS)

    Javili, A.; Dortdivanlioglu, B.; Kuhl, E.; Linder, C.

    2015-09-01

    The objective of this contribution is to establish a computational framework to study growth-induced instabilities. The common approach towards growth-induced instabilities is to decompose the deformation multiplicatively into its growth and elastic part. Recently, this concept has been employed in computations of growing continua and has proven to be extremely useful to better understand the material behavior under growth. While finite element simulations seem to be capable of predicting the behavior of growing continua, they often cannot naturally capture the instabilities caused by growth. The accepted strategy to provoke growth-induced instabilities is therefore to perturb the solution of the problem, which indeed results in geometric instabilities in the form of wrinkles and folds. However, this strategy is intrinsically subjective as the user is prescribing the perturbations and the simulations are often highly perturbation-dependent. We propose a different strategy that is inherently suitable for this problem, namely eigenvalue analysis. The main advantages of eigenvalue analysis are that first, no arbitrary, artificial perturbations are needed and second, it is, in general, independent of the time step size. Therefore, the solution obtained by this methodology is not subjective and thus, is generic and reproducible. Equipped with eigenvalue analysis, we are able to compute precisely the critical growth to initiate instabilities. Furthermore, this strategy allows us to compare different finite elements for this family of problems. Our results demonstrate that linear elements perform strikingly poorly, as compared to quadratic elements.
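
    The essence of the strategy can be shown at the matrix level: track the smallest eigenvalue of the tangent stiffness as the growth parameter increases and flag the first sign change (a schematic sketch of ours with a synthetic two-degree-of-freedom stiffness, not the authors' finite element implementation):

        # Schematic bifurcation detection: the critical growth parameter g* is where
        # the smallest eigenvalue of the tangent stiffness K(g) changes sign.
        # K(g) here is synthetic; in the paper K comes from the FE discretization.
        import numpy as np

        def tangent_stiffness(g):
            K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])   # baseline stiffness
            Kg = np.array([[2.0, 0.0], [0.0, 1.0]])     # destabilizing growth term
            return K0 - g * Kg

        gs = np.linspace(0.0, 5.0, 501)
        lam = np.array([np.linalg.eigvalsh(tangent_stiffness(g))[0] for g in gs])
        crit = gs[np.argmax(lam < 0.0)]                  # first g with lambda_min < 0
        print(crit)   # critical growth; the associated eigenvector is the mode shape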

  16. On Undecidability Aspects of Resilient Computations and Implications to Exascale

    SciTech Connect

    Rao, Nageswara S

    2014-01-01

    Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.

  17. Vitamins and trace elements: practical aspects of supplementation.

    PubMed

    Berger, Mette M; Shenkin, Alan

    2006-09-01

    The role of micronutrients in parenteral nutrition includes the following: (1) Whenever artificial nutrition is indicated, micronutrients, i.e., vitamins and trace elements, should be given from the first day of artificial nutritional support. (2) Testing blood levels of vitamins and trace elements in acutely ill patients is of very limited value. By using sensible clinical judgment, it is possible to manage patients with only a small amount of laboratory testing. (3) Patients with major burns or major trauma and those with acute renal failure who are on continuous renal replacement therapy or dialysis quickly develop acute deficits in some micronutrients, and immediate supplementation is essential. (4) Other groups at risk are cancer patients, but also pregnant women with hyperemesis and people with anorexia nervosa or other malnutrition or malabsorption states. (5) Clinicians need to treat severe deficits before they become clinical deficiencies. If a patient develops a micronutrient deficiency state while in care, then there has been a severe failure of care. (6) In the early acute phase of recovery from critical illness, where artificial nutrition is generally not indicated, there may still be a need to deliver micronutrients to specific categories of very sick patients. (7) Ideally, trace element preparations should provide a low-manganese product for all and a manganese-free product for certain patients with liver disease. (8) High losses through excretion should be minimized by infusing micronutrients slowly, over as long a period as possible. To avoid interactions, it would be ideal to infuse trace elements and vitamins separately: the trace elements over an initial 12-h period and the vitamins over the next 12-h period. (9) Multivitamin and trace element preparations suitable for most patients requiring parenteral nutrition are widely available, but individual patients may require additional supplements or smaller amounts of certain micronutrients

  18. Computational Aspects of Data Assimilation and the ESMF

    NASA Technical Reports Server (NTRS)

    daSilva, A.

    2003-01-01

    Developing advanced data assimilation applications is a daunting scientific challenge. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.

  19. Control aspects of quantum computing using pure and mixed states.

    PubMed

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J

    2012-10-13

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond that, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: this is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems.

  1. Concurrent multiresolution finite element: formulation and algorithmic aspects

    NASA Astrophysics Data System (ADS)

    Tang, Shan; Kopacz, Adrian M.; Chan O'Keeffe, Stephanie; Olson, Gregory B.; Liu, Wing Kam

    2013-12-01

    A multiresolution concurrent theory for heterogeneous materials is proposed with novel macro-scale and micro-scale constitutive laws that include the plastic yield function at different length scales. In contrast to conventional plasticity, the plastic flow at the micro zone depends on the plastic strain gradient. The consistency conditions at the macro and micro zones result in a set of algebraic equations. Using appropriate boundary conditions, the finite element discretization was derived from a variational principle with extra degrees of freedom for the micro zones. In collaboration with LSTC Inc., the degrees of freedom at the micro zone and their related history variables have been added to LS-DYNA, and the 3D multiresolution theory has been implemented. Shear band propagation and a large scale simulation of a shear-driven ductile fracture process were carried out. Our results show that the proposed multiresolution theory, in combination with the parallel implementation in LS-DYNA, can capture the effects of the microstructure on shear band propagation and allows for realistic modeling of the ductile fracture process.

  2. Cohesive surface model for fracture based on a two-scale formulation: computational implementation aspects

    NASA Astrophysics Data System (ADS)

    Toro, S.; Sánchez, P. J.; Podestá, J. M.; Blanco, P. J.; Huespe, A. E.; Feijóo, R. A.

    2016-10-01

    The paper describes the computational aspects and numerical implementation of a two-scale cohesive surface methodology developed for analyzing fracture in heterogeneous materials with complex micro-structures. This approach can be categorized as a semi-concurrent model using the representative volume element concept. A variational multi-scale formulation of the methodology has been previously presented by the authors. Subsequently, the formulation has been generalized and improved in two aspects: (i) cohesive surfaces have been introduced at both scales of analysis, they are modeled with a strong discontinuity kinematics (new equations describing the insertion of the macro-scale strains, into the micro-scale and the posterior homogenization procedure have been considered); (ii) the computational procedure and numerical implementation have been adapted for this formulation. The first point has been presented elsewhere, and it is summarized here. Instead, the main objective of this paper is to address a rather detailed presentation of the second point. Finite element techniques for modeling cohesive surfaces at both scales of analysis (FE^2 approach) are described: (i) finite elements with embedded strong discontinuities are used for the macro-scale simulation, and (ii) continuum-type finite elements with high aspect ratios, mimicking cohesive surfaces, are adopted for simulating the failure mechanisms at the micro-scale. The methodology is validated through numerical simulation of a quasi-brittle concrete fracture problem. The proposed multi-scale model is capable of unveiling the mechanisms that lead from the material degradation phenomenon at the meso-structural level to the activation and propagation of cohesive surfaces at the structural scale.

  3. Optically intraconnected computer employing dynamically reconfigurable holographic optical element

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A. (Inventor)

    1992-01-01

    An optically intraconnected computer and a reconfigurable holographic optical element employed therein. The basic computer comprises a memory for holding a sequence of instructions to be executed; logic for accessing the instructions in sequence; logic for determining, for each instruction, the function to be performed and the effective address thereof; a plurality of individual elements on a common support substrate optimized to perform certain logical sequences employed in executing the instructions; and element selection logic, connected to the logic determining the function to be performed for each instruction, for determining the class of each function and for causing the instruction to be executed by those elements which perform the associated logical sequences in an optimum manner. In the optically intraconnected version, the element selection logic is adapted for transmitting and switching signals to the elements optically.

  5. Computational aspects of steel fracturing pertinent to naval requirements.

    PubMed

    Matic, Peter; Geltmacher, Andrew; Rath, Bhakta

    2015-03-28

    Modern high strength and ductile steels are a key element of US Navy ship structural technology. The development of these alloys spurred the development of modern structural integrity analysis methods over the past 70 years. Strength and ductility provided the designers and builders of navy surface ships and submarines with the opportunity to reduce ship structural weight, increase hull stiffness, increase damage resistance, improve construction practices and reduce maintenance costs. This paper reviews how analytical and computational tools, driving simulation methods and experimental techniques, were developed to provide ongoing insights into the material, damage and fracture characteristics of these alloys. The need to understand alloy fracture mechanics provided unique motivations to measure and model performance from structural to microstructural scales. This was done while accounting for the highly nonlinear behaviours of both materials and underlying fracture processes. Theoretical methods, data acquisition strategies, computational simulation and scientific imaging were applied to increasingly smaller scales and complex materials phenomena under deformation. Knowledge gained about fracture resistance was used to meet minimum fracture initiation, crack growth and crack arrest characteristics as part of overall structural integrity considerations. PMID:25713445

  7. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method for linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issues of the adaptive finite element method, validating the application of the new methodology to fracture mechanics by computing demonstration problems and comparing the resulting stress intensity factors to analytical results.

  8. Algorithms for computer detection of symmetry elements in molecular systems.

    PubMed

    Beruski, Otávio; Vidal, Luciano N

    2014-02-01

    Simple procedures for the location of proper and improper rotations and reflection planes are presented. The search is performed with a molecule divided into subsets of symmetrically equivalent atoms (SEA), which are analyzed separately as if they were a single molecule. This approach is advantageous in many respects. For instance, in molecules that are symmetric rotors, the number of atoms and the inertia tensor of the SEA provide a straightforward way to find proper rotations of any order. The algorithms are invariant to the molecular orientation and their computational cost is low, because the main information required to find symmetry elements is the interatomic distances and the principal moments of the SEA. For example, our Fortran implementation, running on a single processor, took only a few seconds to locate all 120 symmetry operations of the large and highly symmetrical fullerene C720, belonging to the Ih point group. Finally, we show how the interatomic distance matrix of a slightly unsymmetrical molecule is used to symmetrize its geometry. PMID:24403016
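
    The core trick in this abstract is the partition into symmetrically equivalent atoms. Below is a minimal sketch of that first step, assuming equivalence is screened by element and by the sorted list of interatomic distances; the function name and rounding scheme are hypothetical, not taken from the paper's Fortran implementation.

```python
import numpy as np

def equivalent_atom_sets(symbols, coords, decimals=4):
    """Partition atoms into sets of symmetrically equivalent atoms (SEA).

    Two atoms are grouped when they share the same element and the same
    sorted list of distances to all other atoms (rounded to `decimals`),
    a necessary condition for being related by a point-group operation.
    """
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sets = {}
    for i, (sym, row) in enumerate(zip(symbols, d)):
        fp = (sym, tuple(np.round(np.sort(row), decimals)))
        sets.setdefault(fp, []).append(i)
    return list(sets.values())

# Water: the two hydrogens form one SEA, the oxygen its own set.
symbols = ["O", "H", "H"]
coords = [[0.0, 0.0, 0.117], [0.0, 0.757, -0.467], [0.0, -0.757, -0.467]]
print(equivalent_atom_sets(symbols, coords))  # [[0], [1, 2]]
```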

  9. The Impact of Instructional Elements in Computer-Based Instruction

    ERIC Educational Resources Information Center

    Martin, Florence; Klein, James D.; Sullivan, Howard

    2007-01-01

    This study investigated the effects of several elements of instruction (objectives, information, practice, examples and review) when they were combined in a systematic manner. College students enrolled in a computer literacy course used one of six different versions of a computer-based lesson delivered on the web to learn about input, processing,…

  10. Acceleration of matrix element computations for precision measurements

    SciTech Connect

    Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; Ye, Zhenyu

    2014-11-25

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.
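
    The first speed-up described above rests on replacing pseudorandom points with a low-discrepancy sequence. The following self-contained illustration applies that idea to a toy integrand using SciPy's scrambled Sobol generator; the integrand and sample size are arbitrary stand-ins, not the matrix-element integrals of the paper.

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    """Smooth test integrand on the unit hypercube; exact integral is 1."""
    return np.prod(1.5 * np.sqrt(x), axis=1)  # each factor integrates to 1

d, n = 5, 2**12
rng = np.random.default_rng(0)

mc_est = f(rng.random((n, d))).mean()          # plain MC, error ~ n**-0.5
sobol = qmc.Sobol(d=d, scramble=True, seed=0)  # low-discrepancy points
qmc_est = f(sobol.random(n)).mean()            # error decays closer to ~ n**-1

print(f"pseudorandom MC: {mc_est:.5f}  Sobol QMC: {qmc_est:.5f}  exact: 1.0")
```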

  11. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
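
    The essence of such an emulator is a cost model fitted once to calibration runs and then evaluated cheaply. Below is a minimal sketch under an assumed banded-solver cost form; the feature set, constants, and data are hypothetical, and SCOPE's actual calibration model is not reproduced.

```python
import numpy as np

# Hypothetical calibration data: number of equations N, mean semi-bandwidth b,
# and measured CPU seconds from benchmark runs of the analysis code.
N   = np.array([500., 1000., 2000., 4000., 8000.])
b   = np.array([30., 40., 55., 75., 100.])
cpu = np.array([0.9, 2.6, 9.8, 38.0, 155.0])

# Assume cpu ~ c1*N + c2*N*b**2 (typical for banded direct solvers) and fit
# the machine- and code-dependent coefficients once by least squares.
A = np.column_stack([N, N * b**2])
(c1, c2), *_ = np.linalg.lstsq(A, cpu, rcond=None)

predict = lambda n_, b_: c1 * n_ + c2 * n_ * b_**2
print(f"predicted CPU for N=16000, b=140: {predict(16000.0, 140.0):.1f} s")
```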

  12. Introducing the Practical Aspects of Computational Chemistry to Undergraduate Chemistry Students

    ERIC Educational Resources Information Center

    Pearson, Jason K.

    2007-01-01

    Various efforts are being made to introduce the different physical aspects and uses of computational chemistry to undergraduate chemistry students. A new laboratory approach that demonstrates all such aspects via experiments has been devised for this purpose.

  13. Finite element dynamic analysis on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lambiotte, J. J., Jr.

    1978-01-01

    Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
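
    For readers unfamiliar with the explicit scheme mentioned here, the following is a minimal central-difference integrator for M u'' + K u = f(t) with a lumped (diagonal) mass matrix, so each step reduces to the matrix-vector products that pipeline machines such as the STAR-100 handle well; the two-DOF data are illustrative only.

```python
import numpy as np

def central_difference(M, K, f, u0, v0, dt, steps):
    """Explicit central-difference time integration of M u'' + K u = f(t)."""
    minv = 1.0 / np.diag(M)  # lumped mass: inversion is trivial
    # Fictitious step u_{-1} from initial displacement, velocity, acceleration.
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * minv * (f(0.0) - K @ u0)
    u = u0.copy()
    for n in range(steps):
        u_next = 2.0 * u - u_prev + dt**2 * minv * (f(n * dt) - K @ u)
        u_prev, u = u, u_next
    return u

# Two-DOF example; stability requires dt < 2/omega_max (about 1.15 here).
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(central_difference(M, K, lambda t: np.zeros(2),
                         np.array([0.01, 0.0]), np.zeros(2), 1e-3, 1000))
```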

  14. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
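
    A standard example of a separable symmetric function of the principal stretches is the Ogden strain-energy form. The sketch below evaluates it for an incompressible uniaxial stretch; the material constants are generic rubber-like values assumed for illustration, not the propellant data from the report.

```python
def ogden_energy(stretches, mu=(0.63e6, 1.2e3), alpha=(1.3, 5.0)):
    """Ogden strain energy W = sum_i mu_i/alpha_i (l1^a_i + l2^a_i + l3^a_i - 3),
    a separable symmetric function of the principal stretches l1, l2, l3."""
    l1, l2, l3 = stretches
    return sum(m / a * (l1**a + l2**a + l3**a - 3.0)
               for m, a in zip(mu, alpha))

# Incompressible uniaxial extension: l1 = L, l2 = l3 = L**-0.5.
L = 1.5
print(ogden_energy((L, L**-0.5, L**-0.5)))
```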

  15. Some aspects of the computer simulation of conduction heat transfer and phase change processes

    SciTech Connect

    Solomon, A. D.

    1982-04-01

    Various aspects of phase change processes in materials are discussed, including computer modeling, validation of results, and sensitivity. In addition, the possible incorporation of cognitive activities in computational heat transfer is examined.

  16. Modeling of rolling element bearing mechanics. Computer program user's manual

    NASA Astrophysics Data System (ADS)

    Greenhill, Lyn M.; Merchant, David H.

    1994-10-01

    This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code, which determines the quasistatic response to external loads or displacements of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It comprises two main programs: the Preprocessor for Bearing Analysis (PREBAN), which creates the input files for the main analysis program, and the Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. This report addresses input instructions for and features of the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  18. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.

  19. A computational study of nodal-based tetrahedral element behavior.

    SciTech Connect

    Gullerud, Arne S.

    2010-09-01

    This report explores the behavior of nodal-based tetrahedral elements on six sample problems, and compares their solution to that of a corresponding hexahedral mesh. The problems demonstrate that while certain aspects of the solution field for the nodal-based tetrahedrons provide good quality results, the pressure field tends to be of poor quality. Results appear to be strongly affected by the connectivity of the tetrahedral elements. Simulations that rely on the pressure field, such as those which use material models that are dependent on the pressure (e.g. equation-of-state models), can generate erroneous results. Remeshing can also be strongly affected by these issues. The nodal-based test elements as they currently stand need to be used with caution to ensure that their numerical deficiencies do not adversely affect critical values of interest.

  20. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, thesis/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name and a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, thesis/dissertations, etc.) that were not able to be independently tracked using available resources and thus were not included in this listing.

  1. Computational design aspects of a NASP nozzle/afterbody experiment

    NASA Technical Reports Server (NTRS)

    Ruffin, Stephen M.; Venkatapathy, Ethiraj; Keener, Earl R.; Nagaraj, N.

    1989-01-01

    This paper highlights the influence of computational methods on design of a wind tunnel experiment which generically models the nozzle/afterbody flow field of the proposed National Aerospace Plane. The rectangular slot nozzle plume flow field is computed using a three-dimensional, upwind, implicit Navier-Stokes solver. Freestream Mach numbers of 5.3, 7.3, and 10 are investigated. Two-dimensional parametric studies of various Mach numbers, pressure ratios, and ramp angles are used to help determine model loads and afterbody ramp angle and length. It was found that the center of pressure on the ramp occurs at nearly the same location for all ramp angles and test conditions computed. Also, to prevent air liquefaction, it is suggested that a helium-air mixture be used as the jet gas for the highest Mach number test case.

  2. The spectral-element method, Beowulf computing, and global seismology.

    PubMed

    Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen

    2002-11-29

    The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content.

  3. A stochastic method for computing hadronic matrix elements

    SciTech Connect

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; Drach, Vincent; Jansen, Karl; Hadjiyiannakou, Kyriakos; Renner, Dru B.

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions that is an alternative to the typically used sequential method and offers more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal-to-noise ratio, suggesting that the stochastic method can be extended to large volumes, providing an efficient approach to compute hadronic matrix elements and form factors.

  4. Transient Finite Element Computations on a Variable Transputer System

    NASA Technical Reports Server (NTRS)

    Smolinski, Patrick J.; Lapczyk, Ireneusz

    1993-01-01

    A parallel program to analyze transient finite element problems was written and implemented on a system of transputer processors. The program uses the explicit time integration algorithm which eliminates the need for equation solving, making it more suitable for parallel computations. An interprocessor communication scheme was developed for arbitrary two dimensional grid processor configurations. Several 3-D problems were analyzed on a system with a small number of processors.

  5. Implicit extrapolation methods for multilevel finite element computations

    SciTech Connect

    Jung, M.; Ruede, U.

    1994-12-31

    The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that the algorithm differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore neither requires uniform meshes nor global regularity assumptions. In the paper the authors analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results are compared to higher-order finite element solutions.

  6. Some Aspects of uncertainty in computational fluid dynamics results

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1991-01-01

    Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainties are discussed and some recommendations are made for the quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
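
    A sensitivity-based uncertainty analysis of the kind recommended here reduces, at first order, to gradients combined in quadrature. A toy sketch follows; the "CFD output" is a stand-in function, and the step size and inputs are assumptions.

```python
import numpy as np

def sensitivity_uncertainty(f, p, dp, h=1e-6):
    """First-order uncertainty estimate: central-difference sensitivities
    df/dp_i, with input uncertainties dp_i combined in quadrature."""
    p = np.asarray(p, dtype=float)
    grad = np.empty_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        grad[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return np.sqrt(np.sum((grad * np.asarray(dp)) ** 2))

# Toy output: a drag-like quantity 0.5*rho*U^2 with uncertain rho and U.
f = lambda p: 0.5 * p[0] * p[1] ** 2
print(sensitivity_uncertainty(f, [1.2, 30.0], [0.05, 1.0]))
```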

  7. Technical Aspects of Computer-Assisted Instruction in Chinese.

    ERIC Educational Resources Information Center

    Cheng, Chin-Chaun; Sherwood, Bruce

    1981-01-01

    Computer assisted instruction in Chinese is considered in relation to the design and recognition of Chinese characters, speech synthesis of the standard Chinese language, and the identification of Chinese tone. The PLATO work has shifted its orientation from provision of supplementary courseware to implementation of independent lessons and…

  8. Huber's M-estimation in relative GPS positioning: computational aspects

    NASA Astrophysics Data System (ADS)

    Chang, X.-W.; Guo, Y.

    2005-08-01

    When GPS signal measurements have outliers, using least squares (LS) estimation is likely to give poor position estimates. One of the typical approaches to handle this problem is to use robust estimation techniques. We study the computational issues of Huber’s M-estimation applied to relative GPS positioning. First, for code-based relative positioning, we use simulation results to show that Newton’s method usually converges faster than the iteratively reweighted least squares (IRLS) method, which is often used in geodesy for computing robust estimates of parameters. Then, for code- and carrier-phase-based relative positioning, we present a recursive modified Newton method to compute Huber’s M-estimates of the positions. The structures of the model are exploited to make the method efficient, and orthogonal transformations are used to ensure numerical reliability of the method. Economical use of computer memory is also taken into account in designing the method. Simulation results show that the method is effective.
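
    For orientation, the IRLS scheme referred to in this abstract can be written in a few lines for a generic linear model. The sketch below is a simplified, textbook version (robust scale from the median absolute deviation; the tuning constant 1.345 is the usual choice), not the recursive modified Newton method the paper develops.

```python
import numpy as np

def huber_irls(A, y, k=1.345, iters=50, tol=1e-10):
    """Huber M-estimate of x in y ~ A x via iteratively reweighted LS.

    Residuals larger than k*scale are downweighted by k*scale/|r|, which
    reproduces Huber's influence function at convergence.
    """
    x, *_ = np.linalg.lstsq(A, y, rcond=None)      # LS starting point
    for _ in range(iters):
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust scale (MAD)
        w = np.minimum(1.0, k * s / (np.abs(r) + 1e-12))
        sw = np.sqrt(w)                            # weighted LS solve
        x_new, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
y = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
y[::10] += 5.0                 # inject outliers
print(huber_irls(A, y))        # close to [1, -2, 0.5] despite the outliers
```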

  9. Acceleration of matrix element computations for precision measurements

    DOE PAGESBeta

    Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; Ye, Zhenyu

    2014-11-25

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.

  10. Compute Element and Interface Box for the Hazard Detection System

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Khanoyan, Garen; Stern, Ryan A.; Some, Raphael R.; Bailey, Erik S.; Carson, John M.; Vaughan, Geoffrey M.; Werner, Robert A.; Salomon, Phil M.; Martin, Keith E.; Spaulding, Matthew D.; Luna, Michael E.; Motaghedi, Shui H.; Trawny, Nikolas; Johnson, Andrew E.; Ivanov, Tonislav I.; Huertas, Andres; Whitaker, William D.; Goldberg, Steven B.

    2013-01-01

    The Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is building a sensor that enables a spacecraft to evaluate autonomously a potential landing area to generate a list of hazardous and safe landing sites. It will also provide navigation inputs relative to those safe sites. The Hazard Detection System Compute Element (HDS-CE) box combines a field-programmable gate array (FPGA) board for sensor integration and timing, with a multicore computer board for processing. The FPGA does system-level timing and data aggregation, and acts as a go-between, removing the real-time requirements from the processor and labeling events with a high resolution time. The processor manages the behavior of the system, controls the instruments connected to the HDS-CE, and services the "heavy lifting" computational requirements for analyzing the potential landing spots.

  11. Computational aspects of sensitivity calculations in linear transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, W. H.; Haftka, R. T.

    1991-01-01

    The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear structural transient response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.
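
    The finite-difference sensitivities discussed above are easy to demonstrate on a closed-form response, where the trade-off between truncation error (large steps) and condition error (small steps) is directly visible; the oscillator "response" below is a toy stand-in, not one of the paper's test problems.

```python
import numpy as np

def peak_response(p):
    """Peak displacement of an undamped 1-DOF oscillator of stiffness p,
    unit mass, unit initial velocity: max of (v0/omega) sin(omega t)."""
    return 1.0 / np.sqrt(p)

def fd_sensitivity(f, p, h, central=True):
    """Forward or central finite-difference sensitivity df/dp."""
    if central:
        return (f(p + h) - f(p - h)) / (2.0 * h)
    return (f(p + h) - f(p)) / h

p = 4.0
exact = -0.5 * p**-1.5          # analytic d/dp of p**-0.5
for h in (1e-2, 1e-5, 1e-8):
    err_f = fd_sensitivity(peak_response, p, h, central=False) - exact
    err_c = fd_sensitivity(peak_response, p, h, central=True) - exact
    print(f"h={h:.0e}  forward err={err_f:+.2e}  central err={err_c:+.2e}")
```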

  12. Theoretical aspects of light-element alloys under extremely high pressure

    NASA Astrophysics Data System (ADS)

    Feng, Ji

    In this Dissertation, we present theoretical studies on the geometric and electronic structure of light-element alloys under high pressure. The first three Chapters are concerned with specific compounds, namely SiH4, CaLi2 and BexLi1-x, and associated structural and electronic phenomena arising in our computational studies. In the fourth Chapter, we attempt to develop a unified view of the relationship between the electronic and geometric structure of light-element alloys under pressure, by focusing on the states near the Fermi level in these metals.

  13. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1989-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.

  14. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.

  15. Computational aspects of the continuum quaternionic wave functions for hydrogen

    SciTech Connect

    Morais, J.

    2014-10-15

    Over the past few years considerable attention has been given to the role played by the Hydrogen Continuum Wave Functions (HCWFs) in quantum theory. The HCWFs arise via the method of separation of variables for the time-independent Schrödinger equation in spherical coordinates. The HCWFs are composed of products of a radial part involving associated Laguerre polynomials multiplied by exponential factors and an angular part that is the spherical harmonics. In the present paper we introduce the continuum wave functions for hydrogen within quaternionic analysis ((R)QHCWFs), a result which is not available in the existing literature. In particular, the underlying functions are of three real variables and take on values in either the reduced or the full quaternions (identified, respectively, with R^3 and R^4). We prove that the (R)QHCWFs are orthonormal to one another. The representation of these functions in terms of the HCWFs is explicitly given, from which several recurrence formulae for fast computer implementations can be derived. A summary of fundamental properties and further computation of the hydrogen-like atom transforms of the (R)QHCWFs are also discussed. We address all the above and explore some basic facts of the arising quaternionic function theory. As an application, we provide the reader with plot simulations that demonstrate the effectiveness of our approach. (R)QHCWFs are new in the literature and have some consequences that are now under investigation.
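
    The classical building block the paper starts from, a radial part combining an exponential factor with an associated Laguerre polynomial times a spherical harmonic, is easy to evaluate numerically. The sketch below computes the standard complex-valued hydrogen bound-state function in atomic units; the quaternionic construction itself is not reproduced here.

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre, sph_harm

def hydrogen_wavefunction(n, l, m, r, theta, phi):
    """psi_nlm(r, theta, phi) in atomic units: normalized Laguerre-type
    radial part times the spherical harmonic Y_lm."""
    rho = 2.0 * r / n
    norm = np.sqrt((2.0 / n) ** 3 * factorial(n - l - 1)
                   / (2.0 * n * factorial(n + l)))
    radial = (norm * np.exp(-rho / 2.0) * rho**l
              * genlaguerre(n - l - 1, 2 * l + 1)(rho))
    # SciPy's sph_harm takes the azimuthal angle first, then the polar one.
    return radial * sph_harm(m, l, phi, theta)

# Probability density of the 2p_0 state at a sample point.
print(abs(hydrogen_wavefunction(2, 1, 0, r=1.0, theta=0.3, phi=0.0)) ** 2)
```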

  16. Behavioral and computational aspects of language and its acquisition

    NASA Astrophysics Data System (ADS)

    Edelman, Shimon; Waterfall, Heidi

    2007-12-01

    One of the greatest challenges facing the cognitive sciences is to explain what it means to know a language, and how the knowledge of language is acquired. The dominant approach to this challenge within linguistics has been to seek an efficient characterization of the wealth of documented structural properties of language in terms of a compact generative grammar: ideally, the minimal necessary set of innate, universal, exceptionless, highly abstract rules that jointly generate all and only the observed phenomena and are common to all human languages. We review developmental, behavioral, and computational evidence that seems to favor an alternative view of language, according to which linguistic structures are generated by a large, open set of constructions of varying degrees of abstraction and complexity, which embody both form and meaning and are acquired through socially situated experience in a given language community, by probabilistic learning algorithms that resemble those at work in other cognitive modalities.

  17. Computational and theoretical aspects of biomolecular structure and dynamics

    SciTech Connect

    Garcia, A.E.; Berendzen, J.; Catasti, P.; Chen, X.

    1996-09-01

    This is the final report for a project that sought to evaluate and develop theoretical and computational bases for designing, performing, and analyzing experimental studies in structural biology. Simulations of large biomolecular systems in solution, hydrophobic interactions, and quantum chemical calculations for large systems have been performed. We have developed a code that implements the Fast Multipole Algorithm (FMA), which scales linearly in the number of particles simulated in a large system. New methods have been developed for the analysis of multidimensional NMR data in order to obtain high-resolution atomic structures. These methods have been applied to the study of DNA sequences in the human centromere, sequences linked to genetic diseases, and the dynamics and structure of myoglobin.

  18. Aspects of Quantum Computing with Polar Paramagnetic Molecules

    NASA Astrophysics Data System (ADS)

    Karra, Mallikarjun; Friedrich, Bretislav

    2015-05-01

    Since the original proposal by DeMille, arrays of optically trapped ultracold polar molecules have been considered among the most promising prototype platforms for the implementation of a quantum computer. The qubit of a molecular array is realized by a single dipolar molecule entangled via its dipole-dipole interaction with the rest of the array's molecules. A superimposed inhomogeneous electric field precludes the quenching of the body-fixed dipole moments by rotation, and a time-dependent external field controls the qubits to perform gate operations. Much like our previous work, in which we considered the simplest cases of a polar ^1Σ molecule and a symmetric top molecule, here we consider an X^2Π_3/2 polar molecule (exemplified by the OH radical) which, by virtue of its nonzero electronic spin and orbital angular momenta, is, in addition, paramagnetic. We demonstrate entanglement tuning by evaluating the concurrence (and the requisite frequencies needed for gate operations) between two such molecules in the presence of varying electric and magnetic fields. Finally, we discuss the conditions required for achieving qubit addressability (transition frequency difference, Δω, as compared with the concomitant Stark and Zeeman broadening) and high fidelity. International Max Planck Research School - Functional Interfaces in Physics and Chemistry.
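
    The concurrence mentioned in this abstract is Wootters' two-qubit entanglement measure, which is straightforward to evaluate for any 4x4 density matrix. A minimal numerical sketch follows; the Bell-state test case is illustrative, whereas the density matrices of the actual study would come from the molecular pendular-state Hamiltonian.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence C = max(0, l1 - l2 - l3 - l4), where l_i are
    the decreasing square roots of the eigenvalues of rho * rho_tilde and
    rho_tilde = (sy x sy) rho* (sy x sy)."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    yy = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Maximally entangled Bell state (|00> + |11>)/sqrt(2): concurrence 1.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
print(concurrence(np.outer(bell, bell.conj())))   # ~1.0
```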

  19. SYMBMAT: Symbolic computation of quantum transition matrix elements

    NASA Astrophysics Data System (ADS)

    Ciappina, M. F.; Kirchner, T.

    2012-08-01

    We have developed a set of Mathematica notebooks to compute symbolically quantum transition matrices relevant for atomic ionization processes. The utilization of a symbolic language allows us to obtain analytical expressions for the transition matrix elements required in charged-particle and laser induced ionization of atoms. Additionally, by using a few simple commands, it is possible to export these symbolic expressions to standard programming languages, such as Fortran or C, for the subsequent computation of differential cross sections or other observables. One of the main drawbacks in the calculation of transition matrices is the tedious algebraic work required when initial states other than the simple hydrogenic 1s state need to be considered. Using these notebooks the work is dramatically reduced and it is possible to generate exact expressions for a large set of bound states. We present explicit examples of atomic collisions (in First Born Approximation and Distorted Wave Theory) and laser-matter interactions (within the Dipole and Strong Field Approximations and different gauges) using both hydrogenic wavefunctions and Slater-Type Orbitals with arbitrary nlm quantum numbers as initial states. Catalogue identifier: AEMI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 71 628 No. of bytes in distributed program, including test data, etc.: 444 195 Distribution format: tar.gz Programming language: Mathematica Computer: Single machines using Linux or Windows (with cores with any clock speed, cache memory and bits in a word) Operating system: Any OS that supports Mathematica. The notebooks have been tested under Windows and Linux and with versions 6.x, 7.x and 8.x Classification: 2.6 Nature of problem
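
    The kind of symbolic evaluation the notebooks automate can be illustrated with a single matrix element. The sketch below computes the hydrogen dipole element <2p_z| z |1s> in atomic units with SymPy; this Python example mirrors the idea only and is not the Mathematica code distributed with the paper.

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Normalized hydrogen wave functions in atomic units.
psi_1s  = sp.exp(-r) / sp.sqrt(sp.pi)
psi_2pz = r * sp.exp(-r / 2) * sp.cos(theta) / (4 * sp.sqrt(2 * sp.pi))

# <2p_z| z |1s> with z = r*cos(theta) and volume element r^2 sin(theta).
integrand = psi_2pz * (r * sp.cos(theta)) * psi_1s * r**2 * sp.sin(theta)
result = sp.integrate(integrand, (r, 0, sp.oo),
                      (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
print(sp.simplify(result), float(result))   # 128*sqrt(2)/243 ~ 0.7449
```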

  20. Massively parallel computation of RCS with finite elements

    NASA Technical Reports Server (NTRS)

    Parker, Jay

    1993-01-01

    One of the promising combinations of finite element approaches for scattering problems uses Whitney edge elements, spherical vector wave-absorbing boundary conditions, and bi-conjugate gradient solution for the frequency-domain near field. Each of these approaches may be criticized. Low-order elements require high mesh density, but also result in fast, reliable iterative convergence. Spherical wave-absorbing boundary conditions require additional space to be meshed beyond the most minimal near-space region, but result in fully sparse, symmetric matrices which keep storage and solution times low. Iterative solution is somewhat unpredictable and unfriendly to multiple right-hand sides, yet we find it to be uniformly fast on large problems to date, given the other two approaches. Implementation of these approaches on a distributed-memory, message-passing machine yields huge dividends, as full scalability to the largest machines appears assured and iterative solution times are well-behaved for large problems. We present times and solutions for computed RCS for a conducting cube and a composite permeability/conducting sphere on the Intel iPSC/860 with up to 16 processors solving over 200,000 unknowns. We estimate that problems of approximately 10 million unknowns, encompassing 1000 cubic wavelengths, may be attempted on a currently available 512-processor machine, but would be exceedingly tedious to prepare. The most severe bottlenecks are due to the slow rate of mesh generation on non-parallel machines and the large transfer time from such a machine to the parallel processor. One solution, in progress, is to create and then distribute a coarse mesh among the processors, followed by systematic refinement within each processor. Elimination of redundant node definitions at the mesh-partition surfaces, snap-to-surface post-processing of the resulting mesh for good modelling of curved surfaces, and load-balancing redistribution of new elements after the refinement are auxiliary
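
    The iterative kernel this abstract relies on needs only matrix-vector products, which is what makes it scale on distributed-memory machines. Below is a serial sketch with SciPy's stabilized bi-conjugate gradient on a random sparse system standing in for an edge-element matrix; it is not an actual Whitney-element discretization.

```python
import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import bicgstab

n = 2000
rng = np.random.default_rng(0)
A = sparse.random(n, n, density=5e-3, random_state=rng, format='csr')
A = A + A.T + 10.0 * sparse.identity(n)   # symmetric, diagonally dominant
b = rng.normal(size=n)

# BiCGSTAB touches A only through products A @ v, so the same loop
# parallelizes naturally once the matrix rows are distributed.
x, info = bicgstab(A, b)
print(info, np.linalg.norm(A @ x - b))    # info == 0 means converged
```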

  1. Incorporating Knowledge of Legal and Ethical Aspects into Computing Curricula of South African Universities

    ERIC Educational Resources Information Center

    Wayman, Ian; Kyobe, Michael

    2012-01-01

    As students in computing disciplines are introduced to modern information technologies, numerous unethical practices also escalate. With the increase in stringent legislations on use of IT, users of technology could easily be held liable for violation of this legislation. There is however lack of understanding of social aspects of computing, and…

  2. FLASH: A finite element computer code for variably saturated flow

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  4. Computationally efficient finite element evaluation of natural patellofemoral mechanics.

    PubMed

    Fitzpatrick, Clare K; Baldwin, Mark A; Rullkoetter, Paul J

    2010-12-01

    Finite element methods have been applied to evaluate in vivo joint behavior, new devices, and surgical techniques but have typically been applied to a small or single-subject cohort. Anatomic variability necessitates the use of many subject-specific models or probabilistic methods in order to adequately evaluate a device or procedure for a population. However, a fully deformable finite element model can be computationally expensive, prohibiting large multisubject or probabilistic analyses. The aim of this study was to develop a group of subject-specific models of the patellofemoral joint and evaluate trade-offs in analysis time and accuracy with fully deformable and rigid body articular cartilage representations. Finite element models of eight subjects were used to tune a pressure-overclosure relationship during a simulated deep flexion cycle. Patellofemoral kinematics and contact mechanics were evaluated and compared between a fully deformable and a rigid body analysis. An additional eight subjects were used to determine the validity of the rigid body pressure-overclosure relationship as a subject-independent parameter. There was good agreement in predicted kinematics and contact mechanics between deformable and rigid analyses for both the tuned and test groups. Root mean square differences in kinematics were less than 0.5 deg and 0.2 mm for both groups throughout flexion. Differences in contact area and peak and average contact pressures averaged 5.4%, 9.6%, and 3.8%, respectively, for the tuned group and 6.9%, 13.1%, and 6.4%, respectively, for the test group, with no significant differences between the two groups. There was a 95% reduction in computational time with the rigid body analysis as compared with the deformable analysis. The tuned pressure-overclosure relationship derived from the patellofemoral analysis was also applied to tibiofemoral (TF) articular cartilage in a group of eight subjects. Differences in contact area and peak and average contact
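
    The pressure-overclosure relationship at the heart of the rigid-body simplification maps interpenetration of the undeformed surfaces to a contact pressure. Below is a sketch of one plausible functional form; both the exponential shape and the constants are illustrative assumptions, not the tuned relationship from the study.

```python
import numpy as np

def pressure_overclosure(overclosure, k=50.0, d0=0.005):
    """Contact pressure (MPa) as a smooth function of overclosure (mm):
    zero for separated surfaces, growing exponentially with penetration."""
    oc = np.asarray(overclosure, dtype=float)
    return np.where(oc > 0.0, k * (np.exp(oc / d0) - 1.0), 0.0)

print(pressure_overclosure([-0.001, 0.0, 0.002, 0.01]))
```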

  5. A variational multiscale finite element method for monolithic ALE computations of shock hydrodynamics using nodal elements

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Scovazzi, G.

    2016-06-01

    We present a monolithic arbitrary Lagrangian-Eulerian (ALE) finite element method for computing highly transient flows with strong shocks. We use a variational multiscale (VMS) approach to stabilize a piecewise-linear Galerkin formulation of the equations of compressible flows, and an entropy artificial viscosity to capture strong solution discontinuities. Our work demonstrates the feasibility of VMS methods for highly transient shock flows, an area of research for which the VMS literature is extremely scarce. In addition, the proposed monolithic ALE method is an alternative to the more commonly used Lagrangian+remap methods, in which, at each time step, a Lagrangian computation is followed by mesh smoothing and remap (conservative solution interpolation). Lagrangian+remap methods are the methods of choice in shock hydrodynamics computations because they provide nearly optimal mesh resolution in proximity of shock fronts. However, Lagrangian+remap methods are not well suited for imposing inflow and outflow boundary conditions. These issues offer an additional motivation for the proposed approach, in which we first perform the mesh motion, and then the flow computations using the monolithic ALE framework. The proposed method is second-order accurate and stable, as demonstrated by extensive numerical examples in two and three space dimensions.

  6. Overview of adaptive finite element analysis in computational geodynamics

    NASA Astrophysics Data System (ADS)

    May, D. A.; Schellart, W. P.; Moresi, L.

    2013-10-01

    The use of numerical models to develop insight and intuition into the dynamics of the Earth over geological time scales is a firmly established practice in the geodynamics community. As our depth of understanding grows, and hand-in-hand with improvements in analytical techniques and higher resolution remote sensing of the physical structure and state of the Earth, there is a continual need to develop more efficient, accurate and reliable numerical techniques. This is necessary to ensure that we can meet the challenge of generating robust conclusions, interpretations and predictions from improved observations. In adaptive numerical methods, the desire is generally to maximise the quality of the numerical solution for a given amount of computational effort. Neither of these terms has a unique, universal definition, but typically there is a trade off between the number of unknowns we can calculate to obtain a more accurate representation of the Earth, and the resources (time and computational memory) required to compute them. In the engineering community, this topic has been extensively examined using the adaptive finite element (AFE) method. Recently, the applicability of this technique to geodynamic processes has started to be explored. In this review we report on the current status and usage of spatially adaptive finite element analysis in the field of geodynamics. The objective of this review is to provide a brief introduction to the area of spatially adaptive finite analysis, including a summary of different techniques to define spatial adaptation and of different approaches to guide the adaptive process in order to control the discretisation error inherent within the numerical solution. An overview of the current state of the art in adaptive modelling in geodynamics is provided, together with a discussion pertaining to the issues related to using adaptive analysis techniques and perspectives for future research in this area. Additionally, we also provide a
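
    At its core, the adaptive loop this review surveys alternates solve, estimate, mark, refine. Below is a toy version of the mark-and-refine step on a 1-D mesh with a simple maximum-based marking rule; error estimation is problem-specific and omitted, and the names and the 0.5 fraction are assumptions.

```python
import numpy as np

def refine(nodes, err, frac=0.5):
    """Split every element whose error indicator exceeds `frac` times the
    largest indicator; returns the new sorted node vector."""
    thresh = frac * err.max()
    mids = [0.5 * (nodes[i] + nodes[i + 1])
            for i, e in enumerate(err) if e >= thresh]
    return np.sort(np.concatenate([nodes, mids]))

nodes = np.linspace(0.0, 1.0, 6)
err = np.array([0.01, 0.02, 0.5, 0.9, 0.05])   # per-element indicators
print(refine(nodes, err))
```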

  7. Impact of computer advances on future finite elements computations. [for aircraft and spacecraft design

    NASA Technical Reports Server (NTRS)

    Fulton, Robert E.

    1985-01-01

    Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and to parallel processing associated with the Finite Element Machine project, both sponsored by NASA, as well as to a near-term strategy for a distributed structural analysis capability based on relational data base management software and parallel computers for a future structural analysis system.

  8. Human-computer interaction: psychological aspects of the human use of computing.

    PubMed

    Olson, Gary M; Olson, Judith S

    2003-01-01

    Human-computer interaction (HCI) is a multidisciplinary field in which psychology and other social sciences unite with computer science and related technical fields with the goal of making computing systems that are both useful and usable. It is a blend of applied and basic research, both drawing from psychological research and contributing new ideas to it. New technologies continuously challenge HCI researchers with new options, as do the demands of new audiences and uses. A variety of usability methods have been developed that draw upon psychological principles. HCI research has expanded beyond its roots in the cognitive processes of individual users to include social and organizational processes involved in computer usage in real environments as well as the use of computers in collaboration. HCI researchers need to be mindful of the longer-term changes brought about by the use of computing in a variety of venues. PMID:12209025

  10. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    This conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. It evaluates the computational efficiency of different computer architectural types in terms of relative cost and computing time.

  11. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    NASA Astrophysics Data System (ADS)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases. Cardiovascular diseases result from complex flow pattern and fatigue of the vessel wall and are prevalent causes leading to high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid Structure Interaction (FSI) have become efficient tools in modeling the individual hemodynamics and biomechanics as well as their interaction in the human arteries. The computations allow non-invasively simulating patient-specific physical parameters of the blood flow and the vessel wall needed for an efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact and individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governed by the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models providing a physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the

  12. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR-100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR-100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long vector processing, the STAR-100 was slower than comparable scalar computers. On this basis it is concluded that, in order for pipeline computers to impact the economic feasibility of large nonlinear analyses, it is absolutely essential that algorithms be devised to improve the efficiency of element-level computations.

  13. A computer program for calculating aerodynamic characteristics of low aspect-ratio wings with partial leading-edge separation

    NASA Technical Reports Server (NTRS)

    Mehrotra, S. C.; Lan, C. E.

    1978-01-01

    The necessary information for using a computer program to predict distributed and total aerodynamic characteristics of low aspect ratio wings with partial leading-edge separation is presented. The flow is assumed to be steady and inviscid. The wing boundary condition is formulated by the Quasi-Vortex-Lattice method. The leading-edge separated vortices are represented by discrete free vortex elements which are aligned with the local velocity vector at their midpoints to satisfy the force-free condition. The wake behind the trailing edge is also force-free. The flow tangency boundary condition is satisfied on the wing, including the leading and trailing edges. The program is restricted to delta wings with zero thickness and no camber. It is written in the FORTRAN language and runs on the CDC 6600 computer.

  14. A finite element method for the computation of transonic flow past airfoils

    NASA Technical Reports Server (NTRS)

    Eberle, A.

    1980-01-01

    A finite element method for the computation of transonic flow with shocks past airfoils is presented, using the artificial viscosity concept for the local supersonic regime. Generally, classic element types do not meet the accuracy requirements of advanced numerical aerodynamics, so special attention must be given to the choice of an appropriate element. A series of computed pressure distributions exhibits the usefulness of the method.

  15. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonic problems are studied. They are respectively the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which
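
    The sub-triangle idea can be sketched in a few lines: subdividing a panel and evaluating the 1/r kernel only at sub-triangle centroids keeps every evaluation point away from the collocation point. The Python code below is a minimal, hypothetical illustration of that idea, not the report's implementation.

        import numpy as np

        def subdivide(tri):
            # Split a triangle into four sub-triangles via edge midpoints.
            a, b, c = tri
            ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
            return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

        def area(tri):
            a, b, c = tri
            return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

        def integrate_kernel(tri, x, levels=6):
            # Centroid rule on each sub-triangle; centroids never coincide
            # with the collocation point x, so 1/r stays regular everywhere.
            tris = [tuple(np.asarray(v, float) for v in tri)]
            for _ in range(levels):
                tris = [s for t in tris for s in subdivide(t)]
            return sum(area(t) / (4 * np.pi * np.linalg.norm(sum(t) / 3 - x))
                       for t in tris)

        tri = (np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0]))
        print(integrate_kernel(tri, x=np.array([0., 0, 0])))  # finite despite 1/r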

  16. Segment-based vs. element-based integration for mortar methods in computational contact mechanics

    NASA Astrophysics Data System (ADS)

    Farah, Philipp; Popp, Alexander; Wall, Wolfgang A.

    2015-01-01

    Mortar finite element methods provide a very convenient and powerful discretization framework for geometrically nonlinear applications in computational contact mechanics, because they allow for a variationally consistent treatment of contact conditions (mesh tying, non-penetration, frictionless or frictional sliding) despite the fact that the underlying contact surface meshes are non-matching and possibly also geometrically non-conforming. However, one of the major issues with regard to mortar methods is the design of adequate numerical integration schemes for the resulting interface coupling terms, i.e. curve integrals for 2D contact problems and surface integrals for 3D contact problems. The way in which mortar integration is performed crucially influences the accuracy of the overall numerical procedure as well as the computational efficiency of contact evaluation. Basically, two different types of mortar integration schemes, termed segment-based integration and element-based integration here, predominate in the literature. While almost the entire existing literature focuses on one of these two mortar integration schemes without questioning this choice, the intention of this paper is to provide a comprehensive and unbiased comparison. The theoretical aspects covered here include the choice of integration rule, the treatment of the boundaries of the contact zone, higher-order interpolation and frictional sliding. Moreover, a new hybrid scheme is proposed, which beneficially combines the advantages of segment-based and element-based mortar integration. Several numerical examples are presented for a detailed and critical evaluation of the overall performance of the different schemes within several well-known benchmark problems of computational contact mechanics.
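
    The essential difference between the two schemes can be demonstrated in one dimension: a mortar integrand is typically only piecewise smooth on a slave element, so a single Gauss rule across the kink (element-based) carries an integration error, while splitting the integration at the kink (segment-based) is exact with a low-order rule. A small Python sketch under these assumptions (the kink position and the rules used are invented for illustration):

        import numpy as np

        kink = 0.3
        f = lambda x: np.abs(x - kink)    # kinked, piecewise-linear integrand

        def gauss(f, a, b, n):
            # n-point Gauss-Legendre rule mapped to [a, b].
            xi, w = np.polynomial.legendre.leggauss(n)
            x = 0.5 * (b - a) * xi + 0.5 * (a + b)
            return 0.5 * (b - a) * np.sum(w * f(x))

        exact = (kink**2 + (1 - kink)**2) / 2
        element_based = gauss(f, 0.0, 1.0, 5)                      # one rule across the kink
        segment_based = gauss(f, 0.0, kink, 2) + gauss(f, kink, 1.0, 2)

        print("element-based error:", abs(element_based - exact))  # noticeable
        print("segment-based error:", abs(segment_based - exact))  # ~machine zero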

  17. Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    2008-05-01

    The paper reviews the imminent architectural aspects of Grid Computing for the e-Science community, for scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high speed internet, safe authentication and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for those scientific and engineering research projects and business/commercial applications which are heavily compute intensive and/or require huge amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems and the treasure of knowledge available around the globe under the umbrella of knowledge sharing. Thus it helps realize the dream of a global village for the benefit of the e-Science community across the globe.

  18. Some Computational Aspects of the Brain Computer Interfaces Based on Inner Music

    PubMed Central

    Klonowski, Wlodzimierz; Duch, Wlodzisław; Perovic, Aleksandar; Jovanovic, Aleksandar

    2009-01-01

    We discuss the BCI based on inner tones and inner music. We had some success in the detection of inner tones, the imagined tones which are not sung aloud. Rather easily imagined and controlled, they offer a set of states usable for BCI, with high information capacity and high transfer rates. Imagination of sounds or musical tunes could provide a multicommand language for BCI, as if using natural language. Moreover, this approach could be used to test musical abilities. Such a BCI could be superior when there is a need for a broader command language. Some computational estimates and unresolved difficulties are presented. PMID:19503802

  19. A shell element for computing 3D eddy currents -- Applications to transformers

    SciTech Connect

    Guerin, C.; Tanneau, G.; Meunier, G.; Labie, P.; Ngnegueu, T.; Sacotte, M.

    1995-05-01

    A skin-depth-independent shell element to model thin conducting sheets is described in a finite element context. This element takes into account the field variation through the depth due to skin effect. The finite element formulation is first described, then boundary conditions at the edges of conducting shells and the possibility of describing non-conducting line gaps and holes are discussed. Finally, a computation of an earthing transformer model with an aluminium shield modelled with shell elements is presented.
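
    For orientation, the skin depth that such an element must remain independent of follows from delta = sqrt(2*rho/(omega*mu)). A minimal Python check for an aluminium shield at power frequency (the material values are textbook numbers, not taken from the paper):

        import math

        def skin_depth(resistivity, rel_permeability, frequency_hz):
            # Classical skin depth: delta = sqrt(2*rho / (omega * mu)).
            mu = 4e-7 * math.pi * rel_permeability
            omega = 2 * math.pi * frequency_hz
            return math.sqrt(2 * resistivity / (omega * mu))

        delta = skin_depth(resistivity=2.82e-8, rel_permeability=1.0, frequency_hz=50.0)
        print(f"skin depth of aluminium at 50 Hz: {delta * 1000:.1f} mm")  # ~12 mm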

  20. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.
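
    Of the remedies for a singular Hessian or information matrix, the simplest to show is a Levenberg-Marquardt-style shift of the Newton step. The Python sketch below is a generic illustration of that idea, not necessarily the method proposed in the paper:

        import numpy as np

        def damped_newton_step(grad, hess, damping=1e-6):
            # Solve (H + lambda*I) dx = -g; the shift keeps the step well
            # defined even when H is singular or ill conditioned.
            n = len(grad)
            return np.linalg.solve(hess + damping * np.eye(n), -grad)

        # Rank-deficient Hessian: the second parameter is unidentifiable.
        H = np.array([[2.0, 0.0],
                      [0.0, 0.0]])
        g = np.array([4.0, 0.0])
        print(damped_newton_step(g, H))   # finite step, approximately [-2, 0]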

  1. Grid generation for two-dimensional finite element flowfield computation

    NASA Technical Reports Server (NTRS)

    Tatum, K. E.

    1980-01-01

    The finite element method for fluid dynamics was used to develop a two dimensional mesh generation scheme. The method consists of shearing and conformal maps with upper and lower surfaces handled independently to allow sharp leading edges. The method also generates meshes of triangular or quadrilateral elements.

  2. 01010000 01001100 01000001 01011001: Play Elements in Computer Programming

    ERIC Educational Resources Information Center

    Breslin, Samantha

    2013-01-01

    This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…

  3. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T(sub par) of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
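
    The observation above is Amdahl's law: with a serial fraction s, the speedup on n processors is bounded by 1/(s + (1 - s)/n). A one-function Python illustration (the 10% serial fraction is a made-up example, not a measured value from CSTEM or METCAN):

        def amdahl_speedup(serial_fraction, n_processors):
            # Upper bound on speedup when part of the code is sequential.
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

        for p in (8, 32, 128):
            print(f"{p:4d} processors -> speedup <= {amdahl_speedup(0.10, p):.1f}x")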

  4. Improved plug valve computer-aided design of plug element

    SciTech Connect

    Wordin, J.J.

    1990-02-01

    The purpose of this document is to present derivations of equations for the design of a plug valve and to present a computer program which performs the design calculations based on the derivations. The valve is based on a plug formed from a tractrix of revolution called a pseudosphere. It is of interest to be able to calculate various parameters for the plug for design purposes. For example, the surface area, volume, and center of gravity are important to determine friction and wear of the valve. A computer program in BASIC has been written to perform the design calculations. The appendix contains a computer program listing and verifications of results using approximation methods. A sample run is included along with necessary computer commands to run the program. 1 fig.
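
    The quantities the program computes can be checked numerically from the parametric tractrix x = a(t - tanh t), y = a sech t, revolved about its axis: the lateral area is the integral of 2*pi*y ds and the volume is the integral of pi*y^2 dx. A short Python/NumPy sketch offered as an independent check, not the BASIC program from the report:

        import numpy as np

        def trapz(y, x):
            # Simple trapezoidal rule (portable across NumPy versions).
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        a = 1.0
        t = np.linspace(1e-3, 10.0, 40_000)
        x = a * (t - np.tanh(t))
        y = a / np.cosh(t)

        dxdt, dydt = np.gradient(x, t), np.gradient(y, t)
        ds = np.sqrt(dxdt**2 + dydt**2)

        surface = trapz(2 * np.pi * y * ds, t)     # lateral area of revolution
        volume = trapz(np.pi * y**2 * dxdt, t)     # enclosed volume

        print(surface, 2 * np.pi * a**2)   # half-pseudosphere: area -> 2*pi*a^2
        print(volume, np.pi * a**3 / 3)    # half-pseudosphere: volume -> pi*a^3/3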

  5. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
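
    Method (a) can be sketched compactly: the strain energy 0.5*u'Ku - f'u is minimized by conjugate gradients, which needs only matrix-vector products and therefore avoids the fill-in of factorization. A small self-contained Python example (the 3x3 system is invented for illustration):

        import numpy as np

        def conjugate_gradient(K, f, tol=1e-10):
            # Minimizes 0.5*u'Ku - f'u for SPD K; the minimizer solves Ku = f.
            u = np.zeros_like(f)
            r = f - K @ u                  # residual = -gradient of the energy
            p = r.copy()
            rs = r @ r
            for _ in range(len(f)):
                Kp = K @ p
                alpha = rs / (p @ Kp)
                u += alpha * p
                r -= alpha * Kp
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return u

        K = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])  # SPD stiffness
        f = np.array([1., 2., 3.])
        u = conjugate_gradient(K, f)
        print(u, np.allclose(K @ u, f))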

  6. Finite Element Method for Thermal Analysis. [with computer program

    NASA Technical Reports Server (NTRS)

    Heuser, J.

    1973-01-01

    A two- and three-dimensional, finite-element thermal-analysis program which handles conduction with internal heat generation, convection, radiation, specified flux, and specified temperature boundary conditions is presented. Elements used in the program are the triangle and tetrahedron for two- and three-dimensional analysis, respectively. The theory used in the program is developed, and several sample problems demonstrating the capability and reliability of the program are presented. A guide to using the program, description of the input cards, and program listing are included.
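
    For the two-dimensional case, the conduction matrix of the triangle element has a closed form that is easy to reproduce. A minimal Python sketch of the standard 3-node constant-gradient triangle with isotropic conductivity (an illustration, not the report's FORTRAN code):

        import numpy as np

        def triangle_conduction_matrix(xy, k):
            # Element conductance for a linear triangle with conductivity k.
            (x1, y1), (x2, y2), (x3, y3) = xy
            area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
            b = np.array([y2 - y3, y3 - y1, y1 - y2])   # shape-function gradients
            c = np.array([x3 - x2, x1 - x3, x2 - x1])
            return k / (4.0 * area) * (np.outer(b, b) + np.outer(c, c))

        Ke = triangle_conduction_matrix([(0, 0), (1, 0), (0, 1)], k=1.0)
        print(Ke)   # rows sum to zero: a uniform temperature produces no flux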

  7. Some aspects of statistical distribution of trace element concentrations in biomedical samples

    NASA Astrophysics Data System (ADS)

    Majewska, U.; Braziewicz, J.; Banaś, D.; Kubala-Kukuś, A.; Góźdź, S.; Pajek, M.; Zadrożna, M.; Jaskóła, M.; Czyżewski, T.

    1999-04-01

    Concentrations of trace elements in biomedical samples were studied using X-ray fluorescence (XRF), total reflection X-ray fluorescence (TRXRF) and particle-induced X-ray emission (PIXE) methods. The analytical methods used were compared in terms of their detection limits and applicability for studying trace elements in large populations of biomedical samples. As a result, the XRF and TRXRF methods were selected for the trace element concentration measurements in urine and full-term placenta samples from women. The measured trace element concentration distributions were found to be strongly asymmetric and described by the logarithmic-normal distribution. Such a distribution is expected for a random sequential process, which realistically models the level of trace elements in the studied biomedical samples. The importance and consequences of this finding are discussed, especially in the context of comparing concentration measurements in different populations of biomedical samples.
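
    For a log-normal population the natural summary statistics are the geometric mean and geometric standard deviation, obtained from the mean and standard deviation of the log-concentrations. A short Python sketch on synthetic data (hypothetical values, not the paper's measurements):

        import numpy as np

        rng = np.random.default_rng(1)
        conc = rng.lognormal(mean=0.5, sigma=0.8, size=500)  # synthetic sample

        log_c = np.log(conc)
        gm = np.exp(log_c.mean())          # geometric mean
        gsd = np.exp(log_c.std(ddof=1))    # geometric standard deviation

        # About 68% of a log-normal sample lies within [gm/gsd, gm*gsd].
        inside = np.mean((conc > gm / gsd) & (conc < gm * gsd))
        print(f"GM = {gm:.2f}, GSD = {gsd:.2f}, within one GSD band: {inside:.2f}")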

  8. Computational aspects of zonal algorithms for solving the compressible Navier-Stokes equations in three dimensions

    NASA Technical Reports Server (NTRS)

    Holst, T. L.; Thomas, S. D.; Kaynak, U.; Gundy, K. L.; Flores, J.; Chaderjian, N. M.

    1985-01-01

    Transonic flow fields about wing geometries are computed using an Euler/Navier-Stokes approach in which the flow field is divided into several zones. The flow field immediately adjacent to the wing surface is resolved with fine grid zones and solved using a Navier-Stokes algorithm. Flow field regions removed from the wing are resolved with less finely clustered grid zones and are solved with an Euler algorithm. Computational issues associated with this zonal approach, including data base management aspects, are discussed. Solutions are obtained that are in good agreement with experiment, including cases with significant wind tunnel wall effects. Additional cases with significant shock induced separation on the upper wing surface are also presented.

  9. Numerical algorithms for finite element computations on arrays of microprocessors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1981-01-01

    The development of a multicolored successive over relaxation (SOR) program for the finite element machine is discussed. The multicolored SOR method uses a generalization of the classical Red/Black grid point ordering for the SOR method. These multicolored orderings have the advantage of allowing the SOR method to be implemented as a Jacobi method, which is ideal for arrays of processors, but still enjoy the greater rate of convergence of the SOR method. The program solves a general second order self adjoint elliptic problem on a square region with Dirichlet boundary conditions, discretized by quadratic elements on triangular regions. For this general problem and discretization, six colors are necessary for the multicolored method to operate efficiently. The specific problem that was solved using the six color program was Poisson's equation; for Poisson's equation, three colors are necessary but six may be used. In general, the number of colors needed is a function of the differential equation, the region and boundary conditions, and the particular finite element used for the discretization.
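
    The two-color special case conveys the idea: for the 5-point Poisson stencil, points of one color depend only on neighbors of the other color, so each color can be updated all at once, Jacobi-style, while keeping SOR's convergence rate. A compact Python/NumPy sketch of red/black SOR (a two-color illustration, not the six-color quadratic-element program):

        import numpy as np

        n, omega = 64, 1.8
        h = 1.0 / (n + 1)
        u = np.zeros((n + 2, n + 2))   # homogeneous Dirichlet boundary
        f = np.ones((n + 2, n + 2))    # right-hand side of -Laplace(u) = f

        for sweep in range(2000):
            for color in (0, 1):       # all points of one color update at once
                for i in range(1, n + 1):
                    j0 = 1 + (i + color) % 2
                    cols = slice(j0, n + 1, 2)
                    u[i, cols] = (1 - omega) * u[i, cols] + omega * 0.25 * (
                        u[i - 1, cols] + u[i + 1, cols]
                        + u[i, j0 - 1:n:2] + u[i, j0 + 1:n + 2:2]
                        + h * h * f[i, cols])

        print("center value:", u[n // 2, n // 2])   # ~0.074 for f = 1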

  10. Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review.

    PubMed

    Bhattacharya, Preeti Tomar; Misra, Satya Ranjan; Hussain, Mohsina

    2016-01-01

    The human body requires certain essential elements in small quantities, and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases, because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets, with a developing preference for refined diets and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways, with influence on craniofacial development and growth and on the maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium, but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body, and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in the preservation of oral health and the progression of various oral diseases. PMID:27433374

  11. Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review

    PubMed Central

    Hussain, Mohsina

    2016-01-01

    The human body requires certain essential elements in small quantities, and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases, because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets, with a developing preference for refined diets and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways, with influence on craniofacial development and growth and on the maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium, but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body, and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in the preservation of oral health and the progression of various oral diseases. PMID:27433374

  12. Formulation and computational aspects of plasticity and damage models with application to quasi-brittle materials

    SciTech Connect

    Chen, Z.; Schreyer, H.L.

    1995-09-01

    The response of underground structures and transportation facilities under various external loadings and environments is critical for human safety as well as environmental protection. Since quasi-brittle materials such as concrete and rock are commonly used for underground construction, the constitutive modeling of these engineering materials, including post-limit behaviors, is one of the most important aspects in safety assessment. From experimental, theoretical, and computational points of view, this report considers the constitutive modeling of quasi-brittle materials in general and concentrates on concrete in particular. Based on the internal variable theory of thermodynamics, the general formulations of plasticity and damage models are given to simulate two distinct modes of microstructural changes, inelastic flow and degradation of material strength and stiffness, that identify the phenomenological nonlinear behaviors of quasi-brittle materials. The computational aspects of plasticity and damage models are explored with respect to their effects on structural analyses. Specific constitutive models are then developed in a systematic manner according to the degree of completeness. A comprehensive literature survey is made to provide the up-to-date information on prediction of structural failures, which can serve as a reference for future research.
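
    As a concrete instance of the plasticity half of such models, the classical one-dimensional return-mapping update (elastic predictor, plastic corrector) with linear isotropic hardening is sketched below in Python. This is a generic textbook scheme offered for illustration; it is not the specific constitutive model developed in the report:

        def radial_return_1d(strains, E=200e3, H=10e3, sigma_y0=250.0):
            # Strain-driven update: elastic predictor, then plastic corrector
            # whenever the trial stress violates the yield condition.
            eps_p, alpha, stresses = 0.0, 0.0, []
            for eps in strains:
                sig_trial = E * (eps - eps_p)                  # elastic predictor
                f = abs(sig_trial) - (sigma_y0 + H * alpha)    # yield function
                if f > 0.0:                                    # plastic corrector
                    dgamma = f / (E + H)
                    eps_p += dgamma * (1.0 if sig_trial > 0 else -1.0)
                    alpha += dgamma
                stresses.append(E * (eps - eps_p))
            return stresses

        strains = [i * 1e-4 for i in range(31)]          # monotonic pull to 0.3% strain
        print(round(radial_return_1d(strains)[-1], 1))   # ~266.7, past first yield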

  13. Computational discovery of regulatory elements in a continuous expression space

    PubMed Central

    2012-01-01

    Approaches for regulatory element discovery from gene expression data usually rely on clustering algorithms to partition the data into clusters of co-expressed genes. Gene regulatory sequences are then mined to find overrepresented motifs in each cluster. However, this ad hoc partition rarely fits the biological reality. We propose a novel method called RED2 that avoids data clustering by estimating motif densities locally around each gene. We show that RED2 detects numerous motifs not detected by clustering-based approaches, and that most of these correspond to characterized motifs. RED2 can be accessed online through a user-friendly interface. PMID:23186104

  14. Finite Element Models for Computing Seismic Induced Soil Pressures on Deeply Embedded Nuclear Power Plant Structures.

    SciTech Connect

    Xu, J.; Costantino, C.; Hofmayer, C.

    2006-06-26

    The paper discusses computations of seismic induced soil pressures using finite element models for deeply embedded and/or buried stiff structures, such as those appearing in the conceptual designs of structures for advanced reactors.

  15. Cells on biomaterials--some aspects of elemental analysis by means of electron probes.

    PubMed

    Tylko, G

    2016-02-01

    Electron probe X-ray microanalysis enables concomitant observation of specimens and analysis of their elemental composition. The method is attractive for engineers developing tissue-compatible biomaterials. Changes in the elemental composition of either the cells or the biomaterial can be determined according to well-established preparation and quantification procedures. However, qualitative and quantitative elemental analysis becomes more complicated when cells or thin tissue sections are deposited on biomaterials. X-ray spectra generated at the cell/tissue-biomaterial interface were modelled using a Monte Carlo simulation of a cell deposited on borosilicate glass. Enhanced electron backscattering from the borosilicate glass was noted until the thickness of the biological layer deposited on the substrate reached 1.25 μm. This resulted in a significant increase in the X-ray intensities typical of the elements present in the cellular part; the mean atomic number of the biomaterial determines the strength of this effect. When elements are present in the cells only, a positive linear relationship appears between X-ray intensities and cell thickness. Then, once the spatial dimensions of X-ray emission for the particular elements lie exclusively within the biological part, the X-ray intensities become constant. When the elements are present in both the cell and the biomaterial, X-ray intensities are registered for the biological part and the substrate simultaneously, leading to a negative linear relationship between X-ray intensities and cell thickness. In the case of the analysis of an element typical of the biomaterial, a strong decrease in X-ray emission is observed as a function of cell thickness, owing to X-ray absorption and to the excitation range being limited to the biological part rather than the substrate. Correction procedures for calculating element concentrations in thin films and coatings deposited on substrates are well established in

  16. Computer modeling of batteries from non-linear circuit elements

    NASA Technical Reports Server (NTRS)

    Waaben, S.; Federico, J.; Moskowitz, I.

    1983-01-01

    A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict the non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.

  17. Experience with automatic, dynamic load balancing and adaptive finite element computation

    SciTech Connect

    Wheat, S.R.; Devine, K.D.; Maccabe, A.B.

    1993-10-01

    Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library with which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.

  18. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    ERIC Educational Resources Information Center

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  19. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  20. A computer program for anisotropic shallow-shell finite elements using symbolic integration

    NASA Technical Reports Server (NTRS)

    Andersen, C. M.; Bowen, J. T.

    1976-01-01

    A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes). The triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, the geometric stiffness, the consistent mass, and the consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using the symbolic and algebraic manipulation language.
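
    The exact-integration idea is easy to reproduce with today's symbolic tools. The sketch below uses SymPy (rather than MACSYMA) to integrate bilinear shape-function products over the reference square exactly, in the spirit of the consistent mass matrix described above; it is an illustration, not the paper's shell element:

        import sympy as sp

        xi, eta = sp.symbols('xi eta')

        # Bilinear shape functions of the 4-node reference quadrilateral.
        N = [(1 - xi) * (1 - eta) / 4, (1 + xi) * (1 - eta) / 4,
             (1 + xi) * (1 + eta) / 4, (1 - xi) * (1 + eta) / 4]

        # Exact symbolic integration of N_i * N_j over [-1, 1] x [-1, 1].
        M = sp.Matrix(4, 4, lambda i, j: sp.integrate(
            N[i] * N[j], (xi, -1, 1), (eta, -1, 1)))
        sp.pprint(M)   # the familiar pattern [[4, 2, 1, 2], ...] / 9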

  1. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    None

    2016-07-12

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87), techniques for improving the accuracy of summation, multiplication and data interchange, compiler options for gcc and icc affecting floating-point operations, and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
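
    One of the summation techniques alluded to in the abstract is compensated (Kahan) summation, which carries the rounding error of each addition in a separate variable. A short, self-contained Python illustration (the data set is contrived to make the effect visible):

        def kahan_sum(values):
            # 'c' holds the low-order bits that the running sum discards.
            total, c = 0.0, 0.0
            for v in values:
                y = v - c
                t = total + y
                c = (t - total) - y
                total = t
            return total

        data = [1e16] + [1.0] * 1000
        print(sum(data) - 1e16)        # 0.0: each 1.0 is rounded away
        print(kahan_sum(data) - 1e16)  # 1000.0: compensation recovers them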

  2. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    SciTech Connect

    2010-05-11

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87), techniques for improving the accuracy of summation, multiplication and data interchange, compiler options for gcc and icc affecting floating-point operations, and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.

  3. Computation of vibration mode elastic-rigid and effective weight coefficients from finite-element computer program output

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1991-01-01

    Post-processing algorithms are given to compute the vibratory elastic-rigid coupling matrices and the modal contributions to the rigid-body mass matrices and to the effective modal inertias and masses. Recomputation of the elastic-rigid coupling matrices for a change in origin is also described. A computational example is included. The algorithms can all be executed by using standard finite-element program eigenvalue analysis output with no changes to existing code or source programs.
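
    The effective-mass part of such post-processing reduces to a few matrix products: with mass-normalized modes phi, mass matrix M and a rigid-body influence vector r, the participation factors are L = phi' M r and the effective masses are the squares of L, which sum to the total mass. A minimal Python sketch on an invented 3-DOF system (not the article's example):

        import numpy as np

        M = np.diag([2.0, 1.0, 1.0])                       # lumped masses
        K = np.array([[400., -200., 0.],
                      [-200., 400., -200.],
                      [0., -200., 200.]])                  # stiffness

        # Generalized eigenproblem via the symmetric M^(-1/2) transform.
        Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
        w2, psi = np.linalg.eigh(Mi @ K @ Mi)
        phi = Mi @ psi                                     # phi' M phi = I

        r = np.ones(3)                                     # rigid translation
        L = phi.T @ M @ r                                  # participation factors
        m_eff = L**2                                       # effective modal masses
        print(m_eff, m_eff.sum(), r @ M @ r)               # sum equals total mass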

  4. Modeling of Rolling Element Bearing Mechanics: Computer Program Updates

    NASA Technical Reports Server (NTRS)

    Ryan, S. G.

    1997-01-01

    The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.

  5. Aspects of the history of 66095 based on trace elements in clasts and whole rock

    SciTech Connect

    Jovanovic, S.; Reed, G.W. Jr.

    1981-01-01

    Large fractions of Cl and Br associated with separated anorthositic and basaltic clasts and matrix from rusty rock 66095 are soluble in H/sub 2/O. Up to two orders of magnitude variation in concentrations of these elements in the breccia components and varying H/sub 2/O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H/sub 2/O to acid soluble Br, i.e. surface deposits vs possibly phosphate related Br, suggests no appreciable alteration in the original distributions of this element. Weak acid leaching dissolved approx. 50% or more of the phosphorus and of the remaining Cl from most of the breccia components. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P/sub 2/O/sub 5/ ratio found in 66095 whole rock and in a number of other Apollo 16 samples. No dependence on degree of brecciation is indicated. The clasts are typical of Apollo 16 rocks. Matrix leaching results and element concentrations suggest that apatite-whitlockite is a component of KREEP.

  6. Aspects of the history of 66095 based on trace elements in clasts and whole rock

    SciTech Connect

    Jovanovic, S.; Reed, G.W. Jr.

    1981-01-01

    Halogens, P, U and Na are reported in anorthositic and basaltic clasts and matrix from rusty rock 66095. Large fractions of Cl and Br associated with the separated phases from 66095 are soluble in H/sub 2/O. Up to two orders of magnitude variation in concentrations of these elements in the breccia components and varying H/sub 2/O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H/sub 2/O- to 0.1 M HNO/sub 3/-soluble Br in the various components suggests no appreciable alteration in the original distributions of this element in the breccia forming processes. Up to 50% or more of the phosphorus and of the non-H/sub 2/O-soluble Cl was dissolved from most of the breccia components by 0.1 M HNO/sub 3/. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P/sub 2/O/sub 5/ ratio found in 66095 whole rock and in a number of other Apollo 16 samples. Evidence that phosphates are the major P-phases in the breccia is based on the 0.1 M acid solubility of Cl and P in the matrix sample and on elemental concentrations which are consistent with those of KREEP.

  7. Multibody system dynamics for bio-inspired locomotion: from geometric structures to computational aspects.

    PubMed

    Boyer, Frédéric; Porez, Mathieu

    2015-04-01

    This article presents a set of generic tools for multibody system dynamics devoted to the study of bio-inspired locomotion in robotics. First, archetypal examples from the field of bio-inspired robot locomotion are presented to prepare the ground for further discussion. The general problem of locomotion is then stated. In considering this problem, we progressively draw a unified geometric picture of locomotion dynamics. For that purpose, we start from the model of discrete mobile multibody systems (MMSs) that we progressively extend to the case of continuous and finally soft systems. Beyond these theoretical aspects, we address the practical problem of the efficient computation of these models by proposing a Newton-Euler-based approach to efficient locomotion dynamics with a few illustrations of creeping, swimming, and flying. PMID:25811531

  8. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  9. Computational design of low aspect ratio wing-winglet configurations for transonic wind-tunnel tests

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Brown, Christopher K.

    1989-01-01

    Computational designs were performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three configurations was selected to be constructed as a wind tunnel model for testing in the NASA LaRC 8-foot transonic pressure tunnel. A design point of M = 0.8 and C(sub L) approximately equal to 0.3 was selected, for wings of aspect ratio 2.2 and leading edge sweep angles of 45 deg and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. The design process and the predicted transonic performance are summarized for each configuration. In addition, a companion low-speed design study was conducted, using one of the transonic design wing-winglet planforms but with different camber and thickness distributions. A low-speed wind tunnel model was constructed to match this low-speed design geometry, and force coefficient data were obtained for the model at speeds of 100 to 150 ft/sec. Measured drag coefficient reductions were of the same order of magnitude as those predicted by numerical subsonic performance predictions.

  10. Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is a computer technique for structural stress analysis developed in engineering mechanics. FEA has been developed to investigate the structural behavior of human bones over the past 40 years. As faster computers have become available, better FEA using 3-dimensional computed tomography (CT) has been developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA to evaluate accuracy and reliability in human bones, and clinical application studies to assess fracture risk and the effects of osteoporosis medication are overviewed. PMID:26309819

  11. Acceleration of computer-generated hologram by Greatly Reduced Array of Processor Element with Data Reduction

    NASA Astrophysics Data System (ADS)

    Sugiyama, Atsushi; Masuda, Nobuyuki; Oikawa, Minoru; Okada, Naohisa; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2014-11-01

    We have implemented a computer-generated hologram (CGH) calculation on Greatly Reduced Array of Processor Element with Data Reduction (GRAPE-DR) processors. The cost of CGH calculation is enormous, but CGH calculation is well suited to parallel computation. The GRAPE-DR is a multicore processor that has 512 processor elements. The GRAPE-DR supports a double-precision floating-point operation and can perform CGH calculation with high accuracy. The calculation speed of the GRAPE-DR system is seven times faster than that of a personal computer with an Intel Core i7-950 processor.
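
    The kernel being accelerated is an accumulation of spherical waves from every object point at every hologram pixel. A toy CPU sketch in Python/NumPy (the wavelength, pixel pitch and point cloud are invented; the GRAPE-DR code itself is not reproduced here):

        import numpy as np

        wavelength = 532e-9
        k = 2 * np.pi / wavelength
        pitch = 8e-6                          # hologram pixel pitch
        n = 256
        xs = (np.arange(n) - n / 2) * pitch
        X, Y = np.meshgrid(xs, xs)

        rng = np.random.default_rng(0)
        points = rng.uniform([-1e-3, -1e-3, 0.05], [1e-3, 1e-3, 0.10], (64, 3))

        field = np.zeros((n, n), dtype=complex)
        for px, py, pz in points:             # one spherical wave per object point
            r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
            field += np.exp(1j * k * r) / r

        hologram = (np.angle(field) > 0).astype(np.uint8)   # binary CGH
        print(hologram.shape, hologram.mean())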

  12. Molecular Computing And The Chemical Elements Of Logic

    NASA Astrophysics Data System (ADS)

    Carter, Forrest L.

    1986-02-01

    Future developments in molecular electronics not only offer the possibility of high density archival memories, 10^15 to 10^18 gates/cc, but also new routes to the fabrication of highly parallel processors (> 10^6) and hence to new computer architectures. A central theme of molecular electronics is that information can be stored as conformational changes in chemical moieties or functional groups. Further, these functional units are chosen or designed so that their structure facilitates the storage of information via reversible conformational changes, either in bond distances or in bond angles, or both. In exploring possible switching and information storage mechanisms at the molecular-size level, it has become apparent that there are many analogues or alternatives possible for any logical function which might be desired. It is even more exciting to realize that some structural chemical units or configurations offer completely new functional or logical capabilities. The example offered below is the molecular analogue of the CASE statement in PASCAL (proposed by an NRL summer student employee [7]). As suggested in the title, one of the purposes of this article is to enhance the appreciation of the universality of 'chemical' or 'molecular' systems in expressing logical functions. The literature on molecular electronic concepts is growing and some reviews are available [1-4]. Two Molecular Electronic Device (MED) workshops [5, 6] have been held in Washington, D.C. (1981 and 1983), and an International Symposium on Bioelectric and Molecular Electronic Devices [8] was held in Tokyo, 20-21 November 1985. Beyond the strong current interest in Japan [9], interest is also developing in England and the Soviet bloc [11].

  13. Mixing characteristics of injector elements in liquid rocket engines - A computational study

    NASA Technical Reports Server (NTRS)

    Lohr, Jonathan C.; Trinh, Huu P.

    1992-01-01

    A computational study has been performed to better understand the mixing characteristics of liquid rocket injector elements. Variations in injector geometry as well as differences in injector element inlet flow conditions are among the areas examined in the study. Most results involve the nonreactive mixing of gaseous fuel with gaseous oxidizer but preliminary results are included that involve the spray combustion of oxidizer droplets. The purpose of the study is to numerically predict flowfield behavior in individual injector elements to a high degree of accuracy and in doing so to determine how various injector element properties affect the flow.

  14. Computational local stiffness analysis of biological cell: High aspect ratio single wall carbon nanotube tip.

    PubMed

    TermehYousefi, Amin; Bagheri, Samira; Shahnazar, Sheida; Rahman, Md Habibur; Kadri, Nahrizul Adib

    2016-02-01

    Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to their robust mechanical properties, nanoscale diameter and also their ability to be functionalized with chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as an AFM tip in computational analysis of biological cells. The software used was ABAQUS 6.13 CAE/CEL, provided by Dassault Systèmes, which is a powerful finite element (FE) tool for performing the numerical analysis and visualizing the interactions between the proposed tip and the cell membrane. Finite element analysis was employed for each section, and the displacement of the nodes located in the contact area was monitored using an output database (ODB). A Mooney-Rivlin hyperelastic model of the cell allows the simulation to provide a new method for estimating the stiffness and spring constant of the cell. The stress-strain curve indicates the yield stress point, defined in terms of vertical stress and plane stress. The spring constant and the local stiffness of the cell were measured, as well as the force applied by the CNT-AFM tip on the contact area of the cell. This reliable integration of the CNT-AFM tip process provides a new class of high performance nanoprobes for single biological cell analysis. PMID:26652417

  15. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.

  16. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.

  17. On a 3-D singularity element for computation of combined mode stress intensities

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Kathiresan, K.

    1976-01-01

    A special three-dimensional singularity element is developed for the computation of combined modes 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three dimensional linear elastic fracture problems. The finite element method is based on a displacement-hybrid finite element model, based on a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square root singularity in strains and stresses, where the stress-intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved directly along with the unknown nodal displacements.

  18. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.

  19. A New Finite Element Approach for Prediction of Aerothermal Loads: Progress in Inviscid Flow Computations

    NASA Technical Reports Server (NTRS)

    Bey, K. S.; Thornton, E. A.; Dechaumphai, P.; Ramakrishnan, R.

    1985-01-01

    Recent progress in the development of finite element methodology for the prediction of aerothermal loads is described. Two-dimensional, inviscid computations are presented, but emphasis is placed on the development of an approach extendable to three-dimensional viscous flows. Research progress is described for: (1) utilization of a commercially available program to construct flow solution domains and display computational results, (2) development of an explicit Taylor-Galerkin solution algorithm, (3) closed form evaluation of finite element matrices, (4) vector computer programming strategies, and (5) validation of solutions. Two test problems of interest to NASA Langley aerothermal research are studied. Comparisons of finite element solutions for Mach 6 flow with other solution methods and experimental data validate fundamental capabilities of the approach for analyzing high speed inviscid compressible flows.

  20. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.

  1. A different aspect to use of some soft computing methods for landslide susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Akgün, Aykut

    2014-05-01

    In the landslide literature, several applications of soft computing methods such as artificial neural networks (ANN), fuzzy inference systems, and decision trees for landslide susceptibility mapping can be found. In many of these studies, the effectiveness and validation of the models used are also discussed. To carry out the analyses, more than one software package, for example a statistical package and a geographical information systems (GIS) package, is generally used together. In this study, four different soft computing techniques were applied to produce landslide susceptibility maps within a single GIS package. For this purpose, Multi Layer Perceptron (MLP) back propagation neural network, Fuzzy Adaptive Resonance Theory (ARTMAP) neural network, Self-organizing Map (SOM) and Classification Tree Analysis (CTA) approaches were applied to the study area. The study area was selected from a part of Trabzon (North Turkey) city, which is one of the most landslide-prone areas in Turkey. Initially, five landslide conditioning parameters, namely lithology, slope gradient, slope aspect, stream power index (SPI), and topographical wetness index (TWI), were produced for the study area in GIS. Then, these parameters were analysed by the MLP, Fuzzy ARTMAP, SOM and CTA soft computing classifiers of the IDRISI Taiga GIS and remote sensing software. To accomplish the analyses, two main input groups are needed: conditioning parameters and training areas. For the training areas, the landslide inventory map, which was obtained by both field studies and topographical analyses, was first compared with the lithological unit classes. With the help of this comparison, frequency ratio (FR) values of landslide occurrence in the study area were determined. Using the FR values, five landslide susceptibility classes were differentiated from the lowest to the highest FR values. After this differentiation, the training areas representing the landslide susceptibility classes were determined by using FR
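
    The frequency ratio statistic used above is straightforward to compute from two co-registered rasters. A minimal sketch (NumPy; the array names are assumptions for illustration):

        import numpy as np

        def frequency_ratio(classes, landslide_mask):
            # FR of a class = (share of all landslide pixels in the
            # class) / (share of all pixels in the class); FR > 1
            # marks classes relatively prone to landsliding.
            fr = {}
            total = classes.size
            total_ls = landslide_mask.sum()
            for c in np.unique(classes):
                in_class = classes == c
                ls_share = landslide_mask[in_class].sum() / total_ls
                area_share = in_class.sum() / total
                fr[c] = ls_share / area_share
            return fr

        # classes: integer raster of, e.g., lithological units;
        # landslide_mask: boolean raster from the landslide inventory.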

  2. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme along with vector-unrolling techniques is used to enhance vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  3. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
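
    Of the integration schemes compared in this record, explicit subincrementing is the simplest to state. The sketch below uses a generic power-law viscoplastic rate equation invented for illustration (not a model from the study); each global time step is split into subincrements to keep the explicit update stable, while an implicit backward-Euler update with a local Newton iteration would be the usual alternative for stiff laws.

        def integrate_stress(sigma, eps_rate, dt, E=100.0e3, A=1.0e-4,
                             sig_ref=100.0, n=5.0, n_sub=10):
            # Forward-Euler subincrementing for
            #   sigma_dot = E * (eps_rate - A * (sigma / sig_ref)**n),
            # assuming sigma >= 0 for simplicity.
            h = dt / n_sub
            for _ in range(n_sub):
                sigma += h * E * (eps_rate - A * (sigma / sig_ref)**n)
            return sigma

        sigma = 0.0
        for _ in range(50):                 # 50 global steps of dt = 1
            sigma = integrate_stress(sigma, eps_rate=1.0e-4, dt=1.0)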

  4. Determination of an Initial Mesh Density for Finite Element Computations via Data Mining

    SciTech Connect

    Kanapady, R; Bathina, S K; Tamma, K K; Kamath, C; Kumar, V

    2001-07-23

    Numerical analysis software packages which employ a coarse first mesh or an inadequate initial mesh need to undergo cumbersome and time-consuming mesh refinement studies to obtain solutions with acceptable accuracy. Hence, it is critical for numerical methods such as finite element analysis to be able to determine a good initial mesh density for the subsequent finite element computations or as an input to a subsequent adaptive mesh generator. This paper explores the use of data mining techniques for obtaining an initial approximate finite element mesh density that avoids significant trial and error at the start of finite element computations. As an illustration of proof of concept, a square plate which is simply supported at its edges and is subjected to a concentrated load is employed for the test case. Although simplistic, the present study provides insight into addressing the above considerations.
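
    In the same spirit, any off-the-shelf regressor can map problem descriptors to a recommended starting density once a database of past refinement studies exists. A hypothetical sketch (scikit-learn; the descriptors, data, and target values are invented for illustration):

        from sklearn.tree import DecisionTreeRegressor

        # Descriptors: plate aspect ratio, load magnitude, support-type
        # code; target: the mesh density a converged refinement study
        # finally required (elements per edge).
        X = [[1.0, 10.0, 0], [1.0, 50.0, 0], [2.0, 10.0, 1], [2.0, 50.0, 1]]
        y = [8, 16, 12, 24]

        model = DecisionTreeRegressor(max_depth=3).fit(X, y)
        initial_density = model.predict([[1.5, 30.0, 0]])[0]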

  5. Numerical Aspects of Nonhydrostatic Implementations Applied to a Parallel Finite Element Tsunami Model

    NASA Astrophysics Data System (ADS)

    Fuchs, A.; Androsov, A.; Harig, S.; Hiller, W.; Rakowsky, N.

    2012-04-01

    Given the danger of devastating tsunamis and the unpredictability of such events, tsunami modelling as part of warning systems remains a topical subject. The tsunami group of the Alfred Wegener Institute developed the simulation tool TsunAWI as a contribution to the Early Warning System in Indonesia. Although the precomputed scenarios for this purpose yield satisfactory results, the study of further improvements continues. While TsunAWI is governed by the Shallow Water Equations, an extension of the model is based on a nonhydrostatic approach. At the arrival of a tsunami wave in coastal regions with rough bathymetry, the term containing the nonhydrostatic part of pressure, which is neglected in the original hydrostatic model, gains in importance. By taking this term into account, a better approximation of the wave is expected. Differences between hydrostatic and nonhydrostatic model results are contrasted in the standard benchmark problem of a solitary wave runup on a plane beach. The observation data provided by Titov and Synolakis (1995) serve as reference. The nonhydrostatic approach implies a set of equations that are similar to the Shallow Water Equations, so the variation can be implemented on top of the existing code. However, these additional routines introduce a number of issues that must be dealt with. So far the computations of the model were purely explicit. In the nonhydrostatic version, the determination of an additional unknown and the solution of a large sparse system of linear equations are necessary. The latter constitutes the lion's share of computing time and memory requirements. Since the corresponding matrix is only symmetric in structure and not in values, an iterative Krylov subspace method is used, in particular the restarted Generalized Minimal Residual algorithm GMRES(m). With regard to optimization, we present a comparison of several combinations of sequential and parallel preconditioning techniques with respect to number of iterations and setup
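
    The solver setup described here maps directly onto standard sparse libraries. A minimal sketch of restarted GMRES with an incomplete-LU preconditioner (SciPy; the matrix is a stand-in for illustration, not the TsunAWI system):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 1000                  # stand-in for the nonhydrostatic system
        A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format='csc')
        b = np.ones(n)

        # ILU factorization wrapped as a LinearOperator preconditioner.
        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
        M = spla.LinearOperator((n, n), matvec=ilu.solve)

        x, info = spla.gmres(A, b, M=M, restart=30)   # GMRES(m) with m = 30
        assert info == 0          # 0 signals convergence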

  6. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    NASA Technical Reports Server (NTRS)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how to best exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  7. Computing forces on interface elements exerted by dislocations in an elastically anisotropic crystalline material

    NASA Astrophysics Data System (ADS)

    Liu, B.; Arsenlis, A.; Aubry, S.

    2016-06-01

    Driven by the growing interest in numerical simulations of dislocation–interface interactions in general crystalline materials with elastic anisotropy, we develop algorithms for the integration of interface tractions needed to couple dislocation dynamics with a finite element or boundary element solver. The dislocation stress fields in elastically anisotropic media are made analytically accessible through the spherical harmonics expansion of the derivative of Green’s function, and analytical expressions for the forces on interface elements are derived by analytically integrating the spherical harmonics series recursively. Compared with numerical integration by Gaussian quadrature, the newly developed analytical algorithm for interface traction integration is highly beneficial in terms of both computation precision and speed.
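
    As a baseline for the analytical integration described above, interface tractions are often integrated numerically. The sketch below (NumPy; the stress-field callable is a placeholder for, e.g., a dislocation stress evaluation) applies the classical degree-2 exact three-point Gauss rule at the edge midpoints of a flat triangular element:

        import numpy as np

        def element_force(sigma, v0, v1, v2):
            # Integrate the traction t = sigma(x) @ n over a flat
            # triangle with vertices v0, v1, v2; sigma(x) returns the
            # 3x3 stress tensor at point x.
            cross = np.cross(v1 - v0, v2 - v0)
            area = 0.5 * np.linalg.norm(cross)
            n = cross / np.linalg.norm(cross)
            mids = [(v0 + v1) / 2.0, (v1 + v2) / 2.0, (v0 + v2) / 2.0]
            return area / 3.0 * sum(sigma(p) @ n for p in mids)

        # Example with a uniform (hence exactly integrable) stress field:
        S = 1.0e6 * np.eye(3)
        F = element_force(lambda p: S, np.zeros(3),
                          np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))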

  8. Computational aspects of endoscopic (trans-rectal) near-infrared optical tomography: initial investigations

    NASA Astrophysics Data System (ADS)

    Musgrove, Cameron; Bunting, Charles F.; Dehghani, Hamid; Pogue, Brian W.; Piao, Daqing

    2007-02-01

    Endoscopic near-infrared (NIR) optical tomography is a novel approach that allows the blood-based high intrinsic optical contrast to be imaged for the detection of cancer in internal organs. In endoscopic NIR tomography, the imaging array is arranged within the interior of the medium as opposed to the exterior as seen in conventional NIR tomography approaches. The source illuminates outward from the circular NIR probe, and the detector collects the diffused light from the medium surrounding the NIR probe. This new imaging geometry may involve forward and inverse approaches that are significantly different from those used in conventional NIR tomography. The implementation of a hollow-centered forward mesh within the context of conventional NIR tomography reconstruction has already led to the first demonstration of endoscopic NIR optical tomography. This paper presents some fundamental computational aspects regarding the performance and sensitivity of this endoscopic NIR tomography configuration. The NIRFAST modeling and image reconstruction package developed for conventional circular NIR geometry is used for endoscopic NIR tomography, and initial quantitative analysis has been conducted to investigate the "effective" imaging depth, required mesh resolution, and limit in contrast resolution, among other parameters. This study will define the performance expected and may provide insights into hardware requirements needed for revision of NIRFAST for the endoscopic NIR tomography geometry.

  9. Self-Consistent Large-Scale Magnetosphere-Ionosphere Coupling: Computational Aspects and Experiments

    NASA Technical Reports Server (NTRS)

    Newman, Timothy S.

    2003-01-01

    Both external and internal phenomena impact the terrestrial magnetosphere. For example, the solar wind and particle precipitation affect the distribution of hot plasma in the magnetosphere. Numerous models exist to describe different aspects of magnetosphere characteristics. For example, Tsyganenko has developed a series of models (e.g., [TSYG89]) that describe the magnetic field, and Stern [STER75] and Volland [VOLL73] have developed analytical models that describe the convection electric field. Over the past several years, NASA colleague Khazanov, working with Fok and others, has developed a large-scale coupled model that tracks particle flow to determine hot ion and electron phase space densities in the magnetosphere. This model utilizes external data such as solar wind densities and velocities and geomagnetic indices (e.g., Kp) to drive computational processes that evaluate magnetic field, electric field, and plasma sheet models at any time point. These models are coupled such that energetic ion and electron fluxes are produced, with those fluxes capable of interacting with the electric field model. A diagrammatic representation of the coupled model is shown.

  10. A FORTRAN computer code for calculating flows in multiple-blade-element cascades

    NASA Technical Reports Server (NTRS)

    Mcfarland, E. R.

    1985-01-01

    A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.

  11. Learning the Lexical Aspects of a Second Language at Different Proficiencies: A Neural Computational Study

    ERIC Educational Resources Information Center

    Cuppini, Cristiano; Magosso, Elisa; Ursino, Mauro

    2013-01-01

    We present an original model designed to study how a second language (L2) is acquired in bilinguals at different proficiencies starting from an existing L1. The model assumes that the conceptual and lexical aspects of languages are stored separately: conceptual aspects in distinct topologically organized Feature Areas, and lexical aspects in a…

  12. A Computational and Experimental Study of Nonlinear Aspects of Induced Drag

    NASA Technical Reports Server (NTRS)

    Smith, Stephen C.

    1996-01-01

    Despite the 80-year history of classical wing theory, considerable research has recently been directed toward planform and wake effects on induced drag. Nonlinear interactions between the trailing wake and the wing offer the possibility of reducing drag. The nonlinear effect of compressibility on induced drag characteristics may also influence wing design. This thesis deals with the prediction of these nonlinear aspects of induced drag and ways to exploit them. A potential benefit of only a few percent of the drag represents a large fuel savings for the world's commercial transport fleet. Computational methods must be applied carefully to obtain accurate induced drag predictions. Trefftz-plane drag integration is far more reliable than surface pressure integration, but is very sensitive to the accuracy of the force-free wake model. The practical use of Trefftz-plane drag integration was extended to transonic flow with the Tranair full-potential code. The induced drag characteristics of a typical transport wing were studied with Tranair, a full-potential method, and A502, a high-order linear panel method, to investigate changes in lift distribution and span efficiency due to compressibility. Modeling the force-free wake is a nonlinear problem, even when the flow governing equation is linear. A novel method was developed for computing the force-free wake shape. This hybrid wake-relaxation scheme couples the well-behaved nature of the discrete vortex wake with viscous-core modeling and the high-accuracy velocity prediction of the high-order panel method. The hybrid scheme produced converged wake shapes that allowed accurate Trefftz-plane integration. An unusual split-tip wing concept was studied for exploiting nonlinear wake interaction to reduce induced drag. This design exhibits significant nonlinear interactions between the wing and wake that produced a 12% reduction in induced drag compared to an equivalent elliptical wing at a lift coefficient of 0.7. The

  13. Interactive computer graphic surface modeling of three-dimensional solid domains for boundary element analysis

    NASA Technical Reports Server (NTRS)

    Perucchio, R.; Ingraffea, A. R.

    1984-01-01

    The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.

  14. Learning by statistical cooperation of self-interested neuron-like computing elements.

    PubMed

    Barto, A G

    1985-01-01

    Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.
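
    A minimal sketch of such a self-interested adaptive element, in the spirit of the associative reward-penalty units discussed here (the update rule and constants are illustrative, not Barto's exact formulation):

        import numpy as np

        rng = np.random.default_rng(0)

        class RewardPenaltyElement:
            def __init__(self, n_inputs, rho=0.1, lam=0.01):
                self.w = np.zeros(n_inputs)
                self.rho, self.lam = rho, lam   # reward / penalty rates

            def act(self, x):
                # Stochastic binary action; p = P(y = +1).
                self.x = x
                self.p = 1.0 / (1.0 + np.exp(-self.w @ x))
                self.y = 1.0 if rng.random() < self.p else -1.0
                return self.y

            def learn(self, reward):
                # Move the expected action E[y] = 2p - 1 toward the
                # emitted action on reward, away from it on penalty.
                target = self.y if reward > 0 else -self.y
                rate = self.rho if reward > 0 else self.lam
                self.w += rate * (target - (2.0 * self.p - 1.0)) * self.x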

  15. Automatic data generation scheme for finite-element method /FEDGE/ - Computer program

    NASA Technical Reports Server (NTRS)

    Akyuz, F.

    1970-01-01

    Algorithm provides for automatic input data preparation for the analysis of continuous domains in the fields of structural analysis, heat transfer, and fluid mechanics. The computer program utilizes the natural coordinate systems concept and the finite element method for data generation.

  16. COYOTE: a finite-element computer program for nonlinear heat-conduction problems

    SciTech Connect

    Gartling, D.K.

    1982-10-01

    COYOTE is a finite element computer program designed for the solution of two-dimensional, nonlinear heat conduction problems. The theoretical and mathematical basis used to develop the code is described. Program capabilities and complete user instructions are presented. Several example problems are described in detail to demonstrate the use of the program.

  17. A computer simulation of grain orientation and aspect ratio that promotes the reflection of a pressure wave by elastic rotational stress

    NASA Astrophysics Data System (ADS)

    Kennefick, C. M.; Patillo, C. E.; Kupoluyi, T.; Gomes, C. A.

    2011-02-01

    Optimal orientation angles and aspect ratios of a grain are presented for the attenuation of a longitudinal pressure wave by elastic stresses that arise from the rotation of a grain. A computer program in C++ allows the grain to be a two-dimensional ellipse of several orientations with respect to the incoming load. The program also varies the aspect ratio of the grain. The induced elastic stresses from the rotation of the grain are calculated with complex variable methods that do not require meshes and elements. Low aspect ratios of 5/3, 10/7 and 5/4 were particularly effective in halting the stress from the pressure wave when the major axis of the grain was tilted between 15° and 45° and again above 70° with respect to the line of the incoming load. Attenuation was found to be more sensitive to grain orientation than to aspect ratio. The conclusion is supported by numerous switches in the extent of wave blockage over small angular variations in the orientation of the grain.

  18. Finite element simulation of the mechanical impact of computer work on the carpal tunnel syndrome.

    PubMed

    Mouzakis, Dionysios E; Rachiotis, George; Zaoutsos, Stefanos; Eleftheriou, Andreas; Malizos, Konstantinos N

    2014-09-22

    Carpal tunnel syndrome (CTS) is a clinical disorder resulting from compression of the median nerve. The available evidence regarding the association between computer use and CTS is controversial. There is some evidence that computer mouse or keyboard work, or both, are associated with the development of CTS. Despite the availability of pressure measurements in the carpal tunnel during computer work (exposure to keyboard or mouse) there are no available data to support a direct effect of the increased intracarpal canal pressure on the median nerve. This study presents an attempt to simulate the direct effects of computer work on the whole carpal area section using finite element analysis. A finite element mesh was produced from computerized tomography scans of the carpal area, involving all tissues present in the carpal tunnel. Two loading scenarios were applied to these models based on biomechanical data measured during computer work. It was found that mouse work can produce large deformation fields in the median nerve region. Also, the high stressing effect of the carpal ligament was verified. Keyboard work produced considerable and heterogeneous elongations along the longitudinal axis of the median nerve. Our study provides evidence that increased intracarpal canal pressures caused by awkward wrist postures imposed during computer work were associated directly with deformation of the median nerve. Despite the limitations of the present study, the findings could be considered a contribution to the understanding of the development of CTS due to exposure to computer work.

  19. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    NASA Astrophysics Data System (ADS)

    Aristovich, K. Y.; Khan, S. H.

    2010-07-01

    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data, and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on Magnetoencephalography (MEG). The forward problem involves modelling and computation of magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI) data. The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametric Design (CAD) format. This allows the model to be used in a wide range of analysis methods, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte Carlo simulations, etc. The generic model-building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  20. Computational Modeling For The Transitional Flow Over A Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Liou, William W.; Liu, Feng-Jun; Rumsey, Chris L. (Technical Monitor)

    2000-01-01

    The transitional flow over a multi-element airfoil in a landing configuration is computed using a two-equation transition model. The transition model is predictive in the sense that the transition onset is a result of the calculation and no prior knowledge of the transition location is required. The computations were performed using the INS2D Navier-Stokes code. Overset grids are used for the three-element airfoil. The airfoil operating conditions are varied over a range of angles of attack and for two different Reynolds numbers of 5 million and 9 million. The computed results are compared with experimental data for the surface pressure, skin friction, transition onset location, and velocity magnitude. In general, the comparison shows good agreement with the experimental data.

  1. Real-Time Nonlinear Finite Element Computations on GPU - Application to Neurosurgical Simulation

    PubMed Central

    Joldes, Grand Roman; Wittek, Adam; Miller, Karol

    2010-01-01

    Application of biomechanical modeling techniques in the area of medical image analysis and surgical simulation implies two conflicting requirements: accurate results and high solution speeds. Accurate results can be obtained only by using appropriate models and solution algorithms. In our previous papers we have presented algorithms and solution methods for performing accurate nonlinear finite element analysis of brain shift (which includes mixed mesh, different non-linear material models, finite deformations and brain-skull contacts) in less than a minute on a personal computer for models having up to 50,000 degrees of freedom. In this paper we present an implementation of our algorithms on a Graphics Processing Unit (GPU) using the NVIDIA Compute Unified Device Architecture (CUDA), which leads to a more than 20-fold increase in computation speed. This makes possible the use of meshes with more elements, which better represent the geometry, are easier to generate, and provide more accurate results. PMID:21179562

  2. STARS: An integrated general-purpose finite element structural, aeroelastic, and aeroservoelastic analysis computer program

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.

    1991-01-01

    The details of an integrated general-purpose finite element structural analysis computer program which is also capable of solving complex multidisciplinary problems is presented. Thus, the SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines are also available pertaining to a number of terminals.

  3. Experimental and Computational Investigation of Lift-Enhancing Tabs on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil has been conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. An NACA 63(2)-215 ModB airfoil with a 30% chord Fowler flap was tested in the NASA Ames 7- by 10-Foot Wind Tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. A combination of tabs located at the main element and flap trailing edges increased the airfoil lift coefficient by 11% relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of 0 deg, and the maximum lift coefficient was increased by 3%. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predicted all of the trends observed in the experimental data quite well. In addition, a simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the airfoil or flap trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.

  4. Finite Element Simulation Code for Computing Thermal Radiation from a Plasma

    NASA Astrophysics Data System (ADS)

    Nguyen, C. N.; Rappaport, H. L.

    2004-11-01

    A finite element code, "THERMRAD," for computing thermal radiation from a plasma is under development. Radiation from plasma test particles is computed in cylindrical geometry. Although the plasma equilibrium is assumed axisymmetric, individual test-particle excitation produces a non-axisymmetric electromagnetic response. Specially designed Whitney-class basis functions are to be used to allow the problem to be solved on a two-dimensional grid. The basis functions enforce both a vanishing of the divergence of the electric field within grid elements where the complex index of refraction is assumed constant, and continuity of the tangential electric field across grid elements, while allowing the normal component of the electric field to be discontinuous. An appropriate variational principle, which incorporates the Sommerfeld radiation condition on the simulation boundary, as well as its discretization by the Rayleigh-Ritz technique, is given. 1. "Finite Element Method for Electromagnetics Problems," Volakis et al., Wiley, 1998.

  5. STARS: A general-purpose finite element computer program for analysis of engineering structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1984-01-01

    STARS (Structural Analysis Routines) is primarily an interactive, graphics-oriented, finite-element computer program for analyzing the static, stability, free vibration, and dynamic responses of damped and undamped structures, including rotating systems. The element library consists of one-dimensional (1-D) line elements, two-dimensional (2-D) triangular and quadrilateral shell elements, and three-dimensional (3-D) tetrahedral and hexahedral solid elements. These elements enable the solution of structural problems that include truss, beam, space frame, plane, plate, shell, and solid structures, or any combination thereof. Zero, finite, and interdependent deflection boundary conditions can be implemented by the program. The associated dynamic response analysis capability provides for initial deformation and velocity inputs, whereas the transient excitation may be either forces or accelerations. An effective in-core or out-of-core solution strategy is automatically employed by the program, depending on the size of the problem. Data input may be at random within a data set, and the program offers certain automatic data-generation features. Input data are formatted as an optimal combination of free and fixed formats. Interactive graphics capabilities enable convenient display of nodal deformations, mode shapes, and element stresses.

  6. Applications of Parallel Computation in Micro-Mechanics and Finite Element Method

    NASA Technical Reports Server (NTRS)

    Tan, Hui-Qian

    1996-01-01

    This project discusses the application of parallel computation to material analyses. Briefly speaking, we analyze a material by element computations. We call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have identical structure. The detailed structure will be given later in this paper. The problem is clearly "well-structured", so a SIMD machine is a natural choice. In this paper we try to look into the potential of SIMD machines in dealing with finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar will be discussed. A brief review of the parallel programming language MPL is also given in that section. In section 3, some general parallel algorithms which might be useful to the project will be proposed. And, combining with the algorithms, some features of MPL will be discussed in more detail. In section 4, the computational structure of the cell/subcell model will be given. The idea of designing the parallel algorithm for the model will be demonstrated. Finally, in section 5, a summary will be given.

  7. Design of computer-generated beam-shaping holograms by iterative finite-element mesh adaption.

    PubMed

    Dresel, T; Beyerlein, M; Schwider, J

    1996-12-10

    Computer-generated phase-only holograms can be used for laser beam shaping, i.e., for focusing a given aperture with intensity and phase distributions into a pregiven intensity pattern in their focal planes. A numerical approach based on iterative finite-element mesh adaption permits the design of appropriate phase functions for the task of focusing into two-dimensional reconstruction patterns. Both the hologram aperture and the reconstruction pattern are covered by mesh mappings. An iterative procedure delivers meshes with intensities equally distributed over the constituting elements. This design algorithm adds new elementary focuser functions to what we call object-oriented hologram design. Some design examples are discussed.

  8. COYOTE II - a finite element computer program for nonlinear heat conduction problems. Part I - theoretical background

    SciTech Connect

    Gartling, D.K.; Hogan, R.E.

    1994-10-01

    The theoretical and numerical background for the finite element computer program, COYOTE II, is presented in detail. COYOTE II is designed for the multi-dimensional analysis of nonlinear heat conduction problems and other types of diffusion problems. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in COYOTE II are also outlined. Instructions for use of the code are documented in SAND94-1179; examples of problems analyzed with the code are provided in SAND94-1180.

  9. Level set discrete element method for three-dimensional computations with triaxial case study

    NASA Astrophysics Data System (ADS)

    Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.

    2016-06-01

    In this paper, we outline the level set discrete element method (LS-DEM) which is a discrete element method variant able to simulate systems of particles with arbitrary shape using level set functions as a geometric basis. This unique formulation allows seamless interfacing with level set-based characterization methods as well as computational ease in contact calculations. We then apply LS-DEM to simulate two virtual triaxial specimens generated from XRCT images of experiments and demonstrate LS-DEM's ability to quantitatively capture and predict stress-strain and volume-strain behavior observed in the experiments.
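
    The level-set contact check at the heart of LS-DEM is compact enough to sketch directly. In the following (NumPy; the penalty constant and the callables are illustrative assumptions), surface nodes of particle A are tested against particle B's level set, whose sign gives penetration and whose gradient gives the contact normal:

        import numpy as np

        def contact_force(nodes_A, phi_B, grad_phi_B, k=1.0e4):
            # phi_B(p) < 0 means node p of A penetrates B by depth
            # -phi_B(p); the outward normal of B is the normalized
            # gradient of phi_B. phi_B and grad_phi_B would typically
            # be interpolants of a discrete level set grid.
            F = np.zeros(3)
            for p in nodes_A:
                d = phi_B(p)
                if d < 0.0:
                    n = grad_phi_B(p)
                    F += -k * d * n / np.linalg.norm(n)  # pushes A out of B
            return F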

  10. MAPVAR - A Computer Program to Transfer Solution Data Between Finite Element Meshes

    SciTech Connect

    Wellman, G.W.

    1999-03-01

    MAPVAR, as was the case with its precursor programs, MERLIN and MERLIN II, is designed to transfer solution results from one finite element mesh to another. MAPVAR draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options.

  11. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.

  12. Report of a Workshop on the Pedagogical Aspects of Computational Thinking

    ERIC Educational Resources Information Center

    National Academies Press, 2011

    2011-01-01

    In 2008, the Computer and Information Science and Engineering Directorate of the National Science Foundation asked the National Research Council (NRC) to conduct two workshops to explore the nature of computational thinking and its cognitive and educational implications. The first workshop focused on the scope and nature of computational thinking…

  13. Program design by a multidisciplinary team. [for structural finite element analysis on STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Voigt, S.

    1975-01-01

    The use of software engineering aids in the design of a structural finite-element analysis computer program for the STAR-100 computer is described. Nested functional diagrams were used to aid communication among design team members, and a standardized specification format was adopted to describe modules designed by various members. This is a report of current work in which use of the functional diagrams provided continuity and helped resolve some of the problems arising in this long-running part-time project.

  14. Merlin 2 - A computer program to transfer solution data between finite element meshes

    SciTech Connect

    Gartling, D.K.

    1991-07-01

    The MERLIN 2 program is designed to transfer data between finite element meshes of arbitrary geometry. The program is structured to accurately interpolate previously computed solutions onto a given mesh and format the resulting data for immediate use in another analysis program. Data from either two-dimensional or three-dimensional meshes may be considered. The theoretical basis and computational algorithms used in the program are described and complete user instructions are presented. Several example problems are included to demonstrate program usage. 13 refs., 15 figs.

  15. Computing element evolution towards Exascale and its impact on legacy simulation codes

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In light of the current race towards Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions either by evolving or through the creation of new ones. From the previous elements, we elaborate why vectorization, multithreading, data locality awareness and hybrid programming will be the key to reaching the exascale, implying that it is time to start rewriting codes.

  16. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the structure is approximated by the sum of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.
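
    The symmetric/antisymmetric splitting that drives the concurrency is a one-line projection once a reflection operator for the mesh is available. A hypothetical sketch (NumPy; P is assumed to be the signed permutation matrix that mirrors the degrees of freedom across the plane of symmetry):

        import numpy as np

        def split_response(u, P):
            # u = u_sym + u_anti, with P @ u_sym = u_sym and
            # P @ u_anti = -u_anti; each part can be generated on a
            # fraction of the model by a separate processor.
            Pu = P @ u
            return 0.5 * (u + Pu), 0.5 * (u - Pu)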

  17. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  18. Computational micromechanical analysis of the representative volume element of bituminous composite materials

    NASA Astrophysics Data System (ADS)

    Ozer, Hasan; Ghauch, Ziad G.; Dhasmana, Heena; Al-Qadi, Imad L.

    2016-08-01

    Micromechanical computational modeling is used in this study to determine the smallest domain, or Representative Volume Element (RVE), that can be used to characterize the effective properties of composite materials such as Asphalt Concrete (AC). Computational Finite Element (FE) micromechanical modeling was coupled with digital image analysis of surface scans of AC specimens. Three mixtures with varying Nominal Maximum Aggregate Size (NMAS) of 4.75 mm, 12.5 mm, and 25 mm, were prepared for digital image analysis and computational micromechanical modeling. The effects of window size and phase modulus mismatch on the apparent viscoelastic response of the composite were numerically examined. A good agreement was observed in the RVE size predictions based on micromechanical computational modeling and image analysis. Micromechanical results indicated that a degradation in the matrix stiffness increases the corresponding RVE size. Statistical homogeneity was observed for window sizes equal to two to three times the NMAS. A model was presented for relating the degree of statistical homogeneity associated with each window size for materials with varying inclusion dimensions.

  19. Fiber pushout test: A three-dimensional finite element computational simulation

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Chamis, Christos C.

    1990-01-01

    A fiber pushthrough process was computationally simulated using the three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict the fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from the pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between fiber and matrix and mechanical interlocking that develops due to shrinkage of the composite because of phase change during processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on interfacial bond strength and to interpret interfacial bond quality.
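
    The headline quantity here reduces to a one-line formula: the average interfacial shear strength is the pushthrough load divided by the embedded fiber surface area. A sketch with illustrative numbers (not values from the study):

        import numpy as np

        def average_interface_shear(F_push, d_fiber, L_embed):
            # tau_avg = F / (pi * d * L); consistent SI units give Pa.
            return F_push / (np.pi * d_fiber * L_embed)

        tau = average_interface_shear(F_push=5.0,        # N
                                      d_fiber=140.0e-6,  # m
                                      L_embed=0.5e-3)    # m -> ~22.7 MPa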

  20. Computation of consistent boundary quantities in finite element thermal-fluid solutions

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.

    1982-01-01

    The consistent boundary quantity method for computing derived quantities from finite element nodal variable solutions is investigated. The method calculates consistent, continuous boundary surface quantities such as heat fluxes, flow velocities, and surface tractions from nodal variables such as temperatures, velocity potentials, and displacements. Consistent and lumped coefficient matrix solutions for such problems are compared. The consistent approach may produce more accurate boundary quantities, but spurious oscillations may be produced in the vicinity of discontinuities. The uncoupled computations of the lumped approach provide greater flexibility in dealing with discontinuities and provide increased computational efficiency. The consistent boundary quantity approach can be applied to solution boundaries other than those with Dirichlet boundary conditions, and provides more accurate results than the customary method of differentiation of interpolation polynomials.
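
    The contrast between consistent and lumped recovery described above can be seen on a one-dimensional boundary mesh. A minimal sketch (NumPy; uniform linear elements and invented data), where the consistent approach solves with the tridiagonal boundary mass matrix and the lumped approach simply divides by the row-summed diagonal:

        import numpy as np

        def boundary_flux(f, h):
            # f: consistent nodal loads on the boundary mesh; h: element size.
            n = f.size
            M = np.zeros((n, n))
            Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
            for e in range(n - 1):          # assemble the boundary mass matrix
                M[e:e + 2, e:e + 2] += Me
            q_consistent = np.linalg.solve(M, f)
            q_lumped = f / M.sum(axis=1)    # uncoupled, cheaper, smoother
            return q_consistent, q_lumped

        q_c, q_l = boundary_flux(np.array([0.05, 0.1, 0.1, 0.1, 0.05]), h=0.25)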

  1. Experimental and computational investigation of lift-enhancing tabs on a multi-element airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil was conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. A NACA 63(sub 2)-215 ModB airfoil with a 30 percent chord Fowler flap was tested in the NASA Ames 7- by 10-foot wind tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predict all of the trends in the experimental data quite well. When the flow over the flap upper surface is attached, tabs mounted at the main element trailing edge (cove tabs) produce very little change in lift. At high flap deflections, however, the flow over the flap is separated and cove tabs produce large increases in lift and corresponding reductions in drag by eliminating the separated flow. Cove tabs permit high flap deflection angles to be achieved and reduce the sensitivity of the airfoil lift to the size of the flap gap. Tabs attached to the flap trailing edge (flap tabs) are effective at increasing lift without significantly increasing drag. A combination of a cove tab and a flap tab increased the airfoil lift coefficient by 11 percent relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of zero degrees, and the maximum lift coefficient was increased by more than 3 percent. A simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling

  2. Computations of Disturbance Amplification Behind Isolated Roughness Elements and Comparison with Measurements

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Li, Fei; Bynum, Michael; Kegerise, Michael; King, Rudolph

    2015-01-01

    Computations are performed to study laminar-turbulent transition due to isolated roughness elements in boundary layers at Mach 3.5 and 5.95, with an emphasis on flow configurations for which experimental measurements from low disturbance wind tunnels are available. The Mach 3.5 case corresponds to a roughness element with right-triangle planform with hypotenuse that is inclined at 45 degrees with respect to the oncoming stream, presenting an obstacle with spanwise asymmetry. The Mach 5.95 case corresponds to a circular roughness element along the nozzle wall of the Purdue BAMQT wind tunnel facility. In both cases, the mean flow distortion due to the roughness element is characterized by long-lived streamwise streaks in the roughness wake, which can support instability modes that did not exist in the absence of the roughness element. The linear amplification characteristics of the wake flow are examined towards the eventual goal of developing linear growth correlations for the onset of transition.

  3. Computational Analysis of Enhanced Magnetic Bioseparation in Microfluidic Systems with Flow-Invasive Magnetic Elements

    PubMed Central

    Khashan, S. A.; Alazzam, A.; Furlani, E. P.

    2014-01-01

    A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer. PMID:24931437

  5. Methodological aspects of using IBM and Macintosh PCs for computational experiments in the physics practicum

    NASA Astrophysics Data System (ADS)

    Starodubtsev, V. A.; Malyutin, V. M.; Chernov, I. P.

    1996-07-01

    This article considers attempts to develop and use, in the teaching process, computer-laboratory work performed by students in terminal-based classes. We describe the methodological features of the LABPK1 and LABPK2 programs, which are intended for use on local networks using 386/286 IBM PC compatibles or Macintosh computers.

  6. Computation of dynamic stress intensity factors using the boundary element method based on Laplace transform and regularized boundary integral equations

    NASA Astrophysics Data System (ADS)

    Tanaka, Masataka; Nakamura, Masayuki; Aoki, Kazuhiko; Matsumoto, Toshiro

    1993-07-01

    This paper presents a computational method for dynamic stress intensity factors (DSIF) in two-dimensional problems. In order to obtain accurate numerical results for DSIF, the boundary element method based on the Laplace transform and regularized boundary integral equations is applied to the computation of transient elastodynamic responses. A computer program is newly developed for two-dimensional elastodynamics. Numerical computation of DSIF is carried out for a rectangular plate with a center crack under impact tension. Accuracy of the results is investigated from the viewpoint of computational conditions such as the number of sampling points of the inverse Laplace transform and the number of boundary elements.

  7. Analytical Calculation of the Lower Bound on Timing Resolution for PET Scintillation Detectors Comprising High-Aspect-Ratio Crystal Elements

    PubMed Central

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-01-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector's timing resolution improves, so do SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and to develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB for high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm3 LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound to within 6.5%. PMID:26083559

  8. MP Salsa: a finite element computer program for reacting flow problems. Part 1--theoretical development

    SciTech Connect

    Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.

    1996-05-01

    The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve multiple coupled Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method, and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  9. Computer simulation analysis of fracture dislocation of the proximal interphalangeal joint using the finite element method.

    PubMed

    Akagi, T; Hashizume, H; Inoue, H; Ogura, T; Nagayama, N

    1994-10-01

    Stress in a proximal interphalangeal (PIP) joint model was analyzed by the two-dimensional and three-dimensional finite element methods (FEM) to study the onset mechanisms of middle phalangeal base fracture. The structural shapes were obtained from sagittally sectioned specimens of the PIP joint for making the FEM models. In these models, four different material properties were assigned, corresponding to cortical bone, subchondral bone, cancellous bone and cartilage. Loading conditions were determined by estimating the amount and position of axial pressure applied to the middle phalanx. A general finite element program (MARC) was used for the computer simulation analysis. The results of the fracture experiments, compared with the clinical manifestation of the fractures, justify the applicability of the computer simulation models using FEM analysis. The stress distribution changed as the angle of the PIP joint changed. Concentrated stress was found on the volar side of the middle phalangeal base in the hyperextension position, and on the dorsal side in the flexion position. In the neutral position, stress was found on both sides. Axial stress on the middle phalanx causes three different types of fractures (volar, dorsal and both) depending upon the angle of the PIP joint. These results demonstrate that the type of PIP joint fracture dislocation depends on the angle of the joint at the time of injury. The finite element method is one of the most useful methods for analyzing the onset mechanism of fractures.

  10. Problem Solving and Computational Skill: Are They Shared or Distinct Aspects of Mathematical Cognition?

    PubMed

    Fuchs, Lynn S; Fuchs, Douglas; Hamlett, Carol L; Lambert, Warren; Stuebing, Karla; Fletcher, Jack M

    2008-02-01

    The purpose of this study was to explore patterns of difficulty in 2 domains of mathematical cognition: computation and problem solving. Third graders (n = 924; 47.3% male) were representatively sampled from 89 classrooms; assessed on computation and problem solving; classified as having difficulty with computation, problem solving, both domains, or neither domain; and measured on 9 cognitive dimensions. Difficulty occurred across domains with the same prevalence as difficulty with a single domain; specific difficulty was distributed similarly across domains. Multivariate profile analysis on cognitive dimensions and chi-square tests on demographics showed that specific computational difficulty was associated with strength in language and weaknesses in attentive behavior and processing speed; problem-solving difficulty was associated with deficient language as well as race and poverty. Implications for understanding mathematics competence and for the identification and treatment of mathematics difficulties are discussed.

  11. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 2

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    The Control/Structures Integration Program, a survey of available software for control of flexible structures, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software are discussed.

  12. Computation of canonical correlation and best predictable aspect of future for time series

    NASA Technical Reports Server (NTRS)

    Pourahmadi, Mohsen; Miamee, A. G.

    1989-01-01

    The canonical correlation between the (infinite) past and future of a stationary time series is shown to be the limit of the canonical correlation between the (infinite) past and (finite) future, and computation of the latter is reduced to a (generalized) eigenvalue problem involving (finite) matrices. This provides a convenient and essentially finite-dimensional algorithm for computing canonical correlations and components of a time series. An upper bound is conjectured for the largest canonical correlation.
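
    The reduction to a finite generalized eigenvalue problem is easy to sketch numerically. For an AR(1) series the largest past-future canonical correlation should equal the autoregressive coefficient, which makes a convenient check; the following minimal illustration (assumed parameters, not the authors' algorithm) estimates it from sample covariances of a finite past and a finite future:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    n, phi, p, q = 50000, 0.8, 5, 5   # series length, AR coefficient, past/future lengths
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()

    # rows hold (finite past of length p, finite future of length q)
    Z = np.array([x[t - p:t + q] for t in range(p, n - q)])
    C = np.cov(Z, rowvar=False)
    Cxx, Cxy = C[:p, :p], C[:p, p:]
    Cyx, Cyy = C[p:, :p], C[p:, p:]

    # generalized eigenvalue problem  Cxy Cyy^{-1} Cyx v = rho^2 Cxx v
    M = Cxy @ np.linalg.solve(Cyy, Cyx)
    rho2 = eigh((M + M.T) / 2, Cxx, eigvals_only=True)
    print(np.sqrt(np.clip(rho2, 0.0, None))[::-1])   # leading value ~ 0.8
    ```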

  13. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  14. Computing the Average Square: An Agent-Based Introduction to Aspects of Current Psychometric Practice

    ERIC Educational Resources Information Center

    Stroup, Walter M.; Hills, Thomas; Carmona, Guadalupe

    2011-01-01

    This paper summarizes an approach to helping future educators to engage with key issues related to the application of measurement-related statistics to learning and teaching, especially in the contexts of science, mathematics, technology and engineering (STEM) education. The approach we outline has two major elements. First, students are asked to…

  15. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  16. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, numerical results are given for the 2D driven cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Goertler-like vortices are observed for Re = 1,000.
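
    The matrix-free Jacobi-preconditioned conjugate gradient iteration referred to above is easily sketched; here a one-dimensional Laplacian stencil stands in for the symmetric, positive-definite LSFEM system (a minimal sketch, not the authors' code):

    ```python
    import numpy as np

    def jacobi_pcg(apply_A, diag_A, b, tol=1e-10, maxit=1000):
        """Preconditioned conjugate gradients without ever forming A."""
        x = np.zeros_like(b)
        r = b.copy()                 # residual of the zero initial guess
        z = r / diag_A               # Jacobi preconditioner solve
        p = z.copy()
        rz = r @ z
        for it in range(maxit):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = r / diag_A
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, it

    # stand-in SPD operator: 1D Laplacian stencil applied matrix-free
    n = 1000
    def apply_A(v):
        w = 2.0 * v
        w[:-1] -= v[1:]
        w[1:] -= v[:-1]
        return w

    x, its = jacobi_pcg(apply_A, np.full(n, 2.0), np.ones(n))
    print(its, np.linalg.norm(apply_A(x) - np.ones(n)))
    ```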

  17. Quantitative Computed Tomography Protocols Affect Material Mapping and Quantitative Computed Tomography-Based Finite-Element Analysis Predicted Stiffness.

    PubMed

    Giambini, Hugo; Dragomir-Daescu, Dan; Nassr, Ahmad; Yaszemski, Michael J; Zhao, Chunfeng

    2016-09-01

    Quantitative computed tomography-based finite-element analysis (QCT/FEA) has become increasingly popular in an attempt to understand and possibly reduce vertebral fracture risk. It is known that scanning acquisition settings affect Hounsfield units (HU) of the CT voxels. Material properties assignments in QCT/FEA, relating HU to Young's modulus, are performed by applying empirical equations. The purpose of this study was to evaluate the effect of QCT scanning protocols on predicted stiffness values from finite-element models. One fresh frozen cadaveric torso and a QCT calibration phantom were scanned six times varying voltage and current and reconstructed to obtain a total of 12 sets of images. Five vertebrae from the torso were experimentally tested to obtain stiffness values. QCT/FEA models of the five vertebrae were developed for the 12 image data resulting in a total of 60 models. Predicted stiffness was compared to the experimental values. The highest percent difference in stiffness was approximately 480% (80 kVp, 110 mAs, U70), while the lowest outcome was ∼1% (80 kVp, 110 mAs, U30). There was a clear distinction between reconstruction kernels in predicted outcomes, whereas voltage did not present a clear influence on results. The potential of QCT/FEA as an improvement to conventional fracture risk prediction tools is well established. However, it is important to establish research protocols that can lead to results that can be translated to the clinical setting. PMID:27428281

  18. Computer modeling of single-cell and multicell thermionic fuel elements

    SciTech Connect

    Dickinson, J.W.; Klein, A.C.

    1996-05-01

    Modeling efforts are undertaken to perform coupled thermal-hydraulic and thermionic analysis for both single-cell and multicell thermionic fuel elements (TFE). The analysis--and the resulting MCTFE computer code (multicell thermionic fuel element)--is a steady-state finite volume model specifically designed to analyze cylindrical TFEs. It employs an iterative successive overrelaxation solution technique to solve for the temperatures throughout the TFE and a coupled thermionic routine to determine the total TFE performance. The calculated results include temperature distributions in all regions of the TFE, axial interelectrode voltages and current densities, and total TFE electrical output parameters including power, current, and voltage. MCTFE-generated results are compared with experimental data from the single-cell Topaz-II-type TFE and with multicell data from the General Atomics 3H5 TFE to benchmark the accuracy of the code's methods.
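
    An iterative successive overrelaxation sweep of the kind used for the temperature field can be illustrated with a generic 2-D Laplace problem; this schematic stand-in (assumed grid and boundary values) is not the MCTFE thermal model:

    ```python
    import numpy as np

    def sor_laplace(T, omega=1.8, tol=1e-6, maxit=10000):
        """Successive overrelaxation sweeps for a 2-D Laplace problem;
        Dirichlet boundary values live on the border of T."""
        for it in range(maxit):
            worst = 0.0
            for i in range(1, T.shape[0] - 1):
                for j in range(1, T.shape[1] - 1):
                    gs = 0.25 * (T[i+1, j] + T[i-1, j] + T[i, j+1] + T[i, j-1])
                    delta = omega * (gs - T[i, j])   # overrelaxed update
                    T[i, j] += delta
                    worst = max(worst, abs(delta))
            if worst < tol:
                return T, it
        return T, maxit

    T = np.zeros((40, 40))
    T[0, :] = 1000.0               # hypothetical hot boundary [K]
    T, sweeps = sor_laplace(T)
    print(sweeps, T[20, 20])
    ```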

  19. Some aspects of optimal human-computer symbiosis in multisensor geospatial data fusion

    NASA Astrophysics Data System (ADS)

    Levin, E.; Sergeyev, A.

    Nowadays the vast amount of available geospatial data provides additional opportunities for increased targeting accuracy through geospatial data fusion. One of the most obvious operations is the determination of targets' 3D shapes and geospatial positions based on overlapped 2D imagery and sensor modeling. 3D models allow for the extraction of information about targets that cannot be measured directly from single, non-fused imagery. This paper describes an ongoing research effort at Michigan Tech attempting to combine the advantages of human analysts and automated computer processing into an efficient human-computer symbiosis for geospatial data fusion. Specifically, the capabilities provided by integrating novel human-computer interaction methods such as eye tracking and EEG into geospatial targeting interfaces were explored. The paper describes the research performed and its results in more detail.

  20. SAGUARO: a finite-element computer program for partially saturated porous flow problems

    SciTech Connect

    Eaton, R.R.; Gartling, D.K.; Larson, D.E.

    1983-06-01

    SAGUARO is a finite element computer program designed to calculate two-dimensional flow of mass and energy through porous media. The media may be saturated or partially saturated. SAGUARO solves the parabolic time-dependent mass transport equation which accounts for the presence of partially saturated zones through the use of highly non-linear material characteristic curves. The energy equation accounts for the possibility of partially saturated regions by adjusting the thermal capacitances and thermal conductivities according to the volume fraction of water present in the local pores. Program capabilities, user instructions and a sample problem are presented in this manual.

  1. SAGUARO: A finite-element computer program for partially saturated porous flow problems

    NASA Astrophysics Data System (ADS)

    Eaton, R. R.; Gartling, D. K.; Larson, D. E.

    1983-11-01

    SAGUARO is a finite element computer program designed to calculate two-dimensional flow of mass and energy through porous media. The media may be saturated or partially saturated. SAGUARO solves the parabolic time-dependent mass transport equation which accounts for the presence of partially saturated zones through the use of highly non-linear material characteristic curves. The energy equation accounts for the possibility of partially saturated regions by adjusting the thermal capacitances and thermal conductivities according to the volume fraction of water present in the local pores. Program capabilities, user instructions and a sample problem are presented in this manual.

  2. Computing ferrite core losses at high frequency by finite elements method including temperature influence

    SciTech Connect

    Ahmed, B.; Ahmad, J.; Guy, G.

    1994-09-01

    A finite elements method coupled with the Preisach model of hysteresis is used to compute the ferrite losses in medium power transformers (10--60 kVA) working at relatively high frequencies (20--60 kHz) and with an excitation level of about 0.3 Tesla. The dynamic evolution of the permeability is taken into account. Simple and doubly cubic spline functions are used to account for temperature effects on the electric and magnetic parameters of the ferrite cores, respectively. The results are compared with test data obtained with 3C8 and B50 ferrites at different frequencies.
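
    The Preisach element of such a computation can be illustrated in scalar form as a superposition of relay hysterons on the half-plane α ≥ β. The sketch below uses uniform hysteron weights purely for illustration; an actual ferrite model would use an identified Preisach density together with the temperature-dependent spline corrections described above:

    ```python
    import numpy as np

    # relay hysterons on an (alpha >= beta) switching grid, uniform weights
    levels = np.linspace(-1.0, 1.0, 81)
    A, B = np.meshgrid(levels, levels, indexing="ij")  # alpha = up, beta = down
    valid = A >= B
    state = -np.ones_like(A)                           # all relays start 'down'

    def magnetization(h_path):
        """Return the normalized output m(h) along an input history h_path."""
        out = []
        for h in h_path:
            state[valid & (A <= h)] = 1.0              # relays switching up
            state[valid & (B >= h)] = -1.0             # relays switching down
            out.append(state[valid].mean())
        return np.array(out)

    h = np.concatenate([np.linspace(0, 1, 50),         # initial magnetization
                        np.linspace(1, -1, 100),       # descending branch
                        np.linspace(-1, 1, 100)])      # ascending branch
    m = magnetization(h)                               # traces a hysteresis loop
    ```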

  3. Wing-Body Aeroelasticity Using Finite-Difference Fluid/Finite-Element Structural Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.; Kutler, Paul (Technical Monitor)

    1994-01-01

    In recent years significant advances have been made for parallel computers in both hardware and software. Now parallel computers have become viable tools in computational mechanics. Many application codes developed on conventional computers have been modified to benefit from parallel computers. Significant speedups in some areas have been achieved by parallel computations. For single-discipline use of both fluid dynamics and structural dynamics, computations have been made on wing-body configurations using parallel computers. However, only a limited amount of work has been completed in combining these two disciplines for multidisciplinary applications. The prime reason is the increased level of complication associated with a multidisciplinary approach. In this work, procedures to compute aeroelasticity on parallel computers using direct coupling of fluid and structural equations will be investigated for wing-body configurations. The parallel computer selected for computations is an Intel iPSC/860 computer which is a distributed-memory, multiple-instruction, multiple data (MIMD) computer with 128 processors. In this study, the computational efficiency issues of parallel integration of both fluid and structural equations will be investigated in detail. The fluid and structural domains will be modeled using finite-difference and finite-element approaches, respectively. Results from the parallel computer will be compared with those from the conventional computers using a single processor. This study will provide an efficient computational tool for the aeroelastic analysis of wing-body structures on MIMD type parallel computers.

  4. US-Latin American Workshop on Molecular and Materials Sciences: Theoretical and Computational Aspects

    NASA Astrophysics Data System (ADS)

    Micha, David A.

    1994-08-01

    Partial contents include: time-dependent theory of photoabsorption processes; molecular simulation of a chemical reaction in supercritical water; many-body methods for electron correlation; conformational studies of PAF and PAF-antagonists; electric properties of atomic anions; theoretical interpretation of the Li4(-) spectrum using path integrals and ab initio methods; energy levels and structure of tetra-atomic van der Waals clusters; technology for modern computational science: the John Slater Computing Facility; carbohydrates in the stabilization of biological structures: molecular dynamics simulation; the role of quantum chemistry in heterogeneous catalysis; and corrections to the Born-Oppenheimer approximation by means of perturbation theory.

  5. Symbolic algorithms for the computation of Moshinsky brackets and nuclear matrix elements

    NASA Astrophysics Data System (ADS)

    Ursescu, D.; Tomaselli, M.; Kuehl, T.; Fritzsche, S.

    2005-12-01

    To facilitate the use of the extended nuclear shell model (NSM), a FERMI module for calculating some of its basic quantities in the framework of MAPLE is provided. The Moshinsky brackets, the matrix elements for several central and non-central interactions between nuclear two-particle states, as well as their expansion in terms of Talmi integrals, are easily given within a symbolic formulation. All of these quantities are available for interactive work. Program summary: Title of program: Fermi; Catalogue identifier: ADVO; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVO; Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland; Licensing provisions: none; Computer for which the program is designed and others on which it has been tested: all computers with a licence for the computer algebra package MAPLE [Maple is a registered trademark of Waterloo Maple Inc., produced by the MapleSoft division of Waterloo Maple Inc.]; Installations: GSI-Darmstadt; University of Kassel (Germany); Operating systems or monitors under which the program has been tested: Windows XP, Linux 2.4; Programming language used: MAPLE 8 and 9.5 from the MapleSoft division of Waterloo Maple Inc.; Memory required to execute with typical data: 30 MB; No. of lines in distributed program including test data etc.: 5742; No. of bytes in distributed program including test data etc.: 288 939; Distribution format: tar.gz; Nature of the physical problem: In order to perform calculations within the nuclear shell model (NSM), quick and reliable access to the nuclear matrix elements is required. These matrix elements, which arise from various types of forces among the nucleons, can be calculated using Moshinsky's transformation brackets between relative and center-of-mass coordinates [T.A. Brody, M. Moshinsky, Tables of Transformation Brackets, Monografias del Instituto de Fisica, Universidad Nacional Autonoma de Mexico, 1960] and by the proper use of the nuclear states in different coupling notations

  6. Computing interaural differences through finite element modeling of idealized human heads

    PubMed Central

    Cai, Tingli; Rakerd, Brad; Hartmann, William M.

    2015-01-01

    Acoustical interaural differences were computed for a succession of idealized shapes approximating the human head-related anatomy: sphere, ellipsoid, and ellipsoid with neck and torso. Calculations were done as a function of frequency (100–2500 Hz) and for source azimuths from 10 to 90 degrees using finite element models. The computations were compared to free-field measurements made with a manikin. Compared to a spherical head, the ellipsoid produced greater large-scale variation with frequency in both interaural time differences and interaural level differences, resulting in better agreement with the measurements. Adding a torso, represented either as a large plate or as a rectangular box below the neck, further improved the agreement by adding smaller-scale frequency variation. The comparisons permitted conjectures about the relationship between details of interaural differences and gross features of the human anatomy, such as the height of the head, and length of the neck. PMID:26428792

  7. Parallel Computations of Natural Convection Flow in a Tall Cavity Using an Explicit Finite Element Method

    SciTech Connect

    Dunn, T.A.; McCallen, R.C.

    2000-10-17

    The Galerkin Finite Element Method was used to predict a natural convection flow in an enclosed cavity. The problem considered was a differentially heated, tall (8:1), rectangular cavity with a Rayleigh number of 3.4 x 10^5 and a Prandtl number of 0.71. The incompressible Navier-Stokes equations were solved using a Boussinesq approximation for the buoyancy force. The algorithm was developed for efficient use on massively parallel computer systems. Emphasis was on time-accurate simulations. It was found that the average temperature and velocity values can be captured with a relatively coarse grid, while the oscillation amplitude and period appear to be grid sensitive and require a refined computation.

  8. Computer-Based Exercises To Supplement the Teaching of Stereochemical Aspects of Drug Action.

    ERIC Educational Resources Information Center

    Harrold, Marc W.

    1995-01-01

    At the Duquesne University (PA) school of pharmacy, five self-paced computer exercises using a molecular modeling program have been implemented to teach stereochemical concepts. The approach, designed for small-group learning, has been well received and found effective in enhancing students' understanding of the concepts. (Author/MSE)

  9. Aspects of implementing constant traction boundary conditions in computational homogenization via semi-Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Javili, A.; Saeb, S.; Steinmann, P.

    2016-10-01

    In the past decades computational homogenization has proven to be a powerful strategy to compute the overall response of continua. Central to computational homogenization is the Hill-Mandel condition. The Hill-Mandel condition is fulfilled via imposing displacement boundary conditions (DBC), periodic boundary conditions (PBC) or traction boundary conditions (TBC) collectively referred to as canonical boundary conditions. While DBC and PBC are widely implemented, TBC remains poorly understood, with a few exceptions. The main issue with TBC is the singularity of the stiffness matrix due to rigid body motions. The objective of this manuscript is to propose a generic strategy to implement TBC in the context of computational homogenization at finite strains. To eliminate rigid body motions, we introduce the concept of semi-Dirichlet boundary conditions. Semi-Dirichlet boundary conditions are non-homogeneous Dirichlet-type constraints that simultaneously satisfy the Neumann-type conditions. A key feature of the proposed methodology is its applicability for both strain-driven as well as stress-driven homogenization. The performance of the proposed scheme is demonstrated via a series of numerical examples.

  10. Tying Theory To Practice: Cognitive Aspects of Computer Interaction in the Design Process.

    ERIC Educational Resources Information Center

    Mikovec, Amy E.; Dake, Dennis M.

    The new medium of computer-aided design requires changes to the creative problem-solving methodologies typically employed in the development of new visual designs. Most theoretical models of creative problem-solving suggest a linear progression from preparation and incubation to some type of evaluative study of the "inspiration." These models give…

  11. A comparison of turbulence models in computing multi-element airfoil flows

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Menter, Florian; Durbin, Paul A.; Mansour, Nagi N.

    1994-01-01

    Four different turbulence models are used to compute the flow over a three-element airfoil configuration. These models are the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, a two-equation k-omega model, and a new one-equation Durbin-Mansour model. The flow is computed using the INS2D two-dimensional incompressible Navier-Stokes solver. An overset Chimera grid approach is utilized. Grid resolution tests are presented, and manual solution-adaptation of the grid was performed. The performance of each of the models is evaluated for test cases involving different angles-of-attack, Reynolds numbers, and flap riggings. The resulting surface pressure coefficients, skin friction, velocity profiles, and lift, drag, and moment coefficients are compared with experimental data. The models produce very similar results in most cases. Excellent agreement between computational and experimental surface pressures was observed, but only moderately good agreement was seen in the velocity profile data. In general, the difference between the predictions of the different models was less than the difference between the computational and experimental data.

  12. Computational aspects of hot-wire identification of thermal conductivity and diffusivity under high temperature

    NASA Astrophysics Data System (ADS)

    Vala, Jiří; Jarošová, Petra

    2016-07-01

    Development of advanced materials resistant to high temperature, needed namely for the design of heat storage for low-energy and passive buildings, requires simple, inexpensive and reliable methods for identifying their temperature-sensitive thermal conductivity and diffusivity, covering both a well-considered experimental setup and the implementation of robust and effective computational algorithms. Special geometrical configurations offer the possibility of quasi-analytical evaluation of temperature development for direct problems, whereas inverse problems of simultaneous evaluation of thermal conductivity and diffusivity must be handled carefully, using least-squares (minimum variance) arguments. This paper demonstrates a proper mathematical and computational approach to such a model problem, exploiting the radial symmetry of hot-wire measurements, including its numerical implementation.
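
    The quasi-analytical route mentioned here rests on the classical line-source solution, whose late-time temperature rise is linear in ln t: the slope of that line yields the conductivity and the intercept the diffusivity. A minimal least-squares sketch on synthetic data (all values are assumptions):

    ```python
    import numpy as np

    # ideal hot-wire (line-source) response:
    #   dT(t) ~ q/(4*pi*lam) * (ln(4*a*t/r**2) - gamma_E)
    q, r, gamma_E = 1.2, 1e-4, 0.5772     # heat input [W/m], wire radius [m], Euler's constant
    lam_true, a_true = 0.35, 2.0e-7       # conductivity [W/(m K)], diffusivity [m^2/s]

    t = np.linspace(5.0, 120.0, 200)
    dT = q / (4 * np.pi * lam_true) * (np.log(4 * a_true * t / r**2) - gamma_E)
    dT += np.random.default_rng(3).normal(0.0, 0.002, t.size)  # measurement noise

    # least-squares line in ln(t) recovers both parameters
    slope, intercept = np.polyfit(np.log(t), dT, 1)
    lam = q / (4 * np.pi * slope)
    a = r**2 / 4 * np.exp(intercept / slope + gamma_E)
    print(lam, a)
    ```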

  13. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.

  14. 2nd International Symposium on Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering (REES-2015)

    NASA Astrophysics Data System (ADS)

    Tavadyan, Levon, Prof; Sachkov, Viktor, Prof; Godymchuk, Anna, Dr.; Bogdan, Anna

    2016-01-01

    The 2nd International Symposium «Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering» (REES2015) was jointly organized by Tomsk State University (Russia), the National Academy of Science (Armenia), Shenyang Polytechnic University (China), the Moscow Institute of Physics and Engineering (Russia), the Siberian Physical-Technical Institute (Russia), and Tomsk Polytechnic University (Russia) on September 7-15, 2015, in Belokuriha, Russia. The Symposium provided high-quality presentations and gathered engineers, scientists, academicians, and young researchers working in the field of rare and rare earth elements mining, modification, separation, elaboration and application, in order to facilitate the sharing of interests and results for better collaboration and visibility of activities. The goal of REES2015 was to bring researchers and practitioners together to share the latest knowledge on rare and rare earth elements technologies. The Symposium was aimed at presenting new trends in rare and rare earth elements mining, research and separation, and recent achievements in advanced materials elaboration and development for different purposes, as well as strengthening the already existing contacts between manufacturers, highly qualified specialists and young scientists. The topics of REES2015 were: (1) Problems of extraction and separation of rare and rare earth elements; (2) Methods and approaches to the separation and isolation of rare and rare earth elements with ultra-high purity; (3) Industrial technologies of production and separation of rare and rare earth elements; (4) Economic aspects in technology of rare and rare earth elements; and (5) Rare and rare earth based materials (application in metallurgy, catalysis, medicine, optoelectronics, etc.). We want to thank the Organizing Committee, the Universities and Sponsors supporting the Symposium, and everyone who contributed to the organization of the event and to

  15. Adaptive finite element simulation of flow and transport applications on parallel computers

    NASA Astrophysics Data System (ADS)

    Kirk, Benjamin Shelton

    The subject of this work is the adaptive finite element simulation of problems arising in flow and transport applications on parallel computers. Of particular interest are new contributions to adaptive mesh refinement (AMR) in this parallel high-performance context, including novel work on data structures, treatment of constraints in a parallel setting, generality and extensibility via object-oriented programming, and the design/implementation of a flexible software framework. This technology and software capability then enables more robust, reliable treatment of multiscale--multiphysics problems and specific studies of fine scale interaction such as those in biological chemotaxis (Chapter 4) and high-speed shock physics for compressible flows (Chapter 5). The work begins by presenting an overview of key concepts and data structures employed in AMR simulations. Of particular interest is how these concepts are applied in the physics-independent software framework which is developed here and is the basis for all the numerical simulations performed in this work. This open-source software framework has been adopted by a number of researchers in the U.S. and abroad for use in a wide range of applications. The dynamic nature of adaptive simulations poses particular issues for efficient implementation on distributed-memory parallel architectures. Communication cost, computational load balance, and memory requirements must all be considered when developing adaptive software for this class of machines. Specific extensions to the adaptive data structures to enable implementation on parallel computers are therefore considered in detail. The libMesh framework for performing adaptive finite element simulations on parallel computers is developed to provide a concrete implementation of the above ideas. This physics-independent framework is applied to two distinct flow and transport applications classes in the subsequent application studies to illustrate the flexibility of the

  16. Computational Thermodynamics Modeling of Minor Element Distributions During Copper Flash Converting

    NASA Astrophysics Data System (ADS)

    Swinbourne, D. R.; Kho, T. S.

    2012-08-01

    Continuous copper converting processes are replacing traditional Peirce-Smith converters because they overcome most of the difficulties associated with this old batch technology. Most notably, they offer much improved environmental control of emissions. The Kennecott-Outotec flash converting process is attractive because it decouples smelting and converting, as well as offers high levels of sulfur capture. The success of a copper smelter depends on the way it controls the many minor elements that enter with the concentrate feed, and an understanding of the factors that control minor element distributions is essential. In this work, a computational thermodynamics model of the flash converter was developed and validated against published performance data. It was then used to predict the distribution behavior of lead, arsenic, bismuth, and cadmium, and the results matched the published data closely. It is suggested that the flash converter can be considered to approximate an equilibrium reactor and that minor elements distribute between the phases in a way that depends mostly on their thermodynamic properties.

  17. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  18. Delta: An object-oriented finite element code architecture for massively parallel computers

    SciTech Connect

    Weatherby, J.R.; Schutt, J.A.; Peery, J.S.; Hogan, R.E.

    1996-02-01

    Delta is an object-oriented code architecture based on the finite element method which enables simulation of a wide range of engineering mechanics problems in a parallel processing environment. Written in C++, Delta is a natural framework for algorithm development and for research involving coupling of mechanics from different Engineering Science disciplines. To enhance flexibility and encourage code reuse, the architecture provides a clean separation of the major aspects of finite element programming. Spatial discretization, temporal discretization, and the solution of linear and nonlinear systems of equations are each implemented separately, independent from the governing field equations. Other attractive features of the Delta architecture include support for constitutive models with internal variables, reusable "matrix-free" equation solvers, and support for region-to-region variations in the governing equations and the active degrees of freedom. A demonstration code built from the Delta architecture has been used in two-dimensional and three-dimensional simulations involving dynamic and quasi-static solid mechanics, transient and steady heat transport, and flow in porous media.

  19. Methodological aspects of in vitro assessment of bio-accessible risk element pool in urban particulate matter.

    PubMed

    Sysalová, Jiřina; Száková, Jiřina; Tremlová, Jana; Kašparovská, Kateřina; Kotlík, Bohumil; Tlustoš, Pavel; Svoboda, Petr

    2014-11-01

    In vitro tests simulating the release of elements from inhaled urban particulate matter (PM) with artificial lung fluids (Gamble's and Hatch's solutions) and simulated gastric and pancreatic solutions were applied for an estimation of hazardous element (As, Cd, Cr, Hg, Mn, Ni, Pb and Zn) bio-accessibility in this material. Inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS) were employed for the element determination in the extracted solutions. The effects of the extraction agent used, extraction time, sample-to-extractant ratio, sample particle size and/or individual element properties were evaluated. Different patterns for individual elements were observed when comparing Hatch's solution vs. the simulated gastric and pancreatic solutions. For Hatch's solution, a decreasing sample-to-extractant ratio in a PM size fraction of <0.063 mm resulted in increasing leached contents of all investigated elements. As already proved for other operationally defined extraction procedures, the extractable element portions are affected not only by their mobility in the particulate matter itself but also by the sample preparation procedure. Results of simulated in vitro tests can be applied for a reasonable estimation of bio-accessible element portions in particulate matter as an alternative method which, consequently, initiates further examinations including potential in vivo assessments. PMID:25123460

  1. Computer-Aided Drug Design (CADD): Methodological Aspects and Practical Applications in Cancer Research

    NASA Astrophysics Data System (ADS)

    Gianti, Eleonora

    Computer-Aided Drug Design (CADD) has deservedly gained increasing popularity in modern drug discovery (Schneider, G.; Fechner, U. 2005), whether applied to academic basic research or the pharmaceutical industry pipeline. In this work, after reviewing theoretical advancements in CADD, we integrated novel and state-of-the-art methods to assist in the design of small-molecule inhibitors of current cancer drug targets, specifically: Androgen Receptor (AR), a nuclear hormone receptor required for carcinogenesis of Prostate Cancer (PCa); Signal Transducer and Activator of Transcription 5 (STAT5), implicated in PCa progression; and Epstein-Barr Nuclear Antigen-1 (EBNA1), essential to the Epstein-Barr Virus (EBV) during latent infections. Androgen Receptor. With the aim of generating binding mode hypotheses for a class (Handratta, V.D. et al. 2005) of dual AR/CYP17 inhibitors (CYP17 is a key enzyme for androgen biosynthesis and therefore implicated in PCa development), we successfully implemented a receptor-based computational strategy based on flexible receptor docking (Gianti, E.; Zauhar, R.J. 2012). Then, with the ultimate goal of identifying novel AR binders, we performed Virtual Screening (VS) by Fragment-Based Shape Signatures, an improved version of the original method developed in our Laboratory (Zauhar, R.J. et al. 2003), and we used the results to fully assess the high-level performance of this innovative tool in computational chemistry. STAT5. The SRC Homology 2 (SH2) domain of STAT5 is responsible for phospho-peptide recognition and activation. As a keystone of Structure-Based Drug Design (SBDD), we characterized key residues responsible for binding. We also generated a model of the STAT5 receptor bound to a phospho-peptide ligand, which was validated by docking publicly known STAT5 inhibitors. Then, we performed Shape Signatures- and docking-based VS of the ZINC database (zinc.docking.org), followed by Molecular Mechanics Generalized Born Surface Area (MMGBSA

  2. Physics and engineering aspects of cell and tissue imaging systems: microscopic devices and computer assisted diagnosis.

    PubMed

    Chen, Xiaodong; Ren, Liqiang; Zheng, Bin; Liu, Hong

    2013-01-01

    The conventional optical microscope has been used widely in scientific research and in clinical practice. Modern digital microscopic devices combine the power of optical imaging with computerized analysis, archiving and communication techniques. They have great potential in pathological examinations for improving the efficiency and accuracy of clinical diagnosis. This chapter reviews the basic optical principles of conventional microscopes, fluorescence microscopes and electron microscopes. The recent developments and future clinical applications of advanced digital microscopic imaging methods and computer-assisted diagnosis schemes are also discussed.

  3. Computational aspects of real-time simulation of rotary-wing aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Houck, J. A.

    1976-01-01

    A study was conducted to determine the effects of degrading a rotating blade element rotor mathematical model suitable for real-time simulation of rotorcraft. Three methods of degradation were studied: reduction of the number of blades, reduction of the number of blade segments, and an increase of the integration interval, which has the corresponding effect of increasing the blade azimuthal advance angle. The three degradation methods were studied through static trim comparisons, total rotor force and moment comparisons, single blade force and moment comparisons over one complete revolution, and total vehicle dynamic response comparisons. Recommendations are made concerning model degradation which should serve as a guide for future users of this mathematical model; in general, they are, in order of minimum impact on model validity: (1) reduction of the number of blade segments; (2) reduction of the number of blades; and (3) increase of the integration interval and azimuthal advance angle. Extreme limits are specified beyond which a different rotor mathematical model should be used.

  4. Computer simulations of particle-bubble interactions and particle sliding using Discrete Element Method.

    PubMed

    Maxwell, R; Ata, S; Wanless, E J; Moreno-Atanasio, R

    2012-09-01

    Three-dimensional Discrete Element Method (DEM) computer simulations have been carried out to analyse the kinetics of collision of multiple particles against a stationary bubble and the sliding of the particles over the bubble surface. This is the first time that a computational analysis of the sliding time and particle packing arrangements of multiple particles on the surface of a bubble has been carried out. The collision kinetics of monodisperse (33 μm in radius) and polydisperse (12-33 μm in radius) particle systems have been analysed in terms of the time taken by 10%, 50% and 100% of the particles to collide against the bubble. The dependencies of these collision times on the strength of hydrophobic interactions follow relationships close to power laws. However, minimal sensitivity of the collision times to particle size was found when linear and square relationships of the hydrophobic force with particle radius were considered. The sliding time for single particles has corroborated published theoretical expressions. Finally, a good qualitative comparison with experiments was observed with respect to the particle packing at the bottom of the bubble after sliding, demonstrating the usefulness of computer simulations in studies of particle-bubble systems.
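
    The collision kinetics studied here rest on integrating Newton's equations per particle under a hydrophobic attraction and viscous drag. A one-particle, one-dimensional sketch of such a DEM-style update, with an illustrative force law and made-up constants rather than those of the paper:

    ```python
    import numpy as np

    # One particle approaching a stationary bubble along the line of centres:
    # buoyant weight drives the approach, Stokes drag resists, and a
    # short-range hydrophobic attraction (hypothetical power law) takes over
    # near contact.
    rho_p, rho_f, g = 2500.0, 1000.0, 9.81
    R_p, mu = 33e-6, 1e-3                    # particle radius [m], viscosity [Pa s]
    K_h, n = 1e-21, 2                        # assumed hydrophobic law F = K_h/h**n

    V = 4.0 / 3.0 * np.pi * R_p**3
    m = rho_p * V
    F_drive = (rho_p - rho_f) * V * g        # settling force toward the bubble

    h, v, t, dt = 50e-6, 0.0, 0.0, 1e-7      # gap [m], speed [m/s], time [s]
    while h > 1e-9:                          # stop at near-contact
        F = F_drive + K_h / h**n - 6 * np.pi * mu * R_p * v
        v += dt * F / m                      # explicit velocity update
        h -= dt * v                          # gap closes as particle approaches
        t += dt
    print(f"collision after ~{t * 1e3:.2f} ms")
    ```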

  5. Computational analysis of noise reduction devices in axial fans with stabilized finite element formulations

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Sheard, A. G.; Tezduyar, T. E.

    2012-12-01

    The paper illustrates how a computational fluid mechanics technique, based on stabilized finite element formulations, can be used in the analysis of noise reduction devices in axial fans. Among the noise control alternatives, the study focuses on the use of end-plates fitted at the blade tips to control the leakage flow and the related aeroacoustic sources. The end-plate shape is configured to govern the momentum transfer to the swirling flow at the blade tip. This flow control mechanism has been found to have a positive link to the fan aeroacoustics. The complex physics of the swirling flow at the tip, developing under the influence of the end-plate, is governed by the rolling up of the jet-like leakage flow. The RANS modelling used in the computations is based on the streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin methods, supplemented with the DRDJ stabilization. Judicious determination of the stabilization parameters involved is also a part of our computational technique and is described for each component of the stabilized formulation. We describe the flow physics underlying the design of the noise control device and illustrate the aerodynamic performance. Then we investigate the numerical performance of the formulation by analysing the inner workings of the stabilization operators and of their interaction with the turbulence model.

  6. MPSalsa: a finite element computer program for reacting flow problems. Part 2 - user's guide

    SciTech Connect

    Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.

    1996-09-01

    This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

  7. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in porous media with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.

  8. Use of SNP-arrays for ChIP assays: computational aspects.

    PubMed

    Muro, Enrique M; McCann, Jennifer A; Rudnicki, Michael A; Andrade-Navarro, Miguel A

    2009-01-01

    The simultaneous genotyping of thousands of single nucleotide polymorphisms (SNPs) in a genome using SNP-Arrays is a very important tool that is revolutionizing genetics and molecular biology. We expanded the utility of this technique by using it following chromatin immunoprecipitation (ChIP) to assess the multiple genomic locations protected by a protein complex recognized by an antibody. The power of this technique is illustrated through an analysis of the changes in histone H4 acetylation, a marker of open chromatin and transcriptionally active genomic regions, which occur during differentiation of human myoblasts into myotubes. The findings have been validated by the observation of a significant correlation between the detected histone modifications and the expression of the nearby genes, as measured by DNA expression microarrays. This chapter focuses on the computational analysis of the data.

  9. Computer Modelling of Functional Aspects of Noise in Endogenously Oscillating Neurons

    NASA Astrophysics Data System (ADS)

    Huber, M. T.; Dewald, M.; Voigt, K.; Braun, H. A.; Moss, F.

    1998-03-01

    Membrane potential oscillations are a widespread feature of neuronal activity. When such oscillations operate close to the spike-triggering threshold, noise can become an essential property of spike generation. Accordingly, we developed a minimal Hodgkin-Huxley-type computer model which includes a noise term. This model accounts for experimental data from quite different cells, ranging from mammalian cortical neurons to fish electroreceptors. With slight modifications of the parameters, the model's behavior can be tuned to bursting activity, which additionally allows it to mimic temperature encoding in peripheral cold receptors, including transitions to apparently chaotic dynamics as indicated by methods for the detection of unstable periodic orbits. Under all conditions, cooperative effects between noise and nonlinear dynamics can be shown which, beyond stochastic resonance, might be of functional significance for stimulus encoding and neuromodulation.
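
    A minimal sketch of how a noise term enters such a model: Euler-Maruyama integration of a two-variable oscillator. Here a FitzHugh-Nagumo form stands in for the Hodgkin-Huxley-type equations of the paper, with made-up parameters placing it just below threshold so that noise triggers spikes:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    dt, steps = 0.01, 100000
    a, b, eps, I, D = 0.7, 0.8, 0.08, 0.32, 0.02   # assumed parameters

    v, w = -1.2, -0.6
    trace = np.empty(steps)
    for k in range(steps):
        dv = v - v**3 / 3 - w + I                  # fast (voltage-like) variable
        dw = eps * (v + a - b * w)                 # slow recovery variable
        v += dt * dv + np.sqrt(2 * D * dt) * rng.standard_normal()  # noise term
        w += dt * dw
        trace[k] = v

    # count noise-induced, spike-like threshold crossings
    print("spikes:", np.sum((trace[1:] > 1.0) & (trace[:-1] <= 1.0)))
    ```

    Setting D = 0 in this subthreshold regime yields no spikes at all, which is the sense in which noise becomes an essential property of spike generation.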

  10. Computation of self-field hysteresis losses in conductors with helicoidal structure using a 2D finite element method

    NASA Astrophysics Data System (ADS)

    Stenvall, A.; Siahrang, M.; Grilli, F.; Sirois, F.

    2013-04-01

    It is well known that twisting current-carrying conductors helps to reduce their coupling losses. However, the impact of twisting on self-field hysteresis losses has not been as extensively investigated as that on the reduction of coupling losses. This is mostly because the reduction of coupling losses has been an important issue to tackle in the past, and it is not possible to consider twisting within the classical two-dimensional (2D) approaches for the computation of self-field hysteresis losses. Recently, numerical codes considering the effect of twisting in continuous symmetries have appeared. For general three-dimensional (3D) simulations, one issue is that no robust, widely accepted and easy to obtain model for expressing the relationship between the current density and the electric field is available. On the other hand, we can consider that in these helicoidal structures currents flow only along the helicoidal trajectories. This approach allows one to use the scalar power law for the superconductor resistivity and makes an eddy-current approach to the solution of the hysteresis loss problem feasible. In this paper we use the finite element method to solve the eddy current model in helicoidal structures in 2D domains utilizing the helicoidal symmetry. The developed tool uses the full 3D geometry but allows a discretization which takes advantage of the helicoidal symmetry to reduce the computational domain to a 2D one. We utilize in this tool the non-linear power law for modelling the resistivity in the superconducting regions and study how the self-field losses are influenced by the twisting of a 10-filament wire. Additionally, in the case of high-aspect-ratio tapes, we compare the results computed with the new tool and a one-dimensional program based on the integral equation method and developed for simulating single-layer power cables made of ReBCO coated conductors. Finally, we discuss modelling issues and present open questions related to helicoidal structures.
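
    For reference, the scalar power law mentioned above relates electric field and current density as E = E0 (|J|/Jc)^n; in an eddy-current formulation it enters as a nonlinear resistivity rho(J) = E/|J|. A minimal sketch with typical (assumed) coated-conductor values:

    ```python
    import numpy as np

    E0, Jc, n = 1e-4, 1e8, 25   # V/m criterion, critical current density [A/m^2], index

    def rho_sc(J):
        """Nonlinear resistivity from E = E0*(|J|/Jc)**n, usable as the
        material law in the superconducting regions of an eddy-current model."""
        Jabs = np.maximum(np.abs(J), 1e-6 * Jc)   # floor avoids division by zero
        return (E0 / Jabs) * (Jabs / Jc) ** n

    print(rho_sc(0.9 * Jc))   # resistivity just below the critical current
    ```

    The steep exponent n makes the resistivity rise by orders of magnitude as |J| approaches Jc, which is what confines the current and produces hysteresis-like behavior in such models.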

  11. A study of equation solvers for linear and non-linear finite element analysis on parallel processing computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Kamat, Manohar P.

    1992-01-01

    Concurrent computing environments provide the means to achieve very high performance for finite element analysis of systems, provided the algorithms take advantage of multiple processors. The authors have examined several algorithms for both linear and nonlinear finite element analysis. The performance of these algorithms on an Alliant FX/80 parallel supercomputer has been studied. For single load case linear analysis, the optimal solution algorithm is strongly problem dependent. For multiple load cases or nonlinear analysis through a modified Newton-Raphson method, decomposition algorithms are shown to have a decided advantage over element-by-element preconditioned conjugate gradient algorithms.
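
    For orientation, the sketch below shows a generic Jacobi (diagonal) preconditioned conjugate gradient solver, the simplest member of the preconditioned-CG family compared in the study; it is a textbook formulation, not the authors' Alliant FX/80 implementation.

        # Jacobi-preconditioned conjugate gradient for an SPD system K u = f.
        import numpy as np

        def pcg(K, f, tol=1e-8, max_iter=1000):
            u = np.zeros_like(f)
            Minv = 1.0 / np.diag(K)          # inverse of the diagonal preconditioner
            r = f - K @ u
            z = Minv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Kp = K @ p
                alpha = rz / (p @ Kp)
                u += alpha * p
                r -= alpha * Kp
                if np.linalg.norm(r) < tol * np.linalg.norm(f):
                    break
                z = Minv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return u

        K = np.array([[4.0, 1.0], [1.0, 3.0]])   # toy stiffness matrix
        f = np.array([1.0, 2.0])
        print(pcg(K, f))                         # approx. [0.0909, 0.6364]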

  12. Towards increased speed computations in 3D moving eddy current finite element modelling

    SciTech Connect

    Allen, N.; Rodger, D.; Coles, P.C.; Street, S.; Leonard, P.J.

    1995-11-01

    Attractive and drag forces on such devices as magnetically levitated (MAGLEV) vehicles and magnetic bearings are crucially dependent on induced eddy currents. A finite element scheme used to model eddy current problems with motional velocity is described here. The formulation is a variation on the A − ψ method. An additional Minkowski-transformation term is required to take the velocity into account. However, computational instability arises when the velocity increases to the point that the first-order velocity terms severely dominate the second-order diffusion terms. The method presented here uses upwinding to help regain stability. An additional degree of stability is gained at higher speeds by using a lower-speed result as an initial vector. This leads to a reduced permeability in saturated regions, which counterbalances to some extent the increase in velocity. The method is validated by experimental measurement.
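
    The stabilizing role of upwinding can be reproduced in one dimension. The sketch below is a generic convection-diffusion example, not the authors' A − ψ formulation: at a cell Peclet number well above one, central differencing of the first-order term produces oscillations, while first-order upwinding stays monotone at the cost of extra numerical diffusion.

        # 1D steady convection-diffusion: -nu*u'' + a*u' = 0, u(0)=0, u(1)=1.
        import numpy as np

        N, a, nu = 20, 1.0, 0.005            # nodes, velocity, diffusivity
        h = 1.0 / (N - 1)
        print(f"cell Peclet number a*h/(2*nu) = {a * h / (2 * nu):.1f}")

        def solve(upwind):
            A = np.zeros((N, N)); b = np.zeros(N)
            A[0, 0] = A[-1, -1] = 1.0; b[-1] = 1.0        # Dirichlet ends
            for i in range(1, N - 1):
                # diffusion term, central in both schemes
                A[i, i-1] += -nu / h**2; A[i, i] += 2 * nu / h**2; A[i, i+1] += -nu / h**2
                if upwind:                   # backward difference for a > 0
                    A[i, i-1] += -a / h; A[i, i] += a / h
                else:                        # central difference
                    A[i, i-1] += -a / (2 * h); A[i, i+1] += a / (2 * h)
            return np.linalg.solve(A, b)

        uc, uu = solve(False), solve(True)
        print("central min/max:", uc.min(), uc.max())     # over- and undershoots
        print("upwind  min/max:", uu.min(), uu.max())     # stays within [0, 1]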

  13. Preprocessor and postprocessor computer programs for a radial-flow finite-element model

    USGS Publications Warehouse

    Pucci, A.A.; Pope, D.A.

    1987-01-01

    Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program: (1) generates a triangular finite element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file to use with the radial-flow model. The postprocessor program is a version of the radial-flow model, which was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes. (Author's abstract)

  14. Computational aspects of the nonlinear normal mode initialization of the GLAS 4th order GCM

    NASA Technical Reports Server (NTRS)

    Navon, I. M.; Bloom, S. C.; Takacs, L.

    1984-01-01

    Using the normal modes of the GLAS 4th Order Model, a Machenhauer nonlinear normal mode initialization (NLNMI) was carried out for the external vertical mode with the GLAS 4th Order shallow water equations model, for the equivalent depth associated with the external vertical mode. A simple procedure was devised to identify computational modes by following the rate of increase of BAL_m, the partial (with respect to the zonal wavenumber m) sum of squares of the time change of the normal mode coefficients (for fixed vertical mode index), varying over the latitude index L of symmetric or antisymmetric gravity waves. A working algorithm is presented which speeds up the convergence of the iterative Machenhauer NLNMI. A 24 h integration using the NLNMI state was carried out using both Matsuno and leap-frog time-integration schemes; these runs were then compared to a 24 h integration starting from a non-initialized state. The maximal impact of the nonlinear normal mode initialization was found to occur 6-10 hours after the initial time.

  15. RELATIONSHIP BETWEEN RIGIDITY OF EXTERNAL FIXATOR AND NUMBER OF PINS: COMPUTER ANALYSIS USING FINITE ELEMENTS

    PubMed Central

    Sternick, Marcelo Back; Dallacosta, Darlan; Bento, Daniela Águida; do Reis, Marcelo Lemos

    2015-01-01

    Objective: To analyze the rigidity of a platform-type external fixator assembly, according to different numbers of pins on each clamp. Methods: Computer simulation on a large-sized Cromus dynamic external fixator (Baumer SA) was performed using a finite element method, in accordance with the standard ASTM F1541. The models were generated with approximately 450,000 quadratic tetrahedral elements. Assemblies with two, three and four Schanz pins of 5.5 mm in diameter in each clamp were compared. Every model was subjected to a maximum force of 200 N, divided into 10 sub-steps. For the components, the behavior of the material was assumed to be linear, elastic, isotropic and homogeneous. For each model, the rigidity of the assembly and the von Mises stress distribution were evaluated. Results: The rigidity of the system was 307.6 N/mm for two pins, 369.0 N/mm for three and 437.9 N/mm for four. Conclusion: The results showed that four Schanz pins in each clamp promoted rigidity that was 19% greater than in the configuration with three pins and 42% greater than with two pins. Higher stresses occurred in configurations with fewer pins. In the models analyzed, the maximum stress occurred on the surface of the pin, close to the fixation area. PMID:27047879

  16. Development of a numerical computer code and circuit element models for simulation of firing systems

    SciTech Connect

    Carpenter, K.H. . Dept. of Electrical and Computer Engineering)

    1990-07-02

    Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MSDOS-based personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U. C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite-cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a non-ideal switch, and which can be added to existing SPICE circuits without changing the SPICE code itself. The effect of fast rise-time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.

  17. Predicting mouse vertebra strength with micro-computed tomography-derived finite element analysis

    PubMed Central

    Nyman, Jeffry S; Uppuganti, Sasidhar; Makowski, Alexander J; Rowland, Barbara J; Merkel, Alyssa R; Sterling, Julie A; Bredbenner, Todd L; Perrien, Daniel S

    2015-01-01

    As in clinical studies, finite element analyses (FEA) developed from computed tomography (CT) images of bones are useful in pre-clinical rodent studies assessing treatment effects on vertebral body (VB) strength. Since strength predictions from microCT-derived FEAs (μFEA) have not been validated against experimental measurements of mouse VB strength, a parametric analysis exploring material and failure definitions was performed to determine whether elastic μFEAs with linear failure criteria could reasonably assess VB strength in two studies, treatment and genetic, with differences in bone volume fraction between the control and the experimental groups. VBs were scanned with a 12-μm voxel size, and voxels were directly converted to 8-node, hexahedral elements. The coefficient of determination or R2 between predicted VB strength and experimental VB strength, as determined from compression tests, was 62.3% for the treatment study and 85.3% for the genetic study when using a homogeneous tissue modulus (Et) of 18 GPa for all elements, a failure volume of 2%, and an equivalent failure strain of 0.007. The difference between prediction and measurement (that is, error) increased when lowering the failure volume to 0.1% or increasing it to 4%. Using inhomogeneous tissue density-specific moduli improved the R2 between predicted and experimental strength when compared with uniform Et=18 GPa. Also, the optimum failure volume is higher for the inhomogeneous than for the homogeneous material definition. Regardless of model assumptions, μFEA can assess differences in murine VB strength between experimental groups when the expected difference in strength is at least 20%. PMID:25908967
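
    The linear failure criterion used above lends itself to a compact sketch: in an elastic model, element strains scale linearly with the applied load, so the predicted strength is the load factor at which the failure volume (2%) first exceeds the equivalent failure strain (0.007). The element strains below are synthetic stand-ins for μFEA output, not data from the study.

        # Strength from a linear FEA via a failure-volume criterion (synthetic data).
        import numpy as np

        rng = np.random.default_rng(1)
        eps_ref = rng.lognormal(np.log(2e-3), 0.5, 200_000)  # element strains at unit load
        vol = np.ones_like(eps_ref)                          # equal-sized voxel elements
        fail_strain, fail_vol_frac = 0.007, 0.02

        def failed_fraction(load):
            """Volume fraction whose linearly scaled strain exceeds fail_strain."""
            return vol[eps_ref * load > fail_strain].sum() / vol.sum()

        lo, hi = 0.0, 100.0                  # bisect on the load factor
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if failed_fraction(mid) < fail_vol_frac else (lo, mid)
        print(f"predicted strength ~ {0.5 * (lo + hi):.3f} x unit load")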

  19. Modeling aspects and computational methods for some recent problems of tomographic imaging

    NASA Astrophysics Data System (ADS)

    Allmaras, Moritz

    In this dissertation, two recent problems from tomographic imaging are studied, and results from numerical simulations with synthetic data are presented. The first part deals with ultrasound modulated optical tomography, a method for imaging interior optical properties of partially translucent media that combines optical contrast with ultrasound resolution. The primary application is the optical imaging of soft tissue, for which scattering and absorption rates contain important functional and structural information about the physiological state of tissue cells. We developed a mathematical model based on the diffusion approximation for photon propagation in highly scattering media. Simple reconstruction schemes for recovering optical absorption rates from boundary measurements with focused ultrasound are presented. We show numerical reconstructions from synthetic data generated for mathematical absorption phantoms. The results indicate that high resolution imaging with quantitatively correct values of absorption is possible. Synthetic focusing techniques are suggested that allow reconstruction from measurements with certain types of non-focused ultrasound signals. A preliminary stability analysis for a linearized model is given that provides an initial explanation for the observed stability of reconstruction. In the second part, backprojection schemes are proposed for the detection of small amounts of highly enriched nuclear material inside 3D volumes. These schemes rely on the geometrically singular structure that small radioactive sources represent, compared to natural background radiation. The details of the detection problem are explained, and two types of measurements, collimated and Compton-type measurements, are discussed. Computationally, we implemented backprojection by counting the number of particle trajectories intersecting each voxel of a regular rectangular grid covering the domain of detection. For collimated measurements, we derived confidence
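
    The voxel-counting backprojection of the second part can be sketched as follows; the grid size, ray count and source position are invented for illustration, and the rays are traced by dense point sampling rather than an exact voxel-walking algorithm.

        # Backprojection by counting ray-voxel intersections on a regular grid.
        import numpy as np

        nv = 32
        grid = np.zeros((nv, nv, nv))                # counts per voxel, unit cube
        voxel = 1.0 / nv

        def backproject(origin, direction, steps=2000):
            t = np.linspace(0.0, np.sqrt(3.0), steps)
            pts = origin + t[:, None] * direction / np.linalg.norm(direction)
            idx = np.floor(pts / voxel).astype(int)
            idx = idx[np.all((idx >= 0) & (idx < nv), axis=1)]
            grid[tuple(np.unique(idx, axis=0).T)] += 1   # one count per voxel per ray

        rng = np.random.default_rng(2)
        src = np.array([0.40, 0.55, 0.50])           # hidden point source
        for _ in range(500):                         # measured trajectories through it
            d = rng.standard_normal(3)
            backproject(src - 0.9 * d / np.linalg.norm(d), d)

        hot = np.unravel_index(grid.argmax(), grid.shape)
        print("hottest voxel:", hot, "-> center", (np.array(hot) + 0.5) * voxel)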

  20. Mathematical and computational aspects of quaternary liquid mixing free energy measurement using light scattering.

    PubMed

    Wahle, Chris W; Ross, David S; Thurston, George M

    2012-07-21

    We provide a mathematical and computational analysis of light scattering measurement of mixing free energies of quaternary isotropic liquids. In previous work, we analyzed mathematical and experimental design considerations for the ternary mixture case [D. Ross, G. Thurston, and C. Lutzer, J. Chem. Phys. 129, 064106 (2008); C. Wahle, D. Ross, and G. Thurston, J. Chem. Phys. 137, 034201 (2012)]. Here, we review and introduce dimension-free general formulations of the fully nonlinear partial differential equation (PDE) and its linearization, a basis for applying the method to composition spaces of any dimension, in principle. With numerical analysis of the PDE as applied to the light scattering implied by a test free energy and dielectric gradient combination, we show that values of the Rayleigh ratio within the quaternary composition tetrahedron can be used to correctly reconstruct the composition dependence of the free energy. We then extend the analysis to the case of a finite number of data points, measured with noise. In this context the linearized PDE describes the relevant diffusion of information from light scattering noise to the free energy. The fully nonlinear PDE creates a special set of curves in the composition tetrahedron, collections of which form characteristics of the nonlinear and linear PDEs, and we show that the information diffusion has a time-like direction along the positive normals to these curves. With use of Monte Carlo simulations of light scattering experiments, we find that for a modest laboratory light scattering setup, about 100-200 samples and 100 s of measurement time are enough to be able to measure the mixing free energy over the entire quaternary composition tetrahedron, to within an L2 error norm of 10^-3. The present method can help quantify thermodynamics of quaternary isotropic liquid mixtures.

  1. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  2. Optimized parallel computing for cellular automaton-finite element modeling of solidification grain structures

    NASA Astrophysics Data System (ADS)

    Carozzani, T.; Gandin, Ch-A.; Digonnet, H.

    2014-01-01

    A numerical implementation of a three-dimensional (3D) cellular automaton (CA)-finite element (FE) model has been developed for the prediction of solidification grain structures. For the first time, it relies on optimized parallel computation to solve industrial-scale problems (centimeter to meter long) while using a sufficiently small CA grid size to predict representative structures. Several algorithm modifications and strategies to maximize parallel efficiency are introduced. Improvements on a real case simulation are measured and discussed. The CA-FE implementation here is demonstrated using 32 computing units to predict grain structure in a 2.08 m × 0.382 m × 0.382 m ingot involving 4.9 billion cells and 1.6 million grains. These numerical improvements permit tracking of local changes in texture and grain size over real-cast parts while integrating interactions with macrosegregation, heat flow and fluid flow. Full 3D is essential in all these analyses, and can be dealt with successfully using the implementation presented here.

  3. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    SciTech Connect

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  4. mFES: A Robust Molecular Finite Element Solver for Electrostatic Energy Computations.

    PubMed

    Sakalli, I; Schöberl, J; Knapp, E W

    2014-11-11

    We present a robust method for the calculation of electrostatic potentials of large molecular systems using tetrahedral finite elements (FE). Compared to the finite difference (FD) method using a regular simple cubic grid to solve the Poisson equation, the FE method can reach high accuracy and efficiency using an adaptive grid. Here, the grid points can be adjusted and are placed directly on the molecular surfaces to faithfully model surfaces and volumes. The grid point density decreases rapidly toward the asymptotic boundary to reach very large distances with just a few more grid points. A broad set of tools are applied to make the grid more regular and thus provide a more stable linear equation system, while reducing the number of grid points without compromising accuracy. The latter reduces the number of unknowns significantly and yields shorter solver execution times. The accuracy is further enhanced by using second order polynomials as shape functions. Generating the adaptive grid for a molecular system is expensive, but it pays off if the same molecular geometry is used several times, as is the case for pKa and redox potential computations of many charge-variable groups in proteins. Application of the mFES method is also advantageous if the molecular system is too large to reach sufficient accuracy when computing the electrostatic potential with conventional FD methods. The program mFES is free of charge and available at http://agknapp.chemie.fu-berlin.de/mfes. PMID:26584389

  5. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.

    2016-05-01

    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.

  7. Computational modeling of chemo-electro-mechanical coupling: A novel implicit monolithic finite element approach

    PubMed Central

    Wong, J.; Göktepe, S.; Kuhl, E.

    2014-01-01

    Computational modeling of the human heart allows us to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of an individual heart cell translates into global cardiac function. Here we propose a novel, unified strategy to simulate excitable biological systems across three biological scales. To discretize the governing chemical, electrical, and mechanical equations in space, we propose a monolithic finite element scheme. We apply a highly efficient and inherently modular global-local split, in which the deformation and the transmembrane potential are introduced globally as nodal degrees of freedom, while the chemical state variables are treated locally as internal variables. To ensure unconditional algorithmic stability, we apply an implicit backward Euler finite difference scheme to discretize the resulting system in time. To increase algorithmic robustness and guarantee optimal quadratic convergence, we suggest an incremental iterative Newton-Raphson scheme. The proposed algorithm allows us to simulate the interaction of chemical, electrical, and mechanical fields during a representative cardiac cycle on a patient-specific geometry, robustly and stably, with calculation times on the order of four days on a standard desktop computer. PMID:23798328
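
    The time-integration structure described above, a backward Euler step solved by an incremental Newton-Raphson iteration, can be sketched on a scalar problem. The cubic reaction term below is a stand-in for the coupled chemo-electro-mechanical residual, chosen only to expose the residual/tangent structure of one implicit step.

        # One backward Euler step u_{n+1} = u_n + dt*f(u_{n+1}), solved by Newton.
        def f(u):  return u - u**3             # stand-in "excitable" reaction term
        def df(u): return 1.0 - 3.0 * u**2     # its consistent tangent

        def backward_euler_step(u_n, dt, tol=1e-12, max_iter=20):
            u = u_n                            # initial guess: previous solution
            for _ in range(max_iter):
                r = u - u_n - dt * f(u)        # residual of the implicit equation
                if abs(r) < tol:
                    break
                u -= r / (1.0 - dt * df(u))    # Newton update with exact Jacobian
            return u

        u, dt = 0.1, 0.5
        for _ in range(10):
            u = backward_euler_step(u, dt)
        print(f"u after 10 implicit steps: {u:.6f}")   # approaches the stable root 1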

  8. COYOTE : a finite element computer program for nonlinear heat conduction problems. Part I, theoretical background.

    SciTech Connect

    Glass, Micheal W.; Hogan, Roy E., Jr.; Gartling, David K.

    2010-03-01

    The need for the engineering analysis of systems in which the transport of thermal energy occurs primarily through a conduction process is a common situation. For all but the simplest geometries and boundary conditions, analytic solutions to heat conduction problems are unavailable, thus forcing the analyst to call upon some type of approximate numerical procedure. A wide variety of numerical packages currently exist for such applications, ranging in sophistication from the large, general purpose, commercial codes, such as COMSOL, COSMOSWorks, ABAQUS and TSS, to codes written by individuals for specific problem applications. The original purpose for developing the finite element code described here, COYOTE, was to bridge the gap between the complex commercial codes and the more simplistic, individual application programs. COYOTE was designed to treat most of the standard conduction problems of interest with a user-oriented input structure and format that was easily learned and remembered. Because of its architecture, the code has also proved useful for research in numerical algorithms and development of thermal analysis capabilities. This general philosophy has been retained in the current version of the program, COYOTE, Version 5.0, though the capabilities of the code have been significantly expanded. A major change in the code is its availability on parallel computer architectures and the increase in problem complexity and size that this implies. The present document describes the theoretical and numerical background for the COYOTE program. This volume is intended as a background document for the user's manual. Potential users of COYOTE are encouraged to become familiar with the present report and the simple example analyses reported herein before using the program. The theoretical and numerical background for the finite element computer program, COYOTE, is presented in detail. COYOTE is designed for the multi-dimensional analysis of nonlinear heat conduction problems.

  9. Linear Algebra Aspects in the Equilibrium-Based Implementation of Finite/Boundary Element Methods for FGMs

    NASA Astrophysics Data System (ADS)

    Dumont, Ney Augusto

    2008-02-01

    The paper briefly outlines the conventional and three variational implementations of the boundary element method, pointing out the conceptual imbrications of their constituent matrices. The nature of fundamental solutions is investigated in terms of the resulting matrix spectral properties, as applied to multiply-connected domains, reentrant corners and FGMs.

  10. Effect of alloying elements on passivity and breakdown of passivity of Fe- and Ni-based alloys mechanistics aspects

    SciTech Connect

    Szklarska-Amialowska, Z.

    1992-06-01

    On the basis of the literature data and the current results, a mechanism for the pitting corrosion of Al-alloys is proposed. An assumption is made that the transport of Cl- ions through defects in the passive film of aluminum and aluminum alloys is not a rate-determining step in pitting. Pit development is controlled by the solubility of the oxidized alloying elements in acid solutions. A very good correlation was found between the pitting potential and the oxidized alloying elements for metastable Al-Cr, Al-Zr, Al-W, and Al-Zn alloys. We expect that the effect of oxidized alloying elements in other passive alloys will be the same as in Al-alloys. To verify this hypothesis, susceptibility to pitting as a function of alloying elements in the binary alloys, together with the composition of the oxide film, has to be measured. We propose studying Fe- and Ni-alloys produced by a sputtering deposition method. Using this method, a single-phase alloy can be obtained even when the two metals are immiscible using conventional methods. Another advantage of studying sputtered alloys is the possibility of finding new materials with superior resistance to localized corrosion.

  11. Computation of the velocity field and mass balance in the finite-element modeling of groundwater flow

    SciTech Connect

    Yeh, G. T.

    1980-01-01

    Darcian velocity has been conventionally calculated in the finite-element modeling of groundwater flow by taking the derivatives of the computed pressure field. This results in discontinuities in the velocity field at nodal points and element boundaries. Discontinuities become enormous when the computed pressure field is far from a linear distribution. It is proposed in this paper that the finite element procedure that is used to simulate the pressure field or the moisture content field also be applied to Darcy's law with the derivatives of the computed pressure field as the load function. The problem of discontinuity is then eliminated, and the error of mass balance over the region of interest is much reduced. The reduction is from 23.8 to 2.2% by one numerical scheme and from 29.7 to -3.6% by another for a transient problem.
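
    A minimal 1D sketch of the proposed projection follows: rather than reporting the discontinuous element-by-element pressure gradient, the same Galerkin procedure is applied to Darcy's law, solving M v = b with b_i equal to the integral of N_i * (-k dp_h/dx), which yields single-valued nodal velocities. The mesh and pressure field are illustrative.

        # Galerkin (L2) projection of the Darcy velocity on 1D linear elements.
        import numpy as np

        nn = 11; x = np.linspace(0.0, 1.0, nn); k = 1.0
        p = 1.0 - x**2                        # nodal pressures (exact v = 2x)

        M = np.zeros((nn, nn)); b = np.zeros(nn)
        for e in range(nn - 1):
            h = x[e+1] - x[e]
            dpdx = (p[e+1] - p[e]) / h        # elementwise-constant gradient
            M[e:e+2, e:e+2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
            b[e:e+2] += -k * dpdx * h / 2.0   # load vector from Darcy's law

        v = np.linalg.solve(M, b)             # continuous nodal Darcy velocities
        print(np.round(v, 3))                 # close to 2x, single-valued at nodes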

  12. Analysis of strength and failure pattern of human proximal femur using quantitative computed tomography (QCT)-based finite element method.

    PubMed

    Mirzaei, Majid; Keshavarzian, Maziyar; Naeini, Vahid

    2014-07-01

    This paper presents a novel method for fast and reliable prediction of the failure strength of human proximal femur, using the quantitative computed tomography (QCT)-based linear finite element analysis (FEA). Ten fresh frozen human femora (age: 34±16) were QCT-scanned and the pertinent 3D voxel-based finite element models were constructed. A specially-designed holding frame was used to define and maintain a unique geometrical reference system for both FEA and in-vitro mechanical testing. The analyses and tests were carried out at 8 different loading orientations. A new scheme was developed for assortment of the element risk factor (defined as the ratio of the strain energy density to the yield strain energy for each element) and implemented for the prediction of the failure strength. The predicted and observed failure patterns were in correspondence, and the FEA predictions of the failure loads were in very good agreement with the experimental results (R2=0.86, slope=0.96, p<0.01). The average computational time was 5 min (on a regular desktop personal computer) for an average element number of 197,000. Noting that the run-time for a similar nonlinear model is about 8 h, it was concluded that the proposed linear scheme is highly efficient in terms of computational cost. Thus, it can efficiently be used to predict the femoral failure strength with the same accuracy as similar nonlinear models. PMID:24735974

  13. Verification of a non-hydrostatic dynamical core using horizontally spectral element vertically finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-06-01

    The non-hydrostatic (NH) compressible Euler equations of dry atmosphere are solved in a simplified two dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative terms and quadrature. The Euler equations used here are in a flux form based on the hydrostatic pressure vertical coordinate, which are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate is implemented in this model. We verified the model by conducting widely used standard benchmark tests: the inertia-gravity wave, rising thermal bubble, density current wave, and linear hydrostatic mountain wave. The results from those tests demonstrate that the horizontally spectral element vertically finite difference model is accurate and robust. By using the 2-D slice model, we effectively show that the combined spatial discretization method of the spectral element and finite difference method in the horizontal and vertical directions, respectively, offers a viable method for the development of a NH dynamical core.
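
    The GLL machinery underlying the horizontal discretization can be sketched as follows: the nodes of order N are the endpoints ±1 plus the roots of P_N'(x), and the quadrature weights are 2/(N(N+1)P_N(x_i)^2). This is standard spectral element bookkeeping, not code from the dynamical core itself.

        # Gauss-Lobatto-Legendre nodes and weights on [-1, 1].
        import numpy as np
        from numpy.polynomial import legendre as L

        def gll_nodes_weights(N):
            cN = np.zeros(N + 1); cN[-1] = 1.0       # coefficients of P_N
            interior = L.legroots(L.legder(cN))      # roots of P_N'
            x = np.concatenate(([-1.0], interior, [1.0]))
            w = 2.0 / (N * (N + 1) * L.legval(x, cN) ** 2)
            return x, w

        x, w = gll_nodes_weights(4)
        print(np.round(x, 6))                        # [-1, -0.654654, 0, 0.654654, 1]
        print(np.round(w, 6), w.sum())               # weights sum to 2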

  14. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity, and for small numbers of modes the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
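
    The semi-analytical idea can be illustrated on a static analogue: differentiate the governing equation K(p) u = f analytically, du/dp = -K^{-1} (dK/dp) u, while approximating the coefficient-matrix derivative dK/dp by finite differences. The two-spring system below is a stand-in for the large-order transient FE model of the study.

        # Semi-analytical sensitivity on a toy system vs. overall finite differences.
        import numpy as np

        def K(p):  # stiffness of two springs, p = stiffness of the grounded spring
            return np.array([[p + 2.0, -2.0], [-2.0, 2.0]])

        f = np.array([0.0, 1.0]); p0, dp = 3.0, 1e-6
        u = np.linalg.solve(K(p0), f)

        dKdp = (K(p0 + dp) - K(p0)) / dp       # finite-difference matrix derivative
        dudp_semi = np.linalg.solve(K(p0), -dKdp @ u)

        dudp_fd = (np.linalg.solve(K(p0 + dp), f) - u) / dp   # overall finite difference
        print(dudp_semi, dudp_fd)              # both approx. [-1/9, -1/9]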

  15. Using Finite Volume Element Definitions to Compute the Gravitation of Irregular Small Bodies

    NASA Astrophysics Data System (ADS)

    Zhao, Y. H.; Hu, S. C.; Wang, S.; Ji, J. H.

    2015-03-01

    In the orbit design procedure for small-body exploration missions, it is important to take the gravitation of the small bodies into account. However, a majority of the small bodies in the solar system are irregularly shaped, with non-uniform density distributions, which makes it difficult to precisely calculate the gravitation of these bodies. This paper proposes a method to model the gravitational field of an irregularly shaped small body and calculate the corresponding spherical harmonic coefficients. The method is based on the shape of the small body derived from light curve data via observation, and uses finite volume elements to approximate the body shape. The spherical harmonic parameters can be derived numerically by computing the integrals according to their definition. A comparison with the polyhedral method is also shown. We take the asteroid (433) Eros as an example. Spherical harmonic coefficients resulting from this method are compared with the results derived from the tracking data obtained by the NEAR (Near-Earth Asteroid Rendezvous) detector. The comparison shows that the error of C_{20} is less than 2%. The spherical harmonic coefficients of (1996) FG3, a selected target of our future exploration mission, are computed. Taking (4179) Toutatis, the target body of Chang'e 2's flyby mission, as an example, the gravitational field is calculated in combination with the shape model from radar data, which provides a theoretical basis for analyzing the soil distribution and flow from the optical images obtained in the mission. This method applies to objects with uneven density distributions, and could be used to provide reliable gravity field data of small bodies for orbit design and landing in future exploration missions.

  16. Three dimensional automatic refinement method for transient small strain elastoplastic finite element computations

    NASA Astrophysics Data System (ADS)

    Biotteau, E.; Gravouil, A.; Lubrecht, A. A.; Combescure, A.

    2012-01-01

    In this paper, the refinement strategy based on the "Non-Linear Localized Full MultiGrid" solver originally published in Int. J. Numer. Meth. Engng 84(8):947-971 (2010) for 2-D structural problems is extended to 3-D simulations. In this context, some extra information concerning the refinement strategy and the behavior of the error indicators are given. The adaptive strategy is dedicated to the accurate modeling of elastoplastic materials with isotropic hardening in transient dynamics. A multigrid solver with local mesh refinement is used to reduce the amount of computational work needed to achieve an accurate calculation at each time step. The locally refined grids are automatically constructed, depending on the user prescribed accuracy. The discretization error is estimated by a dedicated error indicator within the multigrid method. In contrast to other adaptive procedures, where grids are erased when new ones are generated, the previous solutions are used recursively to reduce the computing time on the new mesh. Moreover, the adaptive strategy needs no costly coarsening method as the mesh is reassessed at each time step. The multigrid strategy improves the convergence rate of the non-linear solver while ensuring the information transfer between the different meshes. It accounts for the influence of localized non-linearities on the whole structure. All the steps needed to achieve the adaptive strategy are automatically performed within the solver such that the calculation does not depend on user experience. This paper presents three-dimensional results using the adaptive multigrid strategy on elastoplastic structures in transient dynamics and in a linear geometrical framework. Isoparametric cubic elements with energy and plastic work error indicators are used during the calculation.

  17. Computational optical palpation: micro-scale force mapping using finite-element methods (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wijesinghe, Philip; Sampson, David D.; Kennedy, Brendan F.

    2016-03-01

    Accurate quantification of forces applied to, or generated by, tissue is key to understanding many biomechanical processes, fabricating engineered tissues, and diagnosing diseases. Many techniques have been employed to measure forces; in particular, tactile imaging - developed to spatially map palpation-mimicking forces - has shown potential in improving the diagnosis of cancer on the macro-scale. However, tactile imaging often involves the use of discrete force sensors, such as capacitive or piezoelectric sensors, whose spatial resolution is often limited to 1-2 mm. Our group has previously presented a type of tactile imaging, termed optical palpation, in which the change in thickness of a compliant layer in contact with tissue is measured using optical coherence tomography, and surface forces are extracted, with a micro-scale spatial resolution, using a one-dimensional spring model. We have also recently combined optical palpation with compression optical coherence elastography (OCE) to quantify stiffness. A main limitation of this work, however, is that a one-dimensional spring model is insufficient to describe the deformation of mechanically heterogeneous tissue with uneven boundaries, generating significant inaccuracies in measured forces. Here, we present a computational, finite-element method, which we term computational optical palpation. In this technique, by knowing the non-linear mechanical properties of the layer, and from only the axial component of displacement measured by phase-sensitive OCE, we can estimate not only the axial forces but also the three-dimensional traction forces at the layer-tissue interface. We use a non-linear, three-dimensional model of deformation, which greatly increases the ability to accurately measure force and stiffness in complex tissues.
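
    The one-dimensional spring model that computational optical palpation improves upon can be sketched as follows: every lateral pixel of the compliant layer is treated as an independent spring, so the local surface stress follows from the layer's stress-strain curve and the OCT-measured thickness change. The calibration curve and thickness map below are invented for illustration; the absence of lateral coupling is exactly the limitation the finite-element approach addresses.

        # Per-pixel spring model: surface stress from compliant-layer thinning.
        import numpy as np

        L0 = 0.5e-3                             # initial layer thickness, m
        def layer_stress(strain):               # assumed calibration curve, Pa
            return 20e3 * strain * (1.0 + 2.0 * strain)

        # synthetic OCT thickness map: a stiff inclusion compresses the layer more
        x = np.linspace(-1.0, 1.0, 64)
        X, Y = np.meshgrid(x, x)
        L = L0 * (1.0 - 0.10 - 0.15 * np.exp(-(X**2 + Y**2) / 0.1))

        strain = (L0 - L) / L0                  # compressive strain per pixel
        stress = layer_stress(strain)           # local surface stress map, Pa
        print(f"stress range: {stress.min()/1e3:.1f} to {stress.max()/1e3:.1f} kPa")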

  18. Applications of the Space-Time Conservation Element and Solution Element (CE/SE) Method to Computational Aeroacoustic Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Himansu, Ananda; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The Internal Propagation problems, Fan Noise problem, and Turbomachinery Noise problems are solved using the space-time conservation element and solution element (CE/SE) method. The Internal Propagation problems address the propagation of sound waves through a nozzle. Both the nonlinear and linear quasi-1D Euler equations are solved. Numerical solutions are presented and compared with the analytical solution. The Fan Noise problem concerns the effect of the sweep angle on the acoustic field generated by the interaction of a convected gust with a cascade of 3D flat plates. A parallel version of the 3D CE/SE Euler solver is developed and employed to obtain numerical solutions for a family of swept flat plates. Numerical solutions for sweep angles of 0, 5, 10, and 15 deg are presented. The Turbomachinery Noise problems describe the interaction of a 2D vortical gust with a cascade of flat-plate airfoils with/without a downstream moving grid. The 2D nonlinear Euler equations are solved and the converged numerical solutions are presented and compared with the corresponding analytical solution. All the comparisons demonstrate that the CE/SE method is capable of solving aeroacoustic problems with/without shock waves in a simple and efficient manner. Furthermore, the simple non-reflecting boundary condition used in the CE/SE method, which is not based on characteristic theory, works very well in 1D, 2D and 3D problems.

  19. [Computational approaches for identification and classification of transposable elements in eukaryotic genomes].

    PubMed

    Xu, Hong-En; Zhang, Hua-Hao; Han, Min-Jin; Shen, Yi-Hong; Huang, Xian-Zhi; Xiang, Zhong-Huai; Zhang, Ze

    2012-08-01

    Repetitive sequences (repeats) represent a significant fraction of eukaryotic genomes and can be divided into tandem repeats, segmental duplications, and interspersed repeats on the basis of their sequence characteristics and how they are formed. Most interspersed repeats are derived from transposable elements (TEs). Eukaryotic TEs have been subdivided into two major classes according to the intermediate they use to move. The transposition and amplification of TEs have a great impact on the evolution of genes and the stability of genomes. However, identification and classification of TEs are complex and difficult, because their structure and classification are more complex and diverse than those of other types of repeats. Here, we briefly introduce the function and classification of TEs, and summarize three steps for the identification, classification and annotation of TEs in eukaryotic genomes: (1) assembly of a repeat library, (2) repeat correction and classification, and (3) genome annotation. The existing computational approaches for each step are summarized, and their advantages and disadvantages are highlighted. Accurate identification, classification, and annotation of TEs in eukaryotic genomes requires a combination of methods. This review provides useful information for biologists who are not familiar with these approaches to find their way through the forest of programs.

  20. A Hybrid FPGA/Tilera Compute Element for Autonomous Hazard Detection and Navigation

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Werner, Robert A.; Carson, John M., III; Khanoyan, Garen; Stern, Ryan A.; Trawny, Nikolas

    2013-01-01

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  1. CAST2D: A finite element computer code for casting process modeling

    SciTech Connect

    Shapiro, A.B.; Hallquist, J.O.

    1991-10-01

    CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).

  2. Custom Vector Graphic Viewers for Analysis of Element Production Calculations in Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    We are developing a set of extendable, cross-platform tools and interfaces using Java and vector graphic technologies to facilitate element production calculations in computational astrophysics. The Java technologies are customizable and portable, and can be utilized as stand-alone applications or distributed across a network. These tools, which have broad applications in general scientific visualization, have been used to explore and analyze a large library of nuclear reaction rates and to visualize results of explosive nucleosynthesis calculations. In this presentation, we discuss a new capability to export the results of such applications directly into the XML (Extensible Markup Language) format, which may then be displayed through custom vector graphic viewers. The vector graphic technologies employed here, namely the SWF format, offer the ability to view results in an interactive, scalable vector graphics format, which leads to a dramatic (ten-fold) reduction in visualization file sizes while maintaining high visual quality and interactive control. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  3. Java and Vector Graphics Tools for Element Production Calculations in Computational Astrophysics

    NASA Astrophysics Data System (ADS)

    Lingerfelt, Eric; McMahon, Erin; Hix, Raph; Guidry, Mike; Smith, Michael

    2002-08-01

    We are developing a set of extendable, cross-platform tools and interfaces using Java and vector technologies such as SVG and SWF to facilitate element production calculations in computational astrophysics. The Java technologies are customizable and portable, and can be utilized as a stand-alone application or distributed across a network. These tools, which can have broad applications in general scientific visualization, are currently being used to explore and compare various reaction rates, set up and run explosive nucleosynthesis calculations, and visualize these results with compact, high quality vector graphics. The facilities for reading and plotting nuclear reaction rates and their components from a network or library permit the user to include new rates and adjust current ones. Setup and initialization of a nucleosynthesis calculation is through an intuitive graphical interface. Sophisticated visualization and graphical analysis tools offer the ability to view results in an interactive, scalable vector graphics format, which leads to a dramatic reduction in visualization file sizes while maintaining high visual quality and interactive control. The use of these tools for other applications will also be mentioned.

  4. Validation of finite element computations for the quantitative prediction of underwater noise from impact pile driving.

    PubMed

    Zampolli, Mario; Nijhof, Marten J J; de Jong, Christ A F; Ainslie, Michael A; Jansen, Erwin H W; Quesson, Benoit A J

    2013-01-01

    The acoustic radiation from a pile being driven into the sediment by a sequence of hammer strikes is studied with a linear, axisymmetric, structural acoustic frequency domain finite element model. Each hammer strike results in an impulsive sound that is emitted from the pile and then propagated in the shallow water waveguide. Measurements from accelerometers mounted on the head of a test pile and from hydrophones deployed in the water are used to validate the model results. Transfer functions between the force input at the top of the anvil and field quantities, such as acceleration components in the structure or pressure in the fluid, are computed with the model. These transfer functions are validated using accelerometer or hydrophone measurements to infer the structural forcing. A modeled hammer forcing pulse is used in the successive step to produce quantitative predictions of sound exposure at the hydrophones. The comparison between the model and the measurements shows that, although several simplifying assumptions were made, useful predictions of noise levels based on linear structural acoustic models are possible. In the final part of the paper, the model is used to characterize the pile as an acoustic radiator by analyzing the flow of acoustic energy.

  6. Lateral-torsional buckling analysis of I-beams using shell finite elements and nonlinear computation methods

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk; Kala, Jiří

    2012-09-01

    The paper deals with the influence of the correlation length of the Gauss random field of yield strength of a hot-rolled I-beam under bending on the ultimate load-carrying-capacity limit state. Load-carrying capacity is an output random quantity depending on input random imperfections. The Latin Hypercube Sampling method is used for the sampling simulation. Load-carrying capacity is computed by the programme ANSYS using shell finite elements and nonlinear computation methods. The nonlinear FEM computation model takes into consideration the effect of lateral-torsional buckling on the ultimate limit state.
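
    A minimal sketch of Latin Hypercube Sampling of the kind used for the input imperfections follows: each of n samples draws one value from each of n equiprobable strata per variable. The normal distribution parameters for yield strength and correlation length are illustrative, not those of the study.

        # Latin Hypercube Sampling of two random inputs (illustrative parameters).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)

        def lhs(n_samples, n_vars):
            """Latin hypercube in [0,1]^d: one point per stratum per variable."""
            strata = np.tile(np.arange(n_samples), (n_vars, 1))
            u = rng.permuted(strata, axis=1).T + rng.random((n_samples, n_vars))
            return u / n_samples

        u = lhs(100, 2)
        fy = norm.ppf(u[:, 0], loc=297.3, scale=16.8)  # yield strength, MPa (illustrative)
        lc = norm.ppf(u[:, 1], loc=5.0, scale=1.0)     # correlation length, m (illustrative)
        print(fy.mean(), lc.mean())                    # close to the prescribed means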

  7. Computer Security Systems Enable Access.

    ERIC Educational Resources Information Center

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  8. Numerical computation of transonic flows by finite-element and finite-difference methods

    NASA Technical Reports Server (NTRS)

    Hafez, M. M.; Wellford, L. C.; Merkle, C. L.; Murman, E. M.

    1978-01-01

    Studies on applications of the finite element approach to transonic flow calculations are reported. Different discretization techniques of the differential equations and boundary conditions are compared. Finite element analogs of Murman's mixed type finite difference operators for small disturbance formulations were constructed and the time dependent approach (using finite differences in time and finite elements in space) was examined.

  9. A finite element method to compute three-dimensional equilibrium configurations of fluid membranes: Optimal parameterization, variational formulation and applications

    NASA Astrophysics Data System (ADS)

    Rangarajan, Ramsharan; Gao, Huajian

    2015-09-01

    We introduce a finite element method to compute equilibrium configurations of fluid membranes, identified as stationary points of a curvature-dependent bending energy functional under certain geometric constraints. The reparameterization symmetries in the problem pose a challenge in designing parametric finite element methods, and existing methods commonly resort to Lagrange multipliers or penalty parameters. In contrast, we exploit these symmetries by representing solution surfaces as normal offsets of given reference surfaces and entirely bypass the need for artificial constraints. We then resort to a Galerkin finite element method to compute discrete C1 approximations of the normal offset coordinate. The variational framework presented is suitable for computing deformations of three-dimensional membranes subject to a broad range of external interactions. We provide a systematic algorithm for computing large deformations, wherein solutions at subsequent load steps are identified as perturbations of previously computed ones. We discuss the numerical implementation of the method in detail and demonstrate its optimal convergence properties using examples. We discuss applications of the method to studying adhesive interactions of fluid membranes with rigid substrates and to investigate the influence of membrane tension in tether formation.

  10. Wing-Body Aeroelasticity Using Finite-Difference Fluid/Finite-Element Structural Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1993-01-01

    This paper presents a procedure for computing the aeroelasticity of wing-body configurations on multiple-instruction, multiple-data (MIMD) parallel computers. In this procedure, fluids are modeled using Euler equations discretized by a finite difference method, and structures are modeled using finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. A parallel integration scheme is used to compute aeroelastic responses by solving the coupled fluid and structural equations concurrently while keeping modularity of each discipline. The present procedure is validated by computing the aeroelastic response of a wing and comparing with experiment. Aeroelastic computations are illustrated for a High Speed Civil Transport type wing-body configuration.

  11. The computational structural mechanics testbed generic structural-element processor manual

    NASA Technical Reports Server (NTRS)

    Stanley, Gary M.; Nour-Omid, Shahram

    1990-01-01

    The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template is documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended for both Testbed users who wish to invoke ES processors during the course of a structural analysis, and Testbed developers who wish to construct new element processors (or modify existing ones).

  12. Positive and Negative Aspects of the IWB and Tablet Computers in the First Grade of Primary School: A Multiple-Perspective Approach

    ERIC Educational Resources Information Center

    Fekonja-Peklaj, Urška; Marjanovic-Umek, Ljubica

    2015-01-01

    The aim of this qualitative study was to evaluate the positive and negative aspects of the interactive whiteboard (IWB) and tablet computers use in the first grade of primary school from the perspectives of three groups of evaluators, namely the teachers, the pupils and an independent observer. The sample included three first grade classes with…

  13. A new submodelling technique for multi-scale finite element computation of electromagnetic fields: Application in bioelectromagnetism

    NASA Astrophysics Data System (ADS)

    Aristovich, K. Y.; Khan, S. H.

    2010-07-01

    Complex multi-scale Finite Element (FE) analyses always involve a high number of elements and therefore require very long computation times. This is because effects considered on smaller scales have a strong influence on the whole model and on larger scales, so the mesh density must be as high as the smallest scale factor requires. A new submodelling routine has been developed to decrease the computation time substantially without loss of accuracy in the whole solution. The presented approach allows different mesh sizes on different scales and, therefore, full optimization of the mesh density on each scale, transferring results automatically between the meshes corresponding to the respective scales of the whole model. Unlike the classical submodelling routine, the new technique transfers not only boundary conditions but also volume results and forces (current density loads in the case of electromagnetism), which allows the solution of the full Maxwell's equations in FE space. The approach was successfully implemented for the electromagnetic solution of the forward problem of Magnetic Field Tomography (MFT) based on Magnetoencephalography (MEG), where the scale of one neuron was the smallest and the scale of the whole-brain model the largest. The time of computation was reduced by a factor of about 100, compared with an initial requirement of 10 million elements for direct computation without the submodelling routine.
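
    The core boundary-condition hand-off of classical submodelling is interpolation of the coarse global solution onto the fine submodel boundary; the volume and load transfer added here would reuse the same machinery. A hedged sketch with synthetic coordinates and a synthetic field (not the MEG model):

        import numpy as np
        from scipy.interpolate import LinearNDInterpolator

        rng = np.random.default_rng(0)
        global_nodes = rng.uniform(0.0, 1.0, size=(500, 3))     # coarse-scale mesh nodes
        global_phi = global_nodes @ np.array([1.0, -2.0, 0.5])  # synthetic potential field

        interp = LinearNDInterpolator(global_nodes, global_phi)

        # Dirichlet data for the fine submodel: interpolate the global solution
        # onto the submodel boundary nodes (volume results and current-density
        # loads would be transferred the same way).
        sub_boundary_nodes = rng.uniform(0.3, 0.6, size=(200, 3))
        bc_values = interp(sub_boundary_nodes)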

  14. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
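
    The processing-element idea can be emulated in a few lines: every PE executes the same one-bit full-adder step per cycle on its own operand pair, which is the SIMD pattern the abstract describes. A hypothetical NumPy emulation (one array element per PE):

        import numpy as np

        def simd_bitserial_add(a, b, width=8):
            """Add unsigned integers one bit per 'cycle'; each array element
            plays the role of a 1-bit serial processing element's operand."""
            a = np.asarray(a, dtype=np.uint32)
            b = np.asarray(b, dtype=np.uint32)
            carry = np.zeros_like(a)
            result = np.zeros_like(a)
            for k in range(width):                     # one cycle per bit position
                abit = (a >> k) & 1
                bbit = (b >> k) & 1
                result |= (abit ^ bbit ^ carry) << k   # full-adder sum bit
                carry = (abit & bbit) | (carry & (abit ^ bbit))
            return result

        x = np.array([13, 200, 77], dtype=np.uint32)
        y = np.array([29, 55, 100], dtype=np.uint32)
        assert np.array_equal(simd_bitserial_add(x, y), x + y)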

  15. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 1, Theoretical background

    SciTech Connect

    Gartling, D.K.

    1996-05-01

    The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.

  16. Inductively coupled plasma-atomic emission spectroscopy: a computer controlled, scanning monochromator system for the rapid determination of the elements

    SciTech Connect

    Floyd, M.A.

    1980-03-01

    A computer controlled, scanning monochromator system specifically designed for the rapid, sequential determination of the elements is described. The monochromator is combined with an inductively coupled plasma excitation source so that elements at major, minor, trace, and ultratrace levels may be determined, in sequence, without changing experimental parameters other than the spectral line observed. A number of distinctive features not found in previously described versions are incorporated into the system here described. Performance characteristics of the entire system and several analytical applications are discussed.

  17. Evaluating micas in petrologic and metallogenic aspect: I-definitions and structure of the computer program MICA +

    NASA Astrophysics Data System (ADS)

    Yavuz, Fuat

    2003-12-01

    Micas are significant ferromagnesian minerals in felsic to mafic igneous, metamorphic, and hydrothermal rocks. Because of their considerable potential to reveal the physicochemical conditions of magmas in petrologic and metallogenic terms, mica chemistry is used extensively in the earth sciences. For example, the composition of phlogopite and biotite can be used to evaluate the intensive thermodynamic parameters of temperature (T, °C), oxygen fugacity (fO2), and water fugacity (fH2O) of magmatic rocks. The halogen contents of micas permit estimation of the fluorine and chlorine fugacities, which may be used in understanding metal transportation and deposition processes in hydrothermal ore deposits. The Mica+ computer program has been written to edit and store electron-microprobe or wet-chemical mica analyses. This software calculates structural formulae and distributes the calculated anions over the I, M, T, and A sites. Mica+ classifies micas in terms of composition and octahedral site occupancy. It also calculates the intensive parameters fO2, T, and fH2O from the composition of biotite in equilibrium with K-feldspar and magnetite. Using the calculated F-OH and Cl-OH exchange systematics and various log ratios (fH2O/fHF, fH2O/fHCl, fHCl/fHF, XCl/XOH, XF/XOH, XF/XCl) of mica analyses, Mica+ gives valuable information about the characteristics of hydrothermal fluids associated with alteration and mineralization processes. The program output is generally in the form of screen outputs; however, using the "Grf" files that come with the program, results can be visualized in the Grapher software as both binary and ternary diagrams. Mica analyses submitted to the Mica+ program are calculated on the basis of 22+z positive charges, following the procedure of the Commission on New Mineral Names Mica Subcommittee of 1998.

  18. Research on Quantum Authentication Methods for the Secure Access Control Among Three Elements of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Dong, Yumin; Xiao, Shufen; Ma, Hongyang; Chen, Libo

    2016-08-01

    Cloud computing and big data have become the developing engine of current information technology (IT) as a result of the rapid development of IT. However, security protection has become increasingly important for cloud computing and big data, and has become a problem that must be solved to develop cloud computing. The theft of identity authentication information remains a serious threat to the security of cloud computing. In this process, attackers intrude into cloud computing services through identity authentication information, thereby threatening the security of data from multiple perspectives. Therefore, this study proposes a model for cloud computing protection and management based on quantum authentication, introduces the principle of quantum authentication, and deduces the quantum authentication process. In theory, quantum authentication technology can be applied in cloud computing for security protection. This technology cannot be cloned; thus, it is more secure and reliable than classical methods.

  19. Virtual garden computer program for use in exploring the elements of biodiversity people want in cities.

    PubMed

    Shwartz, Assaf; Cheval, Helene; Simon, Laurent; Julliard, Romain

    2013-08-01

    Urban ecology is emerging as an integrative science that explores the interactions of people and biodiversity in cities. Interdisciplinary research requires the creation of new tools that allow the investigation of relations between people and biodiversity. It has been established that access to green spaces or nature benefits city dwellers, but the role of species diversity in providing psychological benefits remains poorly studied. We developed a user-friendly 3-dimensional computer program (Virtual Garden [www.tinyurl.com/3DVirtualGarden]) that allows people to design their own public or private green spaces with 95 biotic and abiotic features. Virtual Garden allows researchers to explore what elements of biodiversity people would like to have in their nearby green spaces while accounting for other functions that people value in urban green spaces. In 2011, 732 participants used our Virtual Garden program to design their ideal small public garden. On average gardens contained 5 different animals, 8 flowers, and 5 woody plant species. Although the mathematical distribution of flower and woody plant richness (i.e., number of species per garden) appeared to be similar to what would be expected by random selection of features, 30% of participants did not place any animal species in their gardens. Among those who placed animals in their gardens, 94% selected colorful species (e.g., ladybug [Coccinella septempunctata], Great Tit [Parus major], and goldfish), 53% selected herptiles or large mammals, and 67% selected non-native species. Older participants with a higher level of education and participants with a greater concern for nature designed gardens with relatively higher species richness and more native species. If cities are to be planned for the mutual benefit of people and biodiversity and to provide people meaningful experiences with urban nature, it is important to investigate people's relations with biodiversity further. Virtual Garden offers a standardized

  1. COYOTE II: A Finite Element Computer Program for nonlinear heat conduction problems. Part 2, User's manual

    SciTech Connect

    Gartling, D.K.; Hogan, R.E.

    1994-10-01

    User instructions are given for the finite element computer program, COYOTE II. COYOTE II is designed for the multi-dimensional analysis of nonlinear heat conduction problems including the effects of enclosure radiation and chemical reaction. The theoretical background and numerical methods used in the program are documented in SAND94-1173. Examples of the use of the code are presented in SAND94-1180.

  2. Computational fluid dynamics analysis of SSME phase 2 and phase 2+ preburner injector element hydrogen flow paths

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.

    1992-01-01

    Phase 2+ Space Shuttle Main Engine powerheads E0209 and E0215 degraded their main combustion chamber (MCC) liners at a faster rate than is normal for phase 2 powerheads. One possible cause of the accelerated degradation was a reduction of coolant flow through the MCC. Hardware changes were made to the preburner fuel leg which may have reduced its resistance and, therefore, pulled some of the hydrogen from the MCC coolant leg. A computational fluid dynamics (CFD) analysis was performed to determine the hydrogen flow path resistances of the phase 2+ fuel preburner injector elements relative to the phase 2 element. FDNS was implemented on axisymmetric grids with the hydrogen assumed to be incompressible. The analysis was performed in two steps: the first isolated the effect of the different inlet areas and the second modeled the entire injector element hydrogen flow path.

  3. Fast computational integral imaging reconstruction by combined use of spatial filtering and rearrangement of elemental image pixels

    NASA Astrophysics Data System (ADS)

    Jang, Jae-Young; Cho, Myungjin

    2015-12-01

    In this paper, we propose a new fast computational integral imaging reconstruction (CIIR) scheme that preserves the spatial filtering effect by combining spatial filtering with rearrangement of elemental image pixels. In the proposed scheme, the elemental image array (EIA) recorded by the lenslet array is spatially filtered through convolution with a depth-dependent delta function array for a given depth. The spatially filtered EIA is then reconstructed as a 3D slice image using the elemental-image pixel rearrangement technique. Our scheme provides both fast calculation with the same properties as conventional CIIR and improved visual quality of the reconstructed 3D slice image. To verify the scheme, we perform preliminary experiments and compare it with other techniques.
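
    For orientation, the baseline that such schemes accelerate is shift-and-sum back-projection of the elemental images at a chosen depth. A hedged NumPy sketch with invented array shapes and geometry (a simplification, not the authors' method):

        import numpy as np

        def ciir_slice(eia, pitch_px, z, g):
            """Shift-and-sum reconstruction of one depth slice from an elemental
            image array eia of shape (rows, cols, h, w); pitch_px is the lenslet
            pitch in pixels, z the reconstruction depth, g the lenslet-sensor gap."""
            rows, cols, h, w = eia.shape
            shift = int(round(pitch_px * g / z))     # per-lenslet disparity at depth z
            acc = np.zeros((h + shift * (rows - 1), w + shift * (cols - 1)))
            cnt = np.zeros_like(acc)
            for i in range(rows):
                for j in range(cols):
                    acc[i*shift:i*shift + h, j*shift:j*shift + w] += eia[i, j]
                    cnt[i*shift:i*shift + h, j*shift:j*shift + w] += 1.0
            return acc / np.maximum(cnt, 1.0)        # average overlapping pixels

        slice_img = ciir_slice(np.random.rand(10, 10, 64, 64), pitch_px=64, z=300.0, g=3.0)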

  4. COMGEN: A computer program for generating finite element models of composite materials at the micro level

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.

    1990-01-01

    COMGEN (Composite Model Generator) is an interactive FORTRAN program which can be used to create a wide variety of finite element models of continuous fiber composite materials at the micro level. It quickly generates batch or session files to be submitted to the finite element pre- and postprocessor PATRAN based on a few simple user inputs such as fiber diameter and percent fiber volume fraction of the composite to be analyzed. In addition, various mesh densities, boundary conditions, and loads can be assigned easily to the models within COMGEN. PATRAN uses a session file to generate finite element models and their associated loads which can then be translated to virtually any finite element analysis code such as NASTRAN or MARC.

  5. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.

  6. Optical Computing Using Interference Filters as Nonlinear Optical Logic Gates and Holographic Optical Elements as Optical Interconnects.

    NASA Astrophysics Data System (ADS)

    Wang, Lon A.

    This dissertation experimentally explores digital optical computing and optical interconnects with theoretical support, from the physics of materials and the optimization of devices to system realization. The trend of optical computing is highlighted with emphasis on the current development of its basic constituent elements, and a couple of algorithms are selected to pave the way for their optical implementation with bistable devices. Optical bistable devices function as "optical transistors" in optical computing. The physics of dispersive optical bistability is briefly described. Bistable ZnS interference filters are discussed in detail regarding their linear and nonlinear characteristics. The optimization of switching characteristics for a bistable ZnS interference filter is discussed, and experimental results are shown. Symbolic substitution, which fully takes advantage of regular optical interconnects, consists of two steps: pattern recognition and symbol scription. Two experiments on digital pattern recognition and one on a simple but complete symbolic substitution have been demonstrated. The extension of these experiments is an implementation of a binary adder. A one-bit full adder, which is a basic building block for a computer, has been explored experimentally and demonstrated in an all-optical way. The utilization of a bistable device as a nonlinear decision-making element is further demonstrated in an associative memory experiment by incorporating a Vander Lugt matched filter to discriminate two partial fingerprints. The thresholding function of a bistable device enhances the S/N ratio and helps discrimination in associative memory. As the clocking speed of a computer goes higher, e.g. greater than several GHz, clock signal distribution and packaging become serious problems in VLSI technology. The use of optical interconnects introduces a possible solution. A unique element for holographic optical interconnects, which combines advantages of

  7. CUERVO: A finite element computer program for nonlinear scalar transport problems

    SciTech Connect

    Sirman, M.B.; Gartling, D.K.

    1995-11-01

    CUERVO is a finite element code that is designed for the solution of multi-dimensional field problems described by a general nonlinear, advection-diffusion equation. The code is also applicable to field problems described by diffusion, Poisson or Laplace equations. The finite element formulation and the associated numerical methods used in CUERVO are outlined here; detailed instructions for use of the code are also presented. Example problems are provided to illustrate the use of the code.

  8. PREFACE: First International Congress of the International Association of Inverse Problems (IPIA): Applied Inverse Problems 2007: Theoretical and Computational Aspects

    NASA Astrophysics Data System (ADS)

    Uhlmann, Gunther

    2008-07-01

    This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA), which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology

  10. Parallel Object-Oriented Computation Applied to a Finite Element Problem

    NASA Technical Reports Server (NTRS)

    Weissman, Jon B.; Grimshaw, Andrew S.; Ferraro, Robert

    1993-01-01

    The conventional wisdom in the scientific computing community is that the best way to solve large-scale numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style using low-level message-passing primitives. This approach inevitably leads to nonportable codes and extensive development time, and it restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the tools used. We will show that comparable performance can be achieved with little effort if better tools that present higher level abstractions are used.

  11. Analytical model and finite element computation of braking torque in electromagnetic retarder

    NASA Astrophysics Data System (ADS)

    Ye, Lezhi; Yang, Guangzhao; Li, Desheng

    2014-12-01

    An analytical model has been developed for analyzing the braking torque in an electromagnetic retarder by the flux tube and armature reaction method. The magnetic field distribution in the air gap, the eddy current induced in the rotor, and the braking torque are calculated by the developed model. Two-dimensional and three-dimensional finite element models of the retarder have also been developed. Results from the analytical model are compared with those from the finite element models. The validity of these three models is checked by comparison of the theoretical predictions with measurements from an experimental prototype. The influencing factors of the braking torque have been studied.

  12. Isoparametric 3-D Finite Element Mesh Generation Using Interactive Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kayrak, C.; Ozsoy, T.

    1985-01-01

    An isoparametric 3-D finite element mesh generator was developed with a direct interface to an interactive geometric modeler program called POLYGON. POLYGON defines the model geometry in terms of boundaries and mesh regions for the mesh generator. The mesh generator controls the mesh flow through the 2-dimensional spans of regions by using the topological data and defines the connectivity between regions. The program is menu driven; the user has control of element density and biasing through the spans and can also apply boundary conditions and loads interactively.

  13. Microscopy and elemental analysis in tissue samples using computed microtomography with synchrotron x-rays

    SciTech Connect

    Spanne, P.; Rivers, M.L.

    1988-01-01

    The initial development shows that CMT using synchrotron x-rays can be developed to μm spatial resolution and perhaps even better. This creates a new microscopy technique which is of special interest in morphological studies of tissues, since no chemical preparation or slicing of the sample is necessary. The combination of CMT with spatial resolution in the μm range and elemental mapping with sensitivity in the ppm range results in a new tool for elemental mapping at the cellular level. 7 refs., 1 fig.

  14. Genome-wide computational prediction and analysis of core promoter elements across plant monocots and dicots

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the compu...

  15. Computation of strain energy release rates for skin-stiffener debonds modeled with plate elements

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Raju, I. S.; Davila, C. G.; Sleight, D. W.

    1993-01-01

    An efficient method for predicting the strength of debonded composite skin-stiffener configurations is presented. This method, which is based on fracture mechanics, models the skin and the stiffener with two-dimensional (2D) plate elements instead of three-dimensional (3D) solid elements. The skin and stiffener flange nodes are tied together by two modeling techniques. In one technique, the corresponding flange and skin nodes are required to have identical translational and rotational degrees-of-freedom. In the other technique, the corresponding flange and skin nodes are only required to have identical translational degrees-of-freedom. Strain energy release rate formulas are proposed for both modeling techniques. These formulas are used for skin-stiffener debond cases with and without cylindrical bending deformations. The cylindrical bending results are compared with plane-strain finite element results. Excellent agreement between the two sets of results is obtained when the second technique is used. Thus, from these limited studies, a preferable modeling technique for skin-stiffener debond analysis using plate elements is established.

  16. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    EPA Science Inventory

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  17. Computations of M2 and K1 ocean tidal perturbations in satellite elements

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1974-01-01

    Semi-analytic perturbation equations for the influence of the M2 and K1 ocean tidal constituents on satellite motion are expanded into multi-dimensional Fourier series, and calculations are made for the BE-C satellite. Perturbations in the orbital elements are compared to those of the long-period solid earth tides.

  18. MPSalsa Version 1.5: A Finite Element Computer Program for Reacting Flow Problems: Part 1 - Theoretical Development

    SciTech Connect

    Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.; Moffat, H.K.; Salinger, A.G.; Schmidt, R.C.; Shadid, J.N.; Smith, T.M.

    1999-01-01

    The theoretical background for the finite element computer program, MPSalsa Version 1.5, is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  19. Computation of Dancoff Factors for Fuel Elements Incorporating Randomly Packed TRISO Particles

    SciTech Connect

    J. L. Kloosterman; Abderrafi M. Ougouag

    2005-01-01

    A new method for estimating the Dancoff factors in pebble beds has been developed and implemented within two computer codes. The first of these codes, INTRAPEB, is used to compute Dancoff factors for individual pebbles taking into account the random packing of TRISO particles within the fuel zone of the pebble and explicitly accounting for the finite geometry of the fuel kernels. The second code, PEBDAN, is used to compute the pebble-to-pebble contribution to the overall Dancoff factor. The latter code also accounts for the finite size of the reactor vessel and for the proximity of reflectors, as well as for fluctuations in the pebble packing density that naturally arises in pebble beds.

  20. Efficient Inverse Isoparametric Mapping Algorithm for Whole-Body Computed Tomography Registration Using Deformations Predicted by Nonlinear Finite Element Modeling

    PubMed Central

    Li, Mao; Wittek, Adam; Miller, Karol

    2014-01-01

    Biomechanical modeling methods can be used to predict deformations for medical image registration and particularly, they are very effective for whole-body computed tomography (CT) image registration because differences between the source and target images caused by complex articulated motions and soft tissues deformations are very large. The biomechanics-based image registration method needs to deform the source images using the deformation field predicted by finite element models (FEMs). In practice, the global and local coordinate systems are used in finite element analysis. This involves the transformation of coordinates from the global coordinate system to the local coordinate system when calculating the global coordinates of image voxels for warping images. In this paper, we present an efficient numerical inverse isoparametric mapping algorithm to calculate the local coordinates of arbitrary points within the eight-noded hexahedral finite element. Verification of the algorithm for a nonparallelepiped hexahedral element confirms its accuracy, fast convergence, and efficiency. The algorithm's application in warping of the whole-body CT using the deformation field predicted by means of a biomechanical FEM confirms its reliability in the context of whole-body CT registration. PMID:24828796
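
    The inverse mapping in question is a small Newton iteration on the trilinear shape functions of the eight-noded hexahedron. A self-contained sketch of the textbook formulation (not the authors' code):

        import numpy as np

        # corner sign pattern of the hex8 element in natural coordinates
        SIGNS = np.array([[-1,-1,-1],[ 1,-1,-1],[ 1, 1,-1],[-1, 1,-1],
                          [-1,-1, 1],[ 1,-1, 1],[ 1, 1, 1],[-1, 1, 1]], dtype=float)

        def shape(xi):
            """Trilinear shape functions N_i(xi), shape (8,)."""
            return 0.125 * np.prod(1.0 + SIGNS * xi, axis=1)

        def dshape(xi):
            """Derivatives dN_i/dxi_k, shape (8, 3)."""
            d = np.empty((8, 3))
            for k in range(3):
                terms = 1.0 + SIGNS * xi
                terms[:, k] = SIGNS[:, k]
                d[:, k] = 0.125 * np.prod(terms, axis=1)
            return d

        def inverse_map(nodes, x, tol=1e-10, maxit=25):
            """Newton iteration for the local coordinates of global point x in a
            hex8 element with corner coordinates nodes (8, 3)."""
            xi = np.zeros(3)                    # start at the element centre
            for _ in range(maxit):
                r = shape(xi) @ nodes - x       # residual x(xi) - x
                if np.linalg.norm(r) < tol:
                    break
                J = nodes.T @ dshape(xi)        # Jacobian dx/dxi, (3, 3)
                xi -= np.linalg.solve(J, r)
            return xi

        nodes = SIGNS + 0.1 * np.random.default_rng(2).standard_normal((8, 3))
        xi_true = np.array([0.3, -0.2, 0.5])
        assert np.allclose(inverse_map(nodes, shape(xi_true) @ nodes), xi_true)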

  1. Transmission in nonuniform ducts - A comparative evaluation of finite element and weighted residuals computational schemes. [acoustic propagation

    NASA Technical Reports Server (NTRS)

    Eversman, W.; Astley, R. J.; Thanh, V. P.

    1977-01-01

    The Method of Weighted Residuals (MWR) and the Finite Element Method (FEM) are considered as computational schemes in the problem of acoustic transmission in nonuniform ducts. MWR is presented in an improved form which includes the interaction of acoustic modes (irrotational) and hydrodynamic modes (rotational). FEM is based on a weighted residuals formulation using eight-noded isoparametric elements. Both are applicable to two-dimensional and axially symmetric problems. Calculations are made for several sample problems to demonstrate accuracy and relative efficiency. One test case has implications for the phenomenon of subsonic acoustic choking, and it is found that a large transmission loss is not an automatic consequence of propagation against a high subsonic mean flow.

  2. Three-Dimensional Effects on Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.

    2002-01-01

    In an effort to discover the causes for disagreement between previous 2-D computations and nominally 2-D experiment for flow over the 3-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side-wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using 3-D structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects of the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and a consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict trends similar to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that mounting brackets lower the lift levels near maximum lift conditions.

  3. Improved Discontinuity-capturing Finite Element Techniques for Reaction Effects in Turbulence Computation

    NASA Astrophysics Data System (ADS)

    Corsini, A.; Rispoli, F.; Santoriello, A.; Tezduyar, T. E.

    2006-09-01

    Recent advances in turbulence modeling brought more and more sophisticated turbulence closures (e.g. k-ε, k-ε-v2-f, second-moment closures), where the governing equations for the model parameters involve advection, diffusion and reaction terms. Numerical instabilities can be generated by the dominant advection or reaction terms. Classical stabilized formulations such as the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation (Brooks and Hughes, Comput Methods Appl Mech Eng 32:199-255, 1982; Hughes and Tezduyar, Comput Methods Appl Mech Eng 45:217-284, 1984) are very well suited for preventing the numerical instabilities generated by the dominant advection terms. A different stabilization, however, is needed for instabilities due to the dominant reaction terms. An additional stabilization term, called the diffusion for reaction-dominated (DRD) term, was introduced by Tezduyar and Park (Comput Methods Appl Mech Eng 59:307-325, 1986) for that purpose and improves the SUPG performance. In recent years a new class of variational multi-scale (VMS) stabilization (Hughes, Comput Methods Appl Mech Eng 127:387-401, 1995) has been introduced, and this approach, in principle, can deal with advection-diffusion-reaction equations. However, it was pointed out in Hanke (Comput Methods Appl Mech Eng 191:2925-2947) that this class of methods also needs some improvement in the presence of high reaction rates. In this work we show the benefits of using the DRD operator to enhance core stabilization techniques such as the SUPG and VMS formulations. We also propose a new operator called the DRDJ (DRD with the local variation jump) term, targeting the reduction of numerical oscillations in the presence of both high reaction rates and sharp solution gradients. The methods are evaluated in the context of two stabilized methods: the classical SUPG formulation and a recently-developed VMS formulation called the V-SGS (Corsini et al., Comput Methods Appl Mech Eng 194:4797-4823, 2005
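
    One common way to fold a reaction rate into the stabilization parameter, in the spirit of the DRD modification (the precise element-level definitions in the papers cited above differ), is to add the reaction rate to the inverse-norm expression for tau:

        import numpy as np

        def tau_supg(a, nu, h):
            """Advection-diffusion stabilization parameter (one common inverse-norm
            form; a = advection speed, nu = diffusivity, h = element size)."""
            return 1.0 / np.sqrt((2.0 * abs(a) / h)**2 + (4.0 * nu / h**2)**2)

        def tau_adr(a, nu, s, h):
            """Reaction-augmented parameter: including the reaction rate s shrinks
            tau where reaction dominates, damping the oscillations the DRD term
            targets."""
            return 1.0 / np.sqrt((2.0 * abs(a) / h)**2 + (4.0 * nu / h**2)**2 + s**2)

        print(tau_supg(1.0, 1e-3, 0.1), tau_adr(1.0, 1e-3, 50.0, 0.1))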

  4. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    SciTech Connect

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Ng, C.; Schussman, G.; Ko, K.; Syratchev, I.; /CERN

    2009-06-19

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  5. Three Aspects of PLATO Use at Chanute AFB: CBE Production Techniques, Computer-Aided Management, Formative Development of CBE Lessons.

    ERIC Educational Resources Information Center

    Klecka, Joseph A.

    This report describes various aspects of lesson production and use of the PLATO system at Chanute Air Force Base. The first chapter considers four major factors influencing lesson production: (1) implementation of the "lean approach," (2) the Instructional Systems Development (ISD) role in lesson production, (3) the transfer of programmed…

  6. A geometrically-conservative, synchronized, flux-corrected remap for arbitrary Lagrangian-Eulerian computations with nodal finite elements

    NASA Astrophysics Data System (ADS)

    López Ortega, A.; Scovazzi, G.

    2011-07-01

    This article describes a conservative synchronized remap algorithm applicable to arbitrary Lagrangian-Eulerian computations with nodal finite elements. In the proposed approach, ideas derived from flux-corrected transport (FCT) methods are extended to conservative remap. Unique to the proposed method is the direct incorporation of the geometric conservation law (GCL) in the resulting numerical scheme. It is shown here that the geometric conservation law allows the method to inherit the positivity preserving and local extrema diminishing (LED) properties typical of FCT schemes. The proposed framework is extended to the systems of equations that typically arise in meteorological and compressible flow computations. The proposed algorithm remaps the vector fields associated with these problems by means of a synchronized strategy. The present paper also complements and extends the work of the second author on nodal-based methods for shock hydrodynamics, delivering a fully integrated suite of Lagrangian/remap algorithms for computations of compressible materials under extreme load conditions. Extensive testing in one, two, and three dimensions shows that the method is robust and accurate under typical computational scenarios.

  7. Energy law preserving C0 finite element schemes for phase field models in two-phase flow computations

    SciTech Connect

    Hua Jinsong; Lin Ping; Liu Chun; Wang Qi

    2011-08-10

    Highlights: We study phase-field models for multi-phase flow computation. We develop an energy-law preserving C0 FEM. We show that the energy-law preserving method works better. We overcome the unphysical oscillation associated with the Cahn-Hilliard model. - Abstract: We use the idea in to develop the energy law preserving method and compute the diffusive interface (phase-field) models of Allen-Cahn and Cahn-Hilliard type, respectively, governing the motion of two-phase incompressible flows. We discretize these two models using a C0 finite element in space and a modified midpoint scheme in time. To increase the stability in the pressure variable we treat the divergence-free condition by a penalty formulation, under which the discrete energy law can still be derived for these diffusive interface models. Through an example we demonstrate that the energy law preserving method is beneficial for computing these multi-phase flow models. We also demonstrate that when applying the energy law preserving method to the model of Cahn-Hilliard type, unphysical interfacial oscillations may occur. We examine the source of such oscillations and a remedy is presented to eliminate the oscillations. A few two-phase incompressible flow examples are computed to show the good performance of our method.
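
    As a toy check of the energy-decay property at stake (an explicit finite-difference step on 1D Allen-Cahn with invented parameters, not the paper's implicit C0 FEM with midpoint time stepping):

        import numpy as np

        n, eps, dt, nsteps = 256, 0.05, 1e-5, 2000
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        u = 0.1 * np.sin(4.0 * np.pi * x)

        def energy(u):
            """Discrete Allen-Cahn energy: sum of [eps^2/2 u_x^2 + (u^2-1)^2/4] h."""
            ux = np.gradient(u, h)
            return np.sum(0.5 * eps**2 * ux**2 + 0.25 * (u**2 - 1.0)**2) * h

        E0 = energy(u)
        for _ in range(nsteps):
            lap = np.empty_like(u)
            lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
            lap[0] = 2.0 * (u[1] - u[0]) / h**2        # homogeneous Neumann ends
            lap[-1] = 2.0 * (u[-2] - u[-1]) / h**2
            u += dt * (eps**2 * lap - (u**3 - u))      # gradient flow of the energy
        assert energy(u) < E0                          # energy decays along the flow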

  8. Non-uniform FFT for the finite element computation of the micromagnetic scalar potential

    NASA Astrophysics Data System (ADS)

    Exl, L.; Schrefl, T.

    2014-08-01

    We present a quasi-linearly scaling, first order polynomial finite element method for the solution of the magnetostatic open boundary problem by splitting the magnetic scalar potential. The potential is determined by solving a Dirichlet problem and evaluation of the single layer potential by a fast approximation technique based on Fourier approximation of the kernel function. The latter approximation leads to a generalization of the well-known convolution theorem used in finite difference methods. We address it by a non-uniform FFT approach. Overall, our method scales as O(M + N + N log N) for N nodes and M surface triangles. We confirm our approach by several numerical tests.

  9. U.S. Department of Energy Office of Inspector General report on audit of selected aspects of the unclassified computer security program at a DOE headquarters computing facility

    SciTech Connect

    1995-07-31

    The purpose of this audit was to evaluate the effectiveness of the unclassified computer security program at the Germantown Headquarters Administrative Computer Center (Center). The Department of Energy (DOE) relies on the application systems at the Germantown Headquarters Administrative Computer Center to support its financial, payroll and personnel, security, and procurement functions. The review was limited to an evaluation of the administrative, technical, and physical safeguards governing utilization of the unclassified computer system which hosts many of the Department's major application systems. The audit identified weaknesses in the Center's computer security program that increased the risk of unauthorized disclosure or loss of sensitive data. Specifically, the authors found that (1) access to sensitive data was not limited to individuals who had a need for the information, and (2) accurate and complete information was not maintained on the inventory of tapes at the Center. Furthermore, the risk of unauthorized disclosure and loss of sensitive data was increased because other controls, such as physical security, had not been adequately implemented at the Center. Management generally agreed with the audit conclusions and recommendations, and initiated a number of actions to improve computer security at the Center.

  10. A new finite element and finite difference hybrid method for computing electrostatics of ionic solvated biomolecule

    NASA Astrophysics Data System (ADS)

    Ying, Jinyong; Xie, Dexuan

    2015-10-01

    The Poisson-Boltzmann equation (PBE) is one widely-used implicit solvent continuum model for calculating electrostatics of ionic solvated biomolecule. In this paper, a new finite element and finite difference hybrid method is presented to solve PBE efficiently based on a special seven-overlapped box partition with one central box containing the solute region and surrounded by six neighboring boxes. In particular, an efficient finite element solver is applied to the central box while a fast preconditioned conjugate gradient method using a multigrid V-cycle preconditioning is constructed for solving a system of finite difference equations defined on a uniform mesh of each neighboring box. Moreover, the PBE domain, the box partition, and an interface fitted tetrahedral mesh of the central box can be generated adaptively for a given PQR file of a biomolecule. This new hybrid PBE solver is programmed in C, Fortran, and Python as a software tool for predicting electrostatics of a biomolecule in a symmetric 1:1 ionic solvent. Numerical results on two test models with analytical solutions and 12 proteins validate this new software tool, and demonstrate its high performance in terms of CPU time and memory usage.

  11. Comparison between two computer codes for PIXE studies applied to trace element analysis in amniotic fluid

    NASA Astrophysics Data System (ADS)

    Gertner, I.; Heber, O.; Zajfman, J.; Zajfman, D.; Rosner, B.

    1989-01-01

    Two different methods of analysis applicable for PIXE data are introduced and compared. In the first method Gaussian shaped peaks are fitted to the X-ray spectrum, and the complete analysis can be done on a microcomputer. The second is based on the Bayesian deconvolution method for simultaneous peak fitting and has to be carried out on a larger IBM computer. The advantage of the second method becomes evident for regions of poor statistics or where many overlapping peaks occur in the spectrum. The comparisons between the methods made on PIXE measurements obtained from 55 amniotic fluid samples gave satisfactory agreement.
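
    The first of the two methods is ordinary nonlinear least-squares fitting of Gaussian peaks over a background, which fits comfortably on a microcomputer-scale budget. A sketch with synthetic counts (not the study's spectra):

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss_peaks(x, *p):
            """Flat background plus Gaussian peaks: p = [b, A1, mu1, s1, A2, ...]."""
            y = np.full_like(x, p[0])
            for A, mu, s in zip(p[1::3], p[2::3], p[3::3]):
                y += A * np.exp(-0.5 * ((x - mu) / s)**2)
            return y

        rng = np.random.default_rng(3)
        x = np.arange(200.0)
        y = rng.poisson(gauss_peaks(x, 20.0, 400.0, 80.0, 6.0,
                                    150.0, 95.0, 6.0)).astype(float)  # two overlapping peaks

        p0 = [15.0, 300.0, 78.0, 5.0, 100.0, 97.0, 5.0]           # initial guesses
        popt, _ = curve_fit(gauss_peaks, x, y, p0=p0,
                            sigma=np.sqrt(np.maximum(y, 1.0)))     # counting statistics
        areas = popt[1::3] * popt[3::3] * np.sqrt(2.0 * np.pi)     # fitted peak areas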

  12. Computer-originated polarizing holographic optical element recorded in photopolymerizable layers.

    PubMed

    Carré, C; Habraken, S; Roose, S

    1993-05-01

    The photosensitive system used in most cases to produce holographic optical elements is dichromated gelatin. Other materials may be used, in particular, photopolymerizable layers. In the present investigation, we set out to use the polymer developed in the Laboratoire de Photochimie Générale in Mulhouse in order to duplicate a computer-generated hologram. Our technique is intended to generate polarizing properties. We took into account the fact that no wet chemistry processing is required; grating fringe spacings are not distorted through chemical development. PMID:19802257

  13. Towards Exascale Computing with NUMA: an Element-based Galerkin Nonhydrostatic Global and Mesoscale Atmospheric Modeling

    NASA Astrophysics Data System (ADS)

    Giraldo, F.; Mueller, A.; Kopera, M. A.; Abdi, D. S.; Wilcox, L.

    2015-12-01

    In this talk, we shall describe the NUMA atmospheric model, focusing in particular on its unified continuous/discontinuous (CG and DG) Galerkin numerical methods that are used to represent the spatial derivatives. We shall describe how these two methods are formulated in a unified approach and the advantages that this brings. We will also report on the progress in extending NUMA to using adaptive mesh refinement. Lastly, we will report on the scalability and performance of NUMA on the leadership computing facilities (LCF) of the Department of Energy where we have scaled NUMA to over 3 million MPI threads achieving a 90% efficiency.

  14. Books and monographs on finite element technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1985-01-01

    The present paper provides a listing of all of the English books and some of the foreign books on finite element technology, together with a list of the conference proceedings devoted solely to finite elements. The references are divided into categories. Attention is given to fundamentals, mathematical foundations, structural and solid mechanics applications, fluid mechanics applications, other applied science and engineering applications, computer implementation and software systems, computational and modeling aspects, special topics, boundary element methods, proceedings of symposia and conferences on finite element technology, bibliographies, handbooks, and historical accounts.

  15. NASTRAN data generation of helicopter fuselages using interactive graphics. [preprocessor system for finite element analysis using IBM computer

    NASA Technical Reports Server (NTRS)

    Sainsbury-Carter, J. B.; Conaway, J. H.

    1973-01-01

    The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real-time interaction plus automatic data generation reduces the nominal 6 to 10 week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard copy plots of the model labeled with GRID or element IDs are also available. General purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.

  16. Evaluation of accuracy of non-linear finite element computations for surgical simulation: study using brain phantom.

    PubMed

    Ma, J; Wittek, A; Singh, S; Joldes, G; Washio, T; Chinzei, K; Miller, K

    2010-12-01

    In this paper, the accuracy of non-linear finite element computations in application to surgical simulation was evaluated by comparing the experiment and modelling of indentation of the human brain phantom. The evaluation was realised by comparing forces acting on the indenter and the deformation of the brain phantom. The deformation of the brain phantom was measured by tracking 3D motions of X-ray opaque markers, placed within the brain phantom using a custom-built bi-plane X-ray image intensifier system. The model was implemented using the ABAQUS(TM) finite element solver. Realistic geometry obtained from magnetic resonance images and specific constitutive properties determined through compression tests were used in the model. The model accurately predicted the indentation force-displacement relations and marker displacements. Good agreement between modelling and experimental results verifies the reliability of the finite element modelling techniques used in this study and confirms the predictive power of these techniques in surgical simulation. PMID:21153973

  17. A nonlocal modified Poisson-Boltzmann equation and finite element solver for computing electrostatics of biomolecules

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Jiang, Yi

    2016-10-01

    The nonlocal dielectric approach has been studied for more than forty years but was limited to water solvent until the recent work of Xie et al. (2013) [20]. Building on that work, this paper proposes a nonlocal modified Poisson-Boltzmann equation (NMPBE) that incorporates nonlocal dielectric effects into the classic Poisson-Boltzmann equation (PBE) for proteins in ionic solvent. The focus of this paper is to present an efficient finite element algorithm and a related software package for solving the NMPBE. Numerical results are reported to validate this new software package and demonstrate its high performance for protein molecules. They also show the potential of the NMPBE as a better predictor of electrostatic solvation and binding free energies than the PBE.

  18. Computational flow simulation of liquid oxygen in a SSME preburner injector element LOX post

    NASA Technical Reports Server (NTRS)

    Rocker, Marvin

    1990-01-01

    Liquid oxygen (LOX) is simulated as an incompressible flow through a Space Shuttle main engine fuel preburner injector element LOX post for the full range of operating conditions. Axial profiles of axial velocity and static pressure are presented. For each operating condition analyzed, the minimum pressure downstream of the orifice is compared to the vapor pressure to determine if cavitation could occur. Flow visualization is provided by velocity vectors and stream function contours. The results indicate that the minimum pressure is too high for cavitation to occur. To establish confidence in the CFD analysis, the simulation is repeated with water flow through a superscaled LOX post and compared with experimental results. The agreement between calculated and experimental results is very good.

  19. An Objective Evaluation of Mass Scaling Techniques Utilizing Computational Human Body Finite Element Models.

    PubMed

    Davis, Matthew L; Scott Gayzik, F

    2016-10-01

    Biofidelity response corridors developed from post-mortem human subjects are commonly used in the design and validation of anthropomorphic test devices and computational human body models (HBMs). Typically, corridors are derived from a diverse pool of biomechanical data and later normalized to a target body habitus. The objective of this study was to use morphed computational HBMs to compare the ability of various scaling techniques to scale response data from a reference to a target anthropometry. HBMs are ideally suited for this type of study since they uphold the assumptions of equal density and modulus that are implicit in scaling method development. In total, six scaling procedures were evaluated, four from the literature (equal-stress equal-velocity, ESEV, and three variations of impulse momentum) and two which are introduced in the paper (ESEV using a ratio of effective masses, ESEV-EffMass, and a kinetic energy approach). In total, 24 simulations were performed, representing both pendulum and full body impacts for three representative HBMs. These simulations were quantitatively compared using the International Organization for Standardization (ISO) ISO-TS18571 standard. Based on these results, ESEV-EffMass achieved the highest overall similarity score (indicating that it is most proficient at scaling a reference response to a target). Additionally, ESEV was found to perform poorly for two degree-of-freedom (DOF) systems. However, the results also indicated that no single technique was clearly the most appropriate for all scenarios. PMID:27457051
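    A minimal sketch may help make the scaling idea concrete. The code below uses the standard equal-stress equal-velocity (ESEV) convention in which the characteristic length scale is the cube root of a mass ratio; the function name and the example masses are illustrative assumptions, and the paper's ESEV-EffMass variant (which substitutes a ratio of effective masses for the total-mass ratio) is not reproduced here.

    ```python
    import numpy as np

    def esev_scale(force, deflection, time, m_ref, m_target):
        """Scale a reference response history to a target mass using
        equal-stress equal-velocity (ESEV) scaling. Assumes geometric
        similarity with equal density and modulus, so the length scale
        is the cube root of the mass ratio."""
        lam = (m_target / m_ref) ** (1.0 / 3.0)   # characteristic length ratio
        return force * lam**2, deflection * lam, time * lam

    # Illustrative only: scale a 75 kg reference pulse to a 47 kg target.
    t = np.linspace(0.0, 0.05, 200)             # time base, s
    force = 1.0e3 * np.sin(np.pi * t / 0.05)    # idealized impact pulse, N
    f_s, d_s, t_s = esev_scale(force, 0.02, t, m_ref=75.0, m_target=47.0)
    ```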

  1. Generalized odontodysplasia in a 5-year-old patient with Hallermann-Streiff syndrome: clinical aspects, cone beam computed tomography findings, and conservative clinical approach.

    PubMed

    Damasceno, Juliana Ximenes; Couto, José Luciano Pimenta; Alves, Karla Shangela da Silva; Chaves, Cauby Maia; Costa, Fábio Wildson Gurgel; Pimenta, Alynne de Menezes Vieira; Fonteles, Cristiane Sá Roriz

    2014-08-01

    This article aims to report the main clinical aspects, cone beam computed tomography (CBCT) findings, and conservative oral rehabilitation in a child born from a consanguineous marriage who presented with Hallermann-Streiff syndrome (HSS) and generalized odontodysplasia. A 5-year-old girl with a diagnosis of HSS presented for oral evaluation. Radiographically, all teeth showed wide pulp chambers and roots with thin dentinal walls and open apices, resembling ghost teeth and indicating a diagnosis of odontodysplasia. Oral rehabilitation consisted of partial dentures that were regularly adjusted to keep the appliance in step with the child's pattern of growth and development. The CBCT scan provided great insight into HSS, allowing a detailed view of the morphologic aspects and associated trabecular bone pattern. Treatment of these 2 rare conditions in young children must consider the stage of growth and development. Although extremely rare in HSS, odontodysplasia should be investigated and conservatively managed in young children.

  2. Thinking Together: Exploring Aspects of Shared Thinking between Young Children during a Computer-Based Literacy Task

    ERIC Educational Resources Information Center

    Wild, Mary

    2011-01-01

    This study considers in what ways sustained shared thinking between young children aged 5-6 years can be facilitated by working in dyads on a computer-based literacy task. The study considers 107 observational records of 44 children from 6 different schools, in Oxfordshire in the UK, collected over the course of a school year. The study raises…

  3. Computer literacy and attitudes among students in 16 European dental schools: current aspects, regional differences and future trends.

    PubMed

    Mattheos, N; Nattestad, A; Schittek, M; Attström, R

    2002-02-01

    A questionnaire survey was carried out to investigate the competence and attitudes of dental students towards computers. The current study presents the findings derived from 590 questionnaires collected from 16 European dental schools in 9 countries between October 1998 and October 1999. The results suggest that 60% of students use computers for their education, while 72% have access to the Internet. The overall figures, however, disguise major differences between the various universities. Students in Northern and Western Europe seem to rely mostly on university facilities to access the Internet. The same, however, is not true for students in Greece and Spain, who appear to depend on home computers. Less than half the students have been exposed to some form of computer literacy education in their universities, with the great majority acquiring their competence in other ways. The Information and Communication Technology (ICT) skills of the average dental student, within this limited sample of dental schools, do not allow full use of the new media available. In addition, if the observed regional differences are valid, there may be an educational and political problem that could intensify inequalities among professionals in the future. To minimize this potential problem, closer cooperation between academic institutions, with sharing of resources and expertise, is recommended. PMID:11872071

  4. Human factors in the presentation of computer-generated information - Aspects of design and application in automated flight traffic

    NASA Technical Reports Server (NTRS)

    Roske-Hofstrand, Renate J.

    1990-01-01

    The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.

  5. The effects of computer game elements in physics instruction software for middle schools: A study of cognitive and affective gains

    NASA Astrophysics Data System (ADS)

    Vasquez, David Alan

    Can the educational effectiveness of physics instruction software for middle schoolers be improved by employing "game elements" commonly found in recreational computer games? This study utilized a selected set of game elements to contextualize and embellish physics word problems with the aim of making such problems more engaging. Game elements used included: (1) a fantasy-story context with developed characters; and (2) high-end graphics and visual effects. The primary purpose of the study was to find out if the added production cost of using such game elements was justified by proportionate gains in physics learning. The theoretical framework for the study was a modified version of Lepper and Malone's "intrinsically-motivating game elements" model. A key design issue in this model is the concept of "endogeneity", or the degree to which the game elements used in educational software are integrated with its learning content. Two competing courseware treatments were custom-designed and produced for the study; both dealt with Newton's first law. The first treatment (T1) was a 45 minute interactive tutorial that featured cartoon characters, color animations, hypertext, audio narration, and realistic motion simulations using the Interactive Physics(TM) software. The second treatment (T2) was similar to the first except for the addition of approximately three minutes of cinema-like sequences where characters, game objectives, and a science-fiction story premise were described and portrayed with high-end graphics and visual effects. The sample of 47 middle school students was evenly divided between eighth and ninth graders and between boys and girls. Using a pretest/posttest experimental design, the independent variables for the study were: (1) two levels of treatment; (2) gender; and (3) two schools. The dependent variables were scores on a written posttest for both: (1) physics learning, and (2) attitude toward physics learning. Findings indicated that, although

  6. Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Two types of rolling-element bearings representing radial loaded and thrust loaded bearings were used for this study. Three hundred forty (340) virtual bearing sets totaling 31,400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5321 bearings. A simple algebraic relation was established for the upper and lower L(sub 10) life limits as a function of the number of bearings failed for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variation between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and bearing data correlated. There was excellent agreement between the percent of individual components failed from the Monte Carlo simulation and that predicted.
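    The virtual testing described above can be sketched in a few lines: draw bearing lives from a two-parameter Weibull distribution (using the abstract's slope of 1.5), assemble them into random sets, and read off the empirical L10 of each set. The set size, seed, and normalization below are illustrative assumptions, not the study's actual sampling plan.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def l10_from_sample(lives):
        """Empirical L10 life: the life by which 10% of a population has failed."""
        return np.percentile(lives, 10)

    # Weibull life model F(L) = 1 - exp(-(L/eta)**beta), with slope beta = 1.5.
    beta, l10_design = 1.5, 1.0                          # normalized design L10
    eta = l10_design / (-np.log(0.9)) ** (1.0 / beta)    # scale implied by L10

    # Assemble and "test" 340 virtual sets of (hypothetically) 30 bearings each.
    set_l10s = [l10_from_sample(eta * rng.weibull(beta, size=30))
                for _ in range(340)]

    # Roughly half the set L10s fall below the calculated (design) value,
    # consistent with the 50-percent probability statement in the abstract.
    print(np.median(set_l10s), np.percentile(set_l10s, [5, 95]))
    ```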

  7. Semi-automatic computer construction of three-dimensional shapes for the finite element method.

    PubMed

    Aharon, S; Bercovier, M

    1993-12-01

    Precise estimation of the spatio-temporal distribution of ions (or other constituents) in three-dimensional geometrical configurations plays a major role in biology. Since direct experimental information regarding the free intracellular Ca2+ spatio-temporal distribution is not available to date, mathematical models have been developed. Most of the existing models are based on the classical numerical method of finite differences (FD). Using this method one is limited when dealing with complicated geometry, general boundary conditions and variable or non-linear material properties. These difficulties are easily solved when the finite element method (FEM) is employed. The first step in the implementation of the FEM procedure is mesh generation, which is the single most tedious and time-consuming task, and the one most vulnerable to mistakes. In order to overcome these limitations we developed a new interface called AUTOMESH. This tool is used as a preprocessor program which generates two- and three-dimensional meshes for some known and often-used shapes in neurobiology. AUTOMESH creates an appropriate mesh by using the commercial mesh-generator tool of FIDAP.

  8. Structure and micro-computed tomography-based finite element modeling of Toucan beak.

    PubMed

    Seki, Yasuaki; Mackey, Mason; Meyers, Marc A

    2012-05-01

    Bird beaks are one of the most fascinating sandwich composites in nature. Their design is composed of a keratinous integument and a bony foam core. We evaluated the structure and mechanical properties of a Toucan beak to establish structure-property relationships. We revealed the hierarchical structure of the Toucan beak by microscopy techniques. The integument consists of 50 μm polygonal keratin tiles with ~7.5 nm embedded intermediate filaments. The branched intermediate filaments were visualized by TEM tomography techniques. The bony foam core, or trabecular bone, is a closed-cell foam which serves as a stiffener for the beak. The three-dimensional foam structure was reconstructed by μ-CT scanning to create a model for the finite element analysis (FEA). The mechanical response of the beak foam including trabeculae and cortical shell was measured in tension and compression. We found that Young's modulus is 3 (S.D. 2.2) GPa for the trabeculae and 0.3 (S.D. 0.2) GPa for the cortical shell. After obtaining the material parameters, the deformation and microscopic failure of the foam were calculated by FEA. The calculations agree well with the experimental results. PMID:22498278

  9. Computational Study of Laminar Flow Control on a Subsonic Swept Wing Using Discrete Roughness Elements

    NASA Technical Reports Server (NTRS)

    Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; Streett, Craig L.; Carpenter, Mark H.

    2011-01-01

    A combination of parabolized stability equations and secondary instability theory has been applied to a low-speed swept airfoil model with a chord Reynolds number of 7.15 million, with the goals of (i) evaluating this methodology in the context of transition prediction for a known configuration for which roughness-based crossflow transition control has been demonstrated under flight conditions and (ii) analyzing the mechanism of transition delay via the introduction of discrete roughness elements (DRE). Roughness-based transition control involves controlled seeding of suitable, subdominant crossflow modes, so as to weaken the growth of naturally occurring, linearly more unstable crossflow modes. Therefore, a synthesis of receptivity, linear and nonlinear growth of stationary crossflow disturbances, and the ensuing development of high-frequency secondary instabilities is desirable to understand the experimentally observed transition behavior. With further validation, such higher-fidelity prediction methodology could be utilized to assess the potential for crossflow transition control at even higher Reynolds numbers, where experimental data is currently unavailable.

  10. Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2016-01-01

    Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort is a generalization of previous SS spectral collocation work to extend the applicable set of points from tensor-product Legendre-Gauss-Lobatto (LGL) to tensor-product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although more costly to implement, the LG operators are shown to be significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.
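    As a small, quadrature-level illustration of why LG points can be more accurate than LGL points of the same size, the sketch below uses NumPy's Gauss-Legendre rule: n-point LG quadrature is exact to polynomial degree 2n-1, whereas n-point LGL quadrature is exact only to 2n-3. This is only an analogy under that assumption, not the paper's SBP or entropy-stable operators.

    ```python
    import numpy as np

    # Legendre-Gauss (LG) nodes and weights on [-1, 1].
    n = 4
    x, w = np.polynomial.legendre.leggauss(n)

    # Integrate x^6 exactly with n = 4, since degree 6 <= 2n - 1 = 7.
    # (A 4-point LGL rule, exact only to degree 2n - 3 = 5, would not be exact.)
    approx = np.dot(w, x**6)
    exact = 2.0 / 7.0
    print(approx, exact)   # agree to machine precision
    ```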

  11. Development of a Computationally Efficient, High Fidelity, Finite Element Based Hall Thruster Model

    NASA Technical Reports Server (NTRS)

    Jacobson, David (Technical Monitor); Roy, Subrata

    2004-01-01

    This report documents the development of a two-dimensional finite element based numerical model for efficient characterization of the Hall thruster plasma dynamics in the framework of a multi-fluid model. The effects of ionization and recombination have been included in the present model. Based on experimental data, a third-order polynomial in electron temperature is used to calculate the ionization rate. The neutral dynamics is included only through the neutral continuity equation in the presence of a uniform neutral flow. The electrons are modeled as magnetized and hot, whereas ions are assumed magnetized and cold. The dynamics of the Hall thruster is also investigated in the presence of plasma-wall interaction. The plasma-wall interaction is a function of wall potential, which in turn is determined by the secondary electron emission and sputtering yield. The effects of secondary electron emission and sputter yield have been considered simultaneously. Simulation results are interpreted in the light of experimental observations and available numerical solutions in the literature.

  12. Parallel computation in a three-dimensional elastic-plastic finite-element analysis

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Bigelow, C. A.; Newman, J. C., Jr.

    1992-01-01

    A CRAY parallel processing technique called autotasking was implemented in a three-dimensional elasto-plastic finite-element code. The technique was evaluated on two CRAY supercomputers, a CRAY 2 and a CRAY Y-MP. Autotasking was implemented in all major portions of the code, except the matrix equations solver. Compiler directives alone were not able to properly multitask the code; user-inserted directives were required to achieve better performance. It was noted that the connect time, rather than wall-clock time, was more appropriate to determine speedup in multiuser environments. For a typical example problem, a speedup of 2.1 (1.8 when the solution time was included) was achieved in a dedicated environment and 1.7 (1.6 with solution time) in a multiuser environment on a four-processor CRAY 2 supercomputer. The speedup on a three-processor CRAY Y-MP was about 2.4 (2.0 with solution time) in a multiuser environment.

  13. An assessment of the performance of the Spanwise Iron Magnet rolling moment generating system for magnetic suspension and balance systems using the finite element computer program GFUN

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1982-01-01

    The development of a powerful method of magnetic roll torque generation is essential before construction of a large magnetic suspension and balance system (LMSBS) can be undertaken. Some preliminary computed data concerning a relatively new dc scheme, referred to as the spanwise iron magnet scheme are presented. Computations made using the finite element computer program 'GFUN' indicate that adequate torque is available for at least a first generation LMSBS. Torque capability appears limited principally by current electromagnet technology.

  14. Computationally-efficient finite-element-based thermal and electromagnetic models of electric machines

    NASA Astrophysics Data System (ADS)

    Zhou, Kan

    With the modern trend of transportation electrification, electric machines are a key component of electric/hybrid electric vehicle (EV/HEV) powertrains. It is therefore important that vehicle powertrain-level and system-level designers and control engineers have access to accurate yet computationally-efficient (CE), physics-based modeling tools of the thermal and electromagnetic (EM) behavior of electric machines. In this dissertation, CE yet sufficiently-accurate thermal and EM models for electric machines, which are suitable for use in vehicle powertrain design, optimization, and control, are developed. This includes not only creating fast and accurate thermal and EM models for specific machine designs, but also the ability to quickly generate and determine the performance of new machine designs through the application of scaling techniques to existing designs. With the developed techniques, the thermal and EM performance can be accurately and efficiently estimated. Furthermore, powertrain or system designers can easily and quickly adjust the characteristics and the performance of the machine in ways that are favorable to the overall vehicle performance.

  15. Finite element techniques in computational time series analysis of turbulent flows

    NASA Astrophysics Data System (ADS)

    Horenko, I.

    2009-04-01

    In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (e.g., wavelet-based spectral methods) have in general unfeasible numerical complexity in high dimensions, while other standard methods (e.g., Kalman filter, MVAR, ARCH/GARCH) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example and the results will be compared with the ones obtained by standard approaches. The importance of accounting for the mathematical

  16. [Development of a computer program to simulate the predictions of the replaced elements model of Pavlovian conditioning].

    PubMed

    Vogel, Edgar H; Díaz, Claudia A; Ramírez, Jorge A; Jarur, Mary C; Pérez-Acosta, Andrés M; Wagner, Allan R

    2007-08-01

    Despite the apparent simplicity of Pavlovian conditioning, research on its mechanisms has caused considerable debate, such as the dispute about whether the associated stimuli are coded in an "elementistic" (a compound stimulus is equivalent to the sum of its components) or a "configural" (a compound stimulus is a unique exemplar) fashion. This controversy is evident in the abundant research on the contrasting predictions of elementistic and configural models. Recently, some mixed solutions have been proposed, which, although they have the advantages of both approaches, are difficult to evaluate due to their complexity. This paper presents a computer program to conduct simulations of a mixed model (the replaced elements model, or REM). Instructions and examples are provided for using the simulator for research and educational purposes.
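    The simulator itself implements REM, whose equations are not reproduced here; as background, the sketch below shows the purely elemental baseline (a Rescorla-Wagner update in which a compound's prediction is the sum of its elements' weights) that REM generalizes by adding context-dependent replaced elements. All parameters and the training schedule are illustrative.

    ```python
    import numpy as np

    def rescorla_wagner(trials, alpha=0.3, lam=1.0):
        """Minimal elemental model: the prediction for a compound is the
        sum of its elements' associative weights (the 'elementistic' coding
        contrasted with configural coding in the abstract)."""
        V = np.zeros(len(trials[0][0]))
        for x, us in trials:
            x = np.asarray(x, dtype=float)
            prediction = V @ x                     # elemental summation
            V += alpha * (lam * us - prediction) * x
        return V

    # A+ / AB- discrimination: A alone reinforced, compound AB not reinforced.
    A, AB = [1, 0], [1, 1]
    V = rescorla_wagner([(A, 1), (AB, 0)] * 50)
    print("V(A), V(B):", V)   # B acquires a negative (inhibitory) weight
    ```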

  17. Structure analysis of the primary mirror support for the TIM using computer-aided finite element method

    NASA Astrophysics Data System (ADS)

    Farah Simon, Alejandro; Pedrayes, Maria H.; Ruiz Schneider, Elfego; Sierra, Gerardo; Quiros-Pacheco, Fernando; Godoy, Javier; Sohn, Erika

    2000-08-01

    The Mexican Infrared Telescope is one of the most important projects in the Institute for Astronomy of the National University of Mexico. As part of the design we intend to simulate different components of the telescope by the Finite Element Method (FEM). One of the most important parts of the structure is the primary mirror support. This structure is under stress, causing deformations in the primary mirror; these deformations should not exceed 40 nanometers, the maximum permissible tolerance. One of the most interesting goals of this project is to make the segmented primary mirror work as if it were a monolithic one. Each segment has six degrees of freedom, whose control requires actuators and sensors with stiff mechanical structures. Our purpose is to achieve these levels of design using computer-aided FEM, and we intend to study several models of the structure array using the Conceptual Design Method, in an effort to optimize the design.

  18. Computed-tomography scan-based finite element analysis of stress distribution in premolars restored with composite resin

    NASA Astrophysics Data System (ADS)

    Kantardžić, I.; Vasiljević, D.; Blažić, L.; Puškar, T.; Tasić, M.

    2012-05-01

    The mechanical properties of a restorative material affect the stress distribution in the tooth structure and the restorative material during mastication. The aim of this study was to investigate the influence of restorative materials with different moduli of elasticity on stress distribution in a three-dimensional (3D) solid tooth model. Computed tomography scan data of human maxillary second premolars were used for 3D solid model generation. Four composite resins with moduli of elasticity of 6700, 9500, 14,100 and 21,000 MPa were considered to simulate four different clinical direct restoration types. Each model was subjected to a resultant force of 200 N directed onto the occlusal surface, and the stress distribution and maximal von Mises stresses were calculated using finite-element analysis. We found that the von Mises stress values and the stress distribution in tooth structures did not vary considerably with changes in the modulus of elasticity of the restorative material.
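    For readers unfamiliar with the reported quantity, the von Mises equivalent stress is computed from the stress tensor at each element or integration point as a norm of its deviatoric part. The sketch below shows the generic formula; the stress values are made up and this is not the study's solver.

    ```python
    import numpy as np

    def von_mises(stress):
        """Von Mises equivalent stress from a 3x3 Cauchy stress tensor."""
        s = np.asarray(stress, dtype=float)
        dev = s - np.trace(s) / 3.0 * np.eye(3)        # deviatoric part
        return np.sqrt(1.5 * np.tensordot(dev, dev))   # sqrt(3/2 * dev:dev)

    # Illustrative stress state (MPa) at one integration point.
    sigma = np.array([[40.0,  5.0,  0.0],
                      [ 5.0, 25.0,  0.0],
                      [ 0.0,  0.0, 10.0]])
    print(f"von Mises stress = {von_mises(sigma):.1f} MPa")
    ```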

  19. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-01

    One of the research activities in support of the commercial radioisotope production program is safety research on FPM (Fission Product Molybdenum) target irradiation. FPM targets take the form of a stainless-steel tube containing nuclear-grade high-enrichment uranium, and the irradiation of the tube is intended to yield fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. Mo isotopes have relatively long half-lives, about 3 days (66 hours), so delivery of the radioisotope to consumer centers and storage is possible, though still limited; production of this isotope potentially gives significant economic value. The criticality and flux in a multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system, and several parallel algorithms have been developed for solving such systems. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous works performed reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing to perform the reactivity calculations used in safety analysis; parallel processing on a multicore computer system allows the calculation to be performed more quickly. The code was applied to the safety-limit calculation of irradiated FPM targets containing highly enriched uranium. The neutronic calculation results show that for uranium contents of 1.7676 g and 6.1866 g (× 10^6 cm^-1) in a tube, their delta reactivities are the still
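    A minimal serial sketch of the SOR iteration named above is given below for a small dense system. The paper's parallel implementation would additionally require an ordering or partitioning of the unknowns (e.g., red-black coloring), which is not shown, and the test matrix here is illustrative.

    ```python
    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
        """Successive over-relaxation for Ax = b (nonzero diagonal assumed).
        omega in (0, 2); omega = 1 reduces to Gauss-Seidel."""
        x = np.zeros_like(b, dtype=float)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(len(b)):
                # Use already-updated entries below i, old entries above i.
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x

    # Small diagonally dominant test system.
    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([15.0, 10.0, 10.0])
    print(sor(A, b), np.linalg.solve(A, b))   # both give the same solution
    ```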

  20. Computation of full-field displacements in a scaffold implant using digital volume correlation and finite element analysis.

    PubMed

    Madi, K; Tozzi, G; Zhang, Q H; Tong, J; Cossey, A; Au, A; Hollis, D; Hild, F

    2013-09-01

    Measurements of three-dimensional displacements in a scaffold implant under uniaxial compression have been obtained by two digital volume correlation (DVC) methods, and compared with those obtained from micro-finite element models. The DVC methods were based on two approaches, a local approach which registers independent small volumes and yields discontinuous displacement fields; and a global approach where the registration is performed on the whole volume of interest, leading to continuous displacement fields. A customised mini-compression device was used to perform in situ step-wise compression of the scaffold within a micro-computed tomography (μCT) chamber, and the data were collected at steps of interest. Displacement uncertainties, ranging from 0.006 to 0.02 voxel (i.e. 0.12-0.4 μm), with a strain uncertainty between 60 and 600 με, were obtained with a spatial resolution of 32 voxels using both approaches, although the global approach has lower systematic errors. Reduced displacement and strain uncertainties may be obtained using the global approach by increasing the element size; and using the local approach by increasing the number of intermediary sub-volumes. Good agreements between the results from the DVC measurements and the FE simulations were obtained in the primary loading direction as well as in the lateral directions. This study demonstrates that volumetric strain measurements can be obtained successfully using DVC, which may be a useful tool to investigate mechanical behaviour of porous implants.
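    The core of the "local" DVC step, registering a small subvolume, can be illustrated with an FFT cross-correlation that recovers an integer-voxel shift. Real DVC implementations refine this to sub-voxel accuracy and handle deformation, none of which is shown; the array sizes and the applied shift are illustrative.

    ```python
    import numpy as np

    def subvolume_shift(ref, deformed):
        """Integer-voxel displacement of `deformed` relative to `ref` from
        the peak of their circular cross-correlation (computed via FFT)."""
        c = np.fft.ifftn(np.fft.fftn(deformed) * np.conj(np.fft.fftn(ref))).real
        peak = np.unravel_index(np.argmax(c), c.shape)
        # Map wrapped peak indices to signed shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, c.shape))

    rng = np.random.default_rng(0)
    ref = rng.random((32, 32, 32))
    deformed = np.roll(ref, shift=(2, -1, 3), axis=(0, 1, 2))
    print(subvolume_shift(ref, deformed))   # recovers (2, -1, 3)
    ```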

  1. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    NASA Astrophysics Data System (ADS)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a

  2. Towards drug repositioning: a unified computational framework for integrating multiple aspects of drug similarity and disease similarity.

    PubMed

    Zhang, Ping; Wang, Fei; Hu, Jianying

    2014-01-01

    In response to the high cost and high risk associated with traditional de novo drug discovery, investigation of potential additional uses for existing drugs, also known as drug repositioning, has attracted increasing attention from both the pharmaceutical industry and the research community. In this paper, we propose a unified computational framework, called DDR, to predict novel drug-disease associations. DDR formulates the task of hypothesis generation for drug repositioning as a constrained nonlinear optimization problem. It utilizes multiple drug similarity networks, multiple disease similarity networks, and known drug-disease associations to explore potential new associations among drugs and diseases with no known links. A large-scale study was conducted using 799 drugs against 719 diseases. Experimental results demonstrated the effectiveness of the approach. In addition, DDR ranked drug and disease information sources based on their contributions to the prediction, thus paving the way for prioritizing multiple data sources and building more reliable drug repositioning models. Particularly, some of our novel predictions of drug-disease associations were supported by clinical trials databases, showing that DDR could serve as a useful tool in drug discovery to efficiently identify potential novel uses for existing drugs. PMID:25954437

  3. CCM Continuity Constraint Method: A finite-element computational fluid dynamics algorithm for incompressible Navier-Stokes fluid flows

    SciTech Connect

    Williams, P.T.

    1993-09-01

    As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H^1 Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
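    For readers unfamiliar with the θ-implicit family named above, the sketch below applies the θ-method to a scalar model ODE; θ = 1/2 gives the Crank-Nicolson scheme. This is only a one-equation illustration of the time-integration class, not the dissertation's Navier-Stokes algorithm.

    ```python
    import numpy as np

    def theta_step(u, dt, lam, theta=0.5):
        """One theta-method step for du/dt = lam * u:
        theta = 0 is explicit Euler, 1 is implicit Euler, 0.5 is Crank-Nicolson.
        For this linear problem the implicit solve reduces to a division."""
        return u * (1 + (1 - theta) * dt * lam) / (1 - theta * dt * lam)

    u, dt, lam = 1.0, 0.1, -2.0
    for _ in range(10):                # integrate to t = 1
        u = theta_step(u, dt, lam)
    print(u, np.exp(lam * 1.0))        # compare with the exact solution
    ```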

  4. Local finite element enrichment strategies for 2D contact computations and a corresponding post-processing scheme

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2013-08-01

    Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum (Sauer, Int J Numer Meth Eng 87:593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C^1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.

  5. Genome-Wide Computational Analysis of Dioxin Response Element Location and Distribution in the Human, Mouse and Rat Genomes

    PubMed Central

    Dere, Edward; Forgacs, Agnes L; Zacharewski, Timothy R; Burgoon, Lyle D

    2014-01-01

    The aryl hydrocarbon receptor (AhR) mediates responses elicited by 2,3,7,8-tetrachlorodibenzo-p-dioxin by binding to dioxin response elements (DREs) containing the core consensus sequence 5′-GCGTG-3′. The human, mouse and rat genomes were computationally searched for all DRE cores. Each core was then extended by 7 bp upstream and downstream, and matrix similarity (MS) scores for the resulting 19-bp DRE sequences were calculated using a revised position weight matrix constructed from bona fide functional DREs. In total, 72,318 human, 70,720 mouse and 88,651 rat high-scoring (MS ≥ 0.8437) putative DREs were identified. Gene-encoding intragenic DNA regions had ~1.6 times more putative DREs than the non-coding intergenic DNA regions. Furthermore, the promoter region spanning ±1.5 kb of a TSS had the highest density of putative DREs within the genome. Chromosomal analysis found that the putative DRE densities of chromosomes X and Y were significantly lower than the mean chromosomal density. Interestingly, the 10-kb upstream promoter region on chromosome X was significantly less dense than the chromosomal mean, while the same region on chromosome Y was the most dense. In addition to providing a detailed genomic map of all DRE cores in the human, mouse and rat genomes, these data will further aid the elucidation of AhR-mediated signal transduction. PMID:21370876
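    The scanning procedure described above can be sketched directly: locate each GCGTG core, extend 7 bp on each side, and score the 19-mer against a position weight matrix. The PWM below is random (the published revised matrix is not reproduced) and the score is a simplified MatInspector-style ratio, so the numbers are purely illustrative.

    ```python
    import numpy as np

    # Illustrative 19-bp position weight matrix (rows A, C, G, T); random here.
    rng = np.random.default_rng(1)
    pwm = rng.random((4, 19))
    pwm /= pwm.sum(axis=0)                 # column-normalized base frequencies
    idx = {base: i for i, base in enumerate("ACGT")}

    def matrix_similarity(seq19):
        """Simplified similarity in [0, 1]: achieved column score over the
        maximum achievable column score."""
        score = sum(pwm[idx[b], j] for j, b in enumerate(seq19))
        return score / pwm.max(axis=0).sum()

    def scan_for_dres(genome, threshold):
        """Find GCGTG cores, extend 7 bp each side, keep high-scoring 19-mers."""
        hits, pos = [], genome.find("GCGTG")
        while pos != -1:
            start = pos - 7
            if start >= 0 and start + 19 <= len(genome):
                s = genome[start:start + 19]
                ms = matrix_similarity(s)
                if ms >= threshold:
                    hits.append((start, s, round(ms, 3)))
            pos = genome.find("GCGTG", pos + 1)
        return hits

    # Threshold lowered for this toy PWM; the study used MS >= 0.8437.
    print(scan_for_dres("AATTACGA" + "GCGTG" + "TTAGCCA" + "ACGT" * 5, 0.0))
    ```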

  6. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

    DOEpatents

    Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

    2001-01-16

    Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time" which especially enhances clinical use for in vivo applications. The real-time is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows for rapid calculations to be performed for simulated movements of particles along particle tracks there through. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume of the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of
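    A much-simplified sketch of the traversal described above follows: a particle track is advanced in unit increments along its primary direction through a uniform-voxel array of material IDs until the material changes or the track exits. The function, phantom geometry, and stopping rule are invented for illustration; the patent's actual data structures and physics are not reproduced.

    ```python
    import numpy as np

    def march_to_material_change(volume, start, direction):
        """Step along a particle track in integer-based increments along the
        primary direction of movement, returning the first voxel whose
        material differs from the starting voxel's (None if the track exits)."""
        pos = np.asarray(start, dtype=float)
        d = np.asarray(direction, dtype=float)
        d = d / abs(d[int(np.argmax(np.abs(d)))])   # unit step on primary axis
        material = volume[tuple(np.rint(pos).astype(int))]
        while True:
            pos = pos + d
            idx = np.rint(pos).astype(int)
            if not all(0 <= idx[i] < volume.shape[i] for i in range(3)):
                return None                          # particle exits the model
            if volume[tuple(idx)] != material:
                return tuple(idx)                    # material interface found

    # Two-material phantom: material 1 begins at z = 10.
    vol = np.zeros((32, 32, 32), dtype=int)
    vol[:, :, 10:] = 1
    print(march_to_material_change(vol, (16, 16, 0), (0.1, 0.0, 1.0)))
    # -> (17, 16, 10), the first voxel in the new material
    ```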

  7. Examining the Minimal Required Elements of a Computer-Tailored Intervention Aimed at Dietary Fat Reduction: Results of a Randomized Controlled Dismantling Study

    ERIC Educational Resources Information Center

    Kroeze, Willemieke; Oenema, Anke; Dagnelie, Pieter C.; Brug, Johannes

    2008-01-01

    This study investigated the minimally required feedback elements of a computer-tailored dietary fat reduction intervention needed for it to be effective in improving fat intake. In all, 588 healthy Dutch adults were randomly allocated to one of four conditions in a randomized controlled trial: (i) feedback on dietary fat intake [personal feedback (P feedback)],…

  8. A 3-D finite-element computation of eddy currents and losses in laminated iron cores allowing for electric and magnetic anisotropy

    SciTech Connect

    Silva, V.C.; Meunier, G.; Foggia, A.

    1995-05-01

    A 3-D scheme based on the Finite Element Method, which takes electric and magnetic anisotropy into consideration, has been developed for computing eddy-current losses caused by stray magnetic fields in laminated iron cores of large transformers and generators. The model is applied to some laminated iron-core samples and compared with equivalent solid-iron cases.

  9. Algorithms for high aspect ratio oriented triangulations

    NASA Technical Reports Server (NTRS)

    Posenau, Mary-Anne K.

    1995-01-01

    Grid generation plays an integral part in the solution of computational fluid dynamics problems for aerodynamics applications. A major difficulty with standard structured grid generation, which produces quadrilateral (or hexahedral) elements with implicit connectivity, has been the requirement for a great deal of human intervention in developing grids around complex configurations. This has led to investigations into unstructured grids with explicit connectivities, which are primarily composed of triangular (or tetrahedral) elements, although other subdivisions of convex cells may be used. The existence of large gradients in the solution of aerodynamic problems may be exploited to reduce the computational effort by using high aspect ratio elements in high-gradient regions. However, the heuristic approaches currently in use do not adequately address this need for high aspect ratio unstructured grids, and high aspect ratio triangulations very often produce the large angles that are to be avoided. Point generation techniques based on contour or front generation are judged to be the most promising in terms of being able to handle complicated multiple-body objects, with this technique lending itself well to adaptivity. The eventual goal encompasses several phases: first, a partitioning phase, in which the Voronoi diagram of a set of points and line segments (the input set) will be generated to partition the input domain; second, a contour-generation phase, in which body-conforming contours are used to subdivide the partition further as well as introduce the foundation for aspect ratio control; and third, a Steiner triangulation phase, in which points are added to the partition to enable triangulation while controlling angle bounds and aspect ratio. This provides a combination of the advancing front/contour techniques and refinement. By using a front, aspect ratio can be better controlled. By using refinement, bounds on angles can be maintained, while attempting to minimize
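    Since aspect ratio is the controlling quantity in the discussion above, a small function may help fix the definition. Several definitions are in use; the one below (circumradius over twice the inradius, equal to 1 for an equilateral triangle and growing without bound as the triangle degenerates) is just one common choice, not necessarily the paper's.

    ```python
    import numpy as np

    def aspect_ratio(p0, p1, p2):
        """Triangle aspect ratio: circumradius / (2 * inradius)."""
        a = np.linalg.norm(np.subtract(p1, p2))
        b = np.linalg.norm(np.subtract(p0, p2))
        c = np.linalg.norm(np.subtract(p0, p1))
        s = 0.5 * (a + b + c)                               # semi-perimeter
        area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 1e-300))  # Heron
        circumradius = a * b * c / (4.0 * area)
        inradius = area / s
        return circumradius / (2.0 * inradius)

    print(aspect_ratio((0, 0), (1, 0), (0.5, np.sqrt(3) / 2)))  # ~1, equilateral
    print(aspect_ratio((0, 0), (1, 0), (0.5, 0.01)))            # highly stretched
    ```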

  10. The Use of Computer Games as an Educational Tool: Identification of Appropriate Game Types and Game Elements.

    ERIC Educational Resources Information Center

    Amory, Alan; Naicker, Kevin; Vincent, Jacky; Adams, Claudia

    1999-01-01

    Describes research with college students that investigated commercial game types and game elements to determine what would be suitable for education. Students rated logic, memory, visualization, and problem solving as important game elements that are used to develop a model that links pedagogical issues with game elements. (Author/LRW)

  11. Effect of alloying elements on passivity and breakdown of passivity of Fe- and Ni-based alloys: mechanistic aspects. Annual report, August 1, 1991--July 31, 1992

    SciTech Connect

    Szklarska-Smialowska, Z.

    1992-06-01

    On the basis of the literature data and the current results, a mechanism of pitting corrosion of Al alloys is proposed. An assumption is made that the transport of Cl- ions through defects in the passive film of aluminum and aluminum alloys is not a rate-determining step in pitting. Pit development is controlled by the solubility of the oxidized alloying elements in acid solutions. A very good correlation was found between the pitting potential and the oxidized alloying elements for metastable Al-Cr, Al-Zr, Al-W, and Al-Zn alloys. We expect that the effect of oxidized alloying elements in other passive alloys will be the same as in Al alloys. To verify this hypothesis, susceptibility to pitting as a function of alloying elements in the binary alloys, together with the composition of the oxide film, has to be measured. We propose studying Fe- and Ni-alloys produced by a sputtering deposition method. Using this method a single-phase alloy can be obtained, even when the two metals are immiscible using conventional methods. Another advantage of studying sputtered alloys is the possibility of finding new materials with superior resistance to localized corrosion.

  12. Biological Aspects of Computer Virology

    NASA Astrophysics Data System (ADS)

    Vlachos, Vasileios; Spinellis, Diomidis; Androutsellis-Theotokis, Stefanos

    Recent malware epidemics proved beyond any doubt that frightful predictions of fast-spreading worms have been well founded. While we can identify and neutralize many types of malicious code, often we are not able to do that in a timely enough manner to suppress its uncontrolled propagation. In this paper we discuss the decisive factors that affect the propagation of a worm and evaluate their effectiveness.

  13. Modeling of Interior Ballistic Gas-Solid Flow Using a Coupled Computational Fluid Dynamics-Discrete Element Method.

    PubMed

    Cheng, Cheng; Zhang, Xiaobing

    2013-05-01

    In conventional models for two-phase reactive flow in interior ballistics, the dynamic collision of particles is neglected or empirically simplified. However, collisions between particles may play an important role even in dilute two-phase flow because the distribution of particles is extremely nonuniform, and the collision force may be one of the key factors influencing particle movement. This paper presents a CFD-DEM approach for simulation of interior ballistic two-phase flow that takes the dynamic collision process into account. The gas phase is treated as a Eulerian continuum and described by a computational fluid dynamics (CFD) method. The solid phase is modeled by the discrete element method (DEM) using a soft-sphere approach for the particle collision dynamics. The model takes into account grain combustion, particle-particle collisions, particle-wall collisions, interphase drag and heat transfer between gas and solid phases. The continuous gas-phase equations are discretized in finite volume form and solved by the AUSM+-up scheme with a higher-order accurate reconstruction method. Translational and rotational motions of discrete particles are solved by explicit time integration. The direct mapping contact detection algorithm is used. The multigrid method is applied in the void fraction calculation, the contact detection procedure, and the CFD solving procedure. Several verification tests demonstrate the accuracy and reliability of this approach. The simulation of an experimental igniter device in open air shows good agreement between the model and experimental measurements. This paper has implications for improving the ability to capture the complex physical phenomena of two-phase flow during the interior ballistic cycle and to predict dynamic collision phenomena at the individual particle scale.
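    The soft-sphere DEM contact named above resolves collisions by allowing small overlaps and applying a spring-dashpot force; a minimal normal-force sketch is given below. The stiffness and damping values are illustrative, and the paper's full model (tangential forces, rotation, grain-combustion coupling) is not reproduced.

    ```python
    import numpy as np

    def soft_sphere_contact(x1, x2, v1, v2, r1, r2, kn=1e5, cn=5.0):
        """Normal contact force on particle 1 from a linear spring-dashpot
        (soft-sphere) model: F = kn * overlap - cn * normal relative velocity.
        kn and cn are illustrative parameters, not calibrated values."""
        d = np.asarray(x2, dtype=float) - np.asarray(x1, dtype=float)
        dist = np.linalg.norm(d)
        overlap = r1 + r2 - dist
        if overlap <= 0.0:
            return np.zeros(3)                     # spheres not in contact
        n = d / dist                               # unit normal from 1 to 2
        vn = np.dot(np.asarray(v2) - np.asarray(v1), n)
        magnitude = kn * overlap - cn * vn         # spring plus damping
        return -magnitude * n                      # repulsion pushes 1 away

    print(soft_sphere_contact([0, 0, 0], [0.009, 0, 0],
                              [0.1, 0, 0], [-0.1, 0, 0], 0.005, 0.005))
    ```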

  14. Computational finite element bone mechanics accurately predicts mechanical competence in the human radius of an elderly population.

    PubMed

    Mueller, Thomas L; Christen, David; Sandercott, Steve; Boyd, Steven K; van Rietbergen, Bert; Eckstein, Felix; Lochmüller, Eva-Maria; Müller, Ralph; van Lenthe, G Harry

    2011-06-01

    High-resolution peripheral quantitative computed tomography (HR-pQCT) is clinically available today and provides a non-invasive measure of 3D bone geometry and micro-architecture with unprecedented detail. In combination with microarchitectural finite element (μFE) models it can be used to determine bone strength using a strain-based failure criterion. Yet, images from only a relatively small part of the radius are acquired and it is not known whether the region recommended for clinical measurements does predict forearm fracture load best. Furthermore, it is questionable whether the currently used failure criterion is optimal because of improvements in image resolution, changes in the clinically measured volume of interest, and because the failure criterion depends on the amount of bone present. Hence, we hypothesized that bone strength estimates would improve by measuring a region closer to the subchondral plate, and by defining a failure criterion that would be independent of the measured volume of interest. To answer our hypotheses, 20% of the distal forearm length from 100 cadaveric but intact human forearms was measured using HR-pQCT. μFE bone strength was analyzed for different subvolumes, as well as for the entire 20% of the distal radius length. Specifically, failure criteria were developed that provided accurate estimates of bone strength as assessed experimentally. It was shown that distal volumes were better in predicting bone strength than more proximal ones. Clinically speaking, this would argue to move the volume of interest for the HR-pQCT measurements even more distally than currently recommended by the manufacturer. Furthermore, new parameter settings using the strain-based failure criterion are presented providing better accuracy for bone strength estimates.
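    A strain-based μFE failure criterion of the kind being recalibrated here is conventionally of the Pistoia form: scale the load of a linear model until a fixed fraction of tissue exceeds a critical strain. The sketch below uses commonly quoted placeholder thresholds (2% of tissue beyond 0.7% strain) and synthetic strains; these are not the values this study derives.

    ```python
    import numpy as np

    def failure_load(strains, applied_load, crit_strain=0.007, crit_frac=0.02):
        """Estimate failure load from a linear muFE solution: find the load
        scale at which `crit_frac` of the tissue exceeds `crit_strain`.
        Thresholds are illustrative (Pistoia-type criterion)."""
        strains = np.abs(np.asarray(strains, dtype=float))
        # Strain level exceeded by exactly crit_frac of the tissue elements.
        q = np.quantile(strains, 1.0 - crit_frac)
        return applied_load * crit_strain / q      # linear load scaling

    rng = np.random.default_rng(3)
    eps = rng.gamma(2.0, 0.001, size=100_000)       # synthetic tissue strains
    print(f"estimated failure load: {failure_load(eps, 1000.0):.0f} N")
    ```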

  17. Numerical Stochastic Homogenization Method and Multiscale Stochastic Finite Element Method - A Paradigm for Multiscale Computation of Stochastic PDEs

    SciTech Connect

    X. Frank Xu

    2010-03-30

    Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is becoming an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, and geo-systems. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work has been done on the integration of multiscale and stochastic methods, and there was no method formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM). The theory of MSFEM decomposes a boundary value problem of random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one. The slow-scale problem corresponds to common engineering modeling practice, in which fine-scale microstructure is approximated by certain effective constitutive constants, and it can be solved using standard numerical solvers. The fast-scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly met in conventional approaches, by proposing a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way to deliver the first computational tool/software on uncertainty quantification of multiscale systems. The applications of MSFEM to engineering problems will directly enhance our modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquake), and biological systems (biological tissues, bones, protein folding). Continuous development of MSFEM will

  18. Computed-tomography-based finite-element models of long bones can accurately capture strain response to bending and torsion.

    PubMed

    Varghese, Bino; Short, David; Penmetsa, Ravi; Goswami, Tarun; Hangartner, Thomas

    2011-04-29

    Finite element (FE) models of long bones constructed from computed tomography (CT) data are emerging as an invaluable tool in the field of bone biomechanics. However, the performance of such FE models is highly dependent on accurate capture of geometry and appropriate assignment of material properties. In this study, a combined numerical-experimental approach is used to compare FE-predicted surface strains with strain-gauge measurements. Thirty-six major cadaveric long bones (humerus, radius, femur and tibia), covering a wide range of bone sizes, were tested under three-point bending and torsion. The FE models were constructed from trans-axial volumetric CT scans, and the segmented bone images were corrected for partial-volume effects. The material properties (Young's modulus for cortex, density-modulus relationship for trabecular bone, and Poisson's ratio) were calibrated by minimizing the error between experiments and simulations across all bones. The R² values of the measured strains versus load under three-point bending and torsion were 0.96-0.99 and 0.61-0.99, respectively, for all bones in the dataset. The errors of the calculated FE strains in comparison to those measured using strain gauges in the mechanical tests ranged from -6% to 7% under bending and from -37% to 19% under torsion. The comparatively low errors and high correlations between the FE-predicted and experimental strains, across the various types of bones and loading conditions (bending and torsion), validate the approach to bone segmentation and the choice of material properties.
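
    A minimal sketch of the kind of material-property assignment calibrated in such studies: cortical elements receive a single Young's modulus, while trabecular elements receive a power-law modulus from CT-derived apparent density. The coefficients below are illustrative placeholders, not the values fitted in the paper.

```python
import numpy as np

# Hypothetical constants for illustration only (not the calibrated values).
E_CORTEX = 17.0e3            # cortical Young's modulus [MPa] (assumed)
A, B = 6.85e3, 1.49          # power law E = A * rho**B (assumed coefficients)

def assign_modulus(rho, is_cortical):
    """rho: apparent density [g/cm^3] per element; is_cortical: boolean mask."""
    rho = np.asarray(rho, dtype=float)
    E_trab = A * rho**B                          # trabecular power law
    return np.where(is_cortical, E_CORTEX, E_trab)
```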

  19. Hydropower and Environmental Resource Assessment (HERA): a computational tool for the assessment of the hydropower potential of watersheds considering engineering and socio-environmental aspects.

    NASA Astrophysics Data System (ADS)

    Martins, T. M.; Kelman, R.; Metello, M.; Ciarlini, A.; Granville, A. C.; Hespanhol, P.; Castro, T. L.; Gottin, V. M.; Pereira, M. V. F.

    2015-12-01

    The hydroelectric potential of a river is proportional to its head and water flows. Selecting the best development alternative for greenfield watershed projects is a difficult task, since it must balance demands for infrastructure, especially in the developing world where a large potential remains unexplored, with environmental conservation. Discussions usually diverge into antagonistic views, as in recent projects in the Amazon forest, for example. This motivates the construction of a computational tool that will support a more qualified debate regarding development/conservation options. HERA provides the optimal head-division partition of a river considering technical, economic and environmental aspects. HERA has three main components: (i) GIS pre-processing of topographic and hydrologic data; (ii) automatic engineering and equipment design and budget estimation for candidate projects; and (iii) translation of the division-partition problem into a mathematical programming model. By integrating automatic calculation with geoprocessing tools, cloud computing and optimization techniques, HERA makes it possible for countless head-partition alternatives to be systematically compared - a great advantage with respect to traditional field surveys followed by engineering design methods. Based on optimization techniques, HERA determines which hydro plants should be built, including location, design, technical data (e.g. water head, reservoir area and volume), engineering design (dam, spillways, etc.) and costs. The results can be visualized in the HERA interface and exported to GIS software, Google Earth or CAD systems. HERA has a global scope of application, since the main input data are a Digital Terrain Model and water inflows at gauging stations. The objective is to contribute to more rational decisions by presenting to the stakeholders a clear and quantitative view of the alternatives, their opportunities and threats.
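
    As a toy illustration of the head-partition selection problem (not HERA's actual mathematical programming model): if each candidate plant occupies a stretch of river and carries an estimated net benefit, choosing a non-overlapping subset of maximum total benefit reduces to weighted interval scheduling, solvable exactly by dynamic programming.

```python
import bisect

def best_partition(plants):
    """plants: list of (a_km, b_km, benefit); returns the maximum total benefit."""
    plants = sorted(plants, key=lambda p: p[1])    # sort by downstream end
    ends = [p[1] for p in plants]
    dp = [0.0] * (len(plants) + 1)
    for i, (a, b, v) in enumerate(plants):
        j = bisect.bisect_right(ends, a, 0, i)     # last plant ending at or before a
        dp[i + 1] = max(dp[i], dp[j] + v)          # skip plant i, or take it
    return dp[-1]

# Example: three candidate sites; the middle one conflicts with both others.
print(best_partition([(0, 10, 5.0), (8, 20, 9.0), (12, 30, 7.0)]))  # -> 12.0
```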

  20. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    ERIC Educational Resources Information Center

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer-assisted design/computer-assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  1. Energy Finite Element Analysis for Computing the High Frequency Vibration of the Aluminum Testbed Cylinder and Correlating the Results to Test Data

    NASA Technical Reports Server (NTRS)

    Vlahopoulos, Nickolas

    2005-01-01

    The Energy Finite Element Analysis (EFEA) is a finite element based computational method for high frequency vibration and acoustic analysis. The EFEA solves with finite elements governing differential equations for energy variables; these equations are developed from wave equations. Recently, an EFEA method for computing high frequency vibration of structures either in vacuum or in contact with a dense fluid has been presented. The presence of fluid loading has been considered through added mass and radiation damping. The EFEA developments were validated by comparing EFEA results to solutions obtained by very dense conventional finite element models and to solutions from classical techniques such as statistical energy analysis (SEA) and the modal decomposition method for bodies of revolution. EFEA results have also been compared favorably with test data for the vibration and the radiated noise generated by a large-scale submersible vehicle. The primary variable in EFEA is the energy density, time-averaged over a period and space-averaged over a wavelength. A joint matrix computed from the power transmission coefficients is utilized for coupling the energy density variables across any discontinuities, such as changes of plate thickness, plate/stiffener junctions, etc. When considering the high frequency vibration of a periodically stiffened plate or cylinder, the flexural wavelength is smaller than the interval length between two periodic stiffeners; therefore the stiffener stiffness cannot be smeared by computing an equivalent rigidity for the plate or cylinder, and the periodic stiffeners must be regarded as coupling components between periodic units. In this paper, Periodic Structure (PS) theory is utilized for computing the coupling joint matrix and for accounting for the periodicity characteristics.
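
    For reference, one common form of the EFEA governing equation (stated here as an assumption drawn from the general EFEA literature, not quoted from this paper) balances diffusion and dissipation of the averaged energy density ē at frequency ω, with group speed c_g, damping loss factor η, and input power π_in:

```latex
\[
  -\,\frac{c_g^{2}}{\eta\,\omega}\,\nabla^{2}\bar{e}
  \;+\; \eta\,\omega\,\bar{e} \;=\; \pi_{\mathrm{in}}
\]
```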

  2. A 3D finite-element computation of eddy currents and losses in the stator end laminations of large synchronous machines

    SciTech Connect

    Silva, V.C.; Meunier, G.; Foggia, A.

    1996-05-01

    Eddy current losses due to axial fluxes are computed in the stator end laminations of a salient-pole synchronous machine at the open-circuit operating condition. The calculation is carried out with the aid of a 3D finite-element package which uses a linear T-φ formulation. The domain spans a full pole pitch of the machine. The flux densities computed in the end region at points outside the stator core are compared with experimental measurements. The results and the limitations of the model are discussed.

  3. The effect of in situ/in vitro three-dimensional quantitative computed tomography image voxel size on the finite element model of human vertebral cancellous bone.

    PubMed

    Lu, Yongtao; Engelke, Klaus; Glueer, Claus-C; Morlock, Michael M; Huber, Gerd

    2014-11-01

    Quantitative computed tomography (QCT)-based finite element modeling is a promising clinical tool for the prediction of bone strength. However, QCT-based finite element models have been created from image datasets with different image voxel sizes, and the aim of this study was to investigate whether image voxel size influences the finite element models. In all, 12 thoracolumbar vertebrae were scanned prior to autopsy (in situ) using two different QCT scan protocols, which resulted in image datasets with two different voxel sizes (0.29 × 0.29 × 1.3 mm³ vs. 0.18 × 0.18 × 0.6 mm³). Eight of them were scanned after autopsy (in vitro) and the datasets were reconstructed with two voxel sizes (0.32 × 0.32 × 0.6 mm³ vs. 0.18 × 0.18 × 0.3 mm³). Finite element models with cuboid volumes of interest extracted from the vertebral cancellous part were created, and inhomogeneous bilinear bone properties were defined. Axial compression was simulated. No effect of voxel size was detected on the apparent bone mineral density for either the in situ or the in vitro case. However, the apparent modulus and yield strength showed significant differences between the two voxel-size groups (in situ and in vitro). In conclusion, the image voxel size may have to be considered when the finite element voxel modeling technique is used in clinical applications.

  4. Verification of a non-hydrostatic dynamical core using the horizontal spectral element method and vertical finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-11-01

    The non-hydrostatic (NH) compressible Euler equations for dry atmosphere were solved in a simplified two-dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. By using horizontal SEM, which decomposes the physical domain into smaller pieces with a small communication stencil, a high level of scalability can be achieved. By using vertical FDM, an easy method for coupling the dynamics and existing physics packages can be provided. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind-biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative and integral terms. For temporal integration, a time-split, third-order Runge-Kutta (RK3) integration technique was applied. The Euler equations that were used here are in flux form based on the hydrostatic pressure vertical coordinate. The equations are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate was implemented in this model. We validated the model by conducting the widely used standard tests: linear hydrostatic mountain wave, tracer advection, and gravity wave over the Schär-type mountain, as well as density current, inertia-gravity wave, and rising thermal bubble. The results from these tests demonstrated that the model using the horizontal SEM and the vertical FDM is accurate and robust provided sufficient diffusion is applied. The results with various horizontal resolutions also showed convergence of second-order accuracy due to the accuracy of the time integration scheme and that of the vertical direction, although high-order basis functions were used in the horizontal. By using the 2-D slice model, we effectively showed that the combined spatial
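
    Since the SEM basis above is nodal on Gauss-Lobatto-Legendre (GLL) points, the short sketch below computes those points and the matching quadrature weights from the standard formulas (interior nodes are the roots of P_N'; weights are 2/(N(N+1)P_N(x_i)²)). This is a generic utility, not the model's code.

```python
import numpy as np
from numpy.polynomial import legendre

def gll(N):
    """GLL nodes and quadrature weights on [-1, 1] for polynomial degree N."""
    cN = np.zeros(N + 1)
    cN[N] = 1.0                                   # P_N in the Legendre basis
    interior = legendre.legroots(legendre.legder(cN))   # roots of P_N'
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    w = 2.0 / (N * (N + 1) * legendre.legval(x, cN) ** 2)
    return x, w

x, w = gll(4)
print(x)           # 5 GLL points for degree N = 4
print(w.sum())     # weights sum to 2, the length of [-1, 1]
```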

  5. Generic element processor (application to nonlinear analysis)

    NASA Technical Reports Server (NTRS)

    Stanley, Gary

    1989-01-01

    The focus here is on one aspect of the Computational Structural Mechanics (CSM) Testbed: finite element technology. The approach involves a Generic Element Processor: a command-driven, database-oriented software shell that facilitates introduction of new elements into the testbed. This shell features an element-independent corotational capability that upgrades linear elements to geometrically nonlinear analysis, and corrects the rigid-body errors that plague many contemporary plate and shell elements. Specific elements that have been implemented in the Testbed via this mechanism include the Assumed Natural-Coordinate Strain (ANS) shell elements, developed with Professor K. C. Park (University of Colorado, Boulder), a new class of curved hybrid shell elements, developed by Dr. David Kang of LPARL (formerly a student of Professor T. Pian), other shell and solid hybrid elements developed by NASA personnel, and recently a repackaged version of the workhorse shell element used in the traditional STAGS nonlinear shell analysis code. The presentation covers: (1) user and developer interfaces to the generic element processor, (2) an explanation of the built-in corotational option, (3) a description of some of the shell-elements currently implemented, and (4) application to sample nonlinear shell postbuckling problems.

  6. Biomechanical aspects of segmented arch mechanics combined with power arm for controlled anterior tooth movement: A three-dimensional finite element study

    PubMed Central

    Ozaki, Hiroya; Tominaga, Jun-ya; Hamanaka, Ryo; Sumi, Mayumi; Chiang, Pao-Chang; Tanaka, Motohiro; Koga, Yoshiyuki

    2015-01-01

    The purpose of this study was to determine the optimal length of power arms for achieving controlled anterior tooth movement in segmented arch mechanics combined with a power arm. A three-dimensional finite element method was applied to simulate en masse anterior tooth retraction in segmented power arm mechanics. The type of tooth movement, namely the location of the center of rotation of the maxillary central incisor in association with power arm length, was calculated after the retraction force was applied. When a 0.017 × 0.022-in archwire was inserted into the 0.018-in slot bracket, bodily movement was obtained at a power arm length of 9.1 mm, namely at the level of 1.8 mm above the center of resistance. When a 0.018 × 0.025-in full-size archwire was used, bodily movement was produced at a power arm length of 7.0 mm, namely at the level of 0.3 mm below the center of resistance. Segmented arch mechanics required a shorter power arm for achieving any type of controlled anterior tooth movement compared to sliding mechanics; this space-closing mechanics could therefore be widely applied even for patients whose gingivobuccal fold is shallow. The segmented arch mechanics combined with a power arm could provide a moment-to-force ratio sufficient for controlled anterior tooth movement without generating friction or vertical forces when the retraction force is applied parallel to the occlusal plane. The segmented power arm mechanics therefore has a simple appliance design and allows more efficient and controllable tooth movement. PMID:25610497

  7. Biomechanical aspects of segmented arch mechanics combined with power arm for controlled anterior tooth movement: A three-dimensional finite element study.

    PubMed

    Ozaki, Hiroya; Tominaga, Jun-Ya; Hamanaka, Ryo; Sumi, Mayumi; Chiang, Pao-Chang; Tanaka, Motohiro; Koga, Yoshiyuki; Yoshida, Noriaki

    2015-01-01

    The purpose of this study was to determine the optimal length of power arms for achieving controlled anterior tooth movement in segmented arch mechanics combined with a power arm. A three-dimensional finite element method was applied to simulate en masse anterior tooth retraction in segmented power arm mechanics. The type of tooth movement, namely the location of the center of rotation of the maxillary central incisor in association with power arm length, was calculated after the retraction force was applied. When a 0.017 × 0.022-in archwire was inserted into the 0.018-in slot bracket, bodily movement was obtained at a power arm length of 9.1 mm, namely at the level of 1.8 mm above the center of resistance. When a 0.018 × 0.025-in full-size archwire was used, bodily movement was produced at a power arm length of 7.0 mm, namely at the level of 0.3 mm below the center of resistance. Segmented arch mechanics required a shorter power arm for achieving any type of controlled anterior tooth movement compared to sliding mechanics; this space-closing mechanics could therefore be widely applied even for patients whose gingivobuccal fold is shallow. The segmented arch mechanics combined with a power arm could provide a moment-to-force ratio sufficient for controlled anterior tooth movement without generating friction or vertical forces when the retraction force is applied parallel to the occlusal plane. The segmented power arm mechanics therefore has a simple appliance design and allows more efficient and controllable tooth movement.

  8. Effect of damper on overall and blade-element performance of a compressor rotor having a tip speed of 1151 feet per second and an aspect ratio of 3.6

    NASA Technical Reports Server (NTRS)

    Lewis, G. W.; Hager, R. D.

    1974-01-01

    The overall and blade-element performance of two configurations of a moderately high aspect ratio transonic compressor rotor are presented. The subject rotor has conventional blade dampers; its performance is compared with a rotor utilizing dual wire friction dampers. At design speed the subject rotor achieved a pressure ratio of 1.52 and an efficiency of 0.89 at a near-design weight flow of 72.1 pounds per second. The rotor with wire dampers gave consistently higher pressure ratios at each speed, but efficiencies for the two rotors were about the same. Stall margin for the subject rotor was 20.4 percent, but only 4.0 percent for the wire-damped rotor.

  9. Educational aspects of molecular simulation

    NASA Astrophysics Data System (ADS)

    Allen, Michael P.

    This article addresses some aspects of teaching simulation methods to undergraduates and graduate students. Simulation is increasingly a cross-disciplinary activity, which means that the students who need to learn about simulation methods may have widely differing backgrounds. Also, they may have a wide range of views on what constitutes an interesting application of simulation methods. Almost always, a successful simulation course includes an element of practical, hands-on activity: a balance always needs to be struck between treating the simulation software as a 'black box', and becoming bogged down in programming issues. With notebook computers becoming widely available, students often wish to take away the programs to run themselves, and access to raw computer power is not the limiting factor that it once was; on the other hand, the software should be portable and, if possible, free. Examples will be drawn from the author's experience in three different contexts. (1) An annual simulation summer school for graduate students, run by the UK CCP5 organization, in which practical sessions are combined with an intensive programme of lectures describing the methodology. (2) A molecular modelling module, given as part of a doctoral training centre in the Life Sciences at Warwick, for students who might not have a first degree in the physical sciences. (3) An undergraduate module in Physics at Warwick, also taken by students from other disciplines, teaching high performance computing, visualization, and scripting in the context of a physical application such as Monte Carlo simulation.

  10. Coronary arterial dynamics computation with medical-image-based time-dependent anatomical models and element-based zero-stress state estimates

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Torii, Ryo; Takagi, Hirokazu; Tezduyar, Tayfun E.; Xu, Xiao Y.

    2014-10-01

    We propose a method for coronary arterial dynamics computation with medical-image-based time-dependent anatomical models. The objective is to improve the computational analysis of coronary arteries for better understanding of the links between the atherosclerosis development and mechanical stimuli such as endothelial wall shear stress and structural stress in the arterial wall. The method has two components. The first one is element-based zero-stress (ZS) state estimation, which is an alternative to prestress calculation. The second one is a "mixed ZS state" approach, where the ZS states for different elements in the structural mechanics mesh are estimated with reference configurations based on medical images coming from different instants within the cardiac cycle. We demonstrate the robustness of the method in a patient-specific coronary arterial dynamics computation where the motion of a thin strip along the arterial surface and two cut surfaces at the arterial ends is specified to match the motion extracted from the medical images.

  11. Field, model, and computer simulation study of some aspects of the origin and distribution of Colorado Plateau-type uranium deposits

    USGS Publications Warehouse

    Ethridge, F.G.; Sunada, D.K.; Tyler, Noel; Andrews, Sarah

    1982-01-01

    Numerous hypotheses have been proposed to account for the nature and distribution of tabular uranium and vanadium-uranium deposits of the Colorado Plateau. In one of these hypotheses it is suggested that the deposits resulted from geochemical reactions at the interface between a relatively stagnant groundwater solution and a dynamic, ore-carrying groundwater solution which permeated the host sandstones (Shawe, 1956; Granger, et al., 1961; Granger, 1968, 1976; and Granger and Warren, 1979). The study described here was designed to investigate some aspects of this hypothesis, particularly the nature of fluid flow in sands and sandstones, the nature and distribution of deposits, and the relations between the deposits and the host sandstones. The investigation, which was divided into three phases, involved physical model, field, and computer simulation studies. During the initial phase of the investigation, physical model studies were conducted in porous-media flumes. These studies verified the fact that humic acid precipitates could form at the interface between a humic acid solution and a potassium aluminum sulfate solution and that the nature and distribution of these precipitates were related to flow phenomena and to the nature and distribution of the host porous-media. During the second phase of the investigation field studies of permeability and porosity patterns in Holocene stream deposits were investigated and the data obtained were used to design more realistic porous media models. These model studies, which simulated actual stream deposits, demonstrated that precipitates possess many characteristics, in terms of their nature and relation to host sandstones, that are similar to ore deposits of the Colorado Plateau. The final phase of the investigation involved field studies of actual deposits, additional model studies in a large indoor flume, and computer simulation studies. The field investigations provided an up-to-date interpretation of the depositional

  12. Administrative Aspects of Human Experimentation.

    ERIC Educational Resources Information Center

    Irvine, George W.

    1992-01-01

    The following administrative aspects of scientific experimentation with human subjects are discussed: the definition of human experimentation; the distinction between experimentation and treatment; investigator responsibility; documentation; the elements and principles of informed consent; and the administrator's role in establishing and…

  13. The Impact of School Wide and Classroom Elements on Instructional Computing: A Case Study. Research Report #4.

    ERIC Educational Resources Information Center

    de Acosta, Martha

    This paper utilizes case study findings of the implementation of educational computing in two schools, one elementary school and one fifth- through sixth-grade school, to reflect on recurrent patterns that account for the slow pace of change in instruction. In particular the study focused on the structural arrangements of teachers' work and the…

  14. 3-D magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on SMP computers - Part I: forward problem and parameter Jacobians

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    We have developed an algorithm, which we call HexMT, for 3-D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permit incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used throughout, including the forward solution, parameter Jacobians and model parameter update. In Part I, the forward simulator and Jacobian calculations are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequencies or small material admittivities, the E-field requires divergence correction. With the help of Hodge decomposition, the correction may be applied in one step after the forward solution is calculated. This allows accurate E-field solutions in dielectric air. The system matrix factorization and source vector solutions are computed using the MKL PARDISO library, which shows good scalability through 24 processor cores. The factorized matrix is used to calculate the forward response as well as the Jacobians of electromagnetic (EM) field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure, several synthetic topographic models and the natural topography of Mount Erebus in Antarctica. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of EM waves normal to the slopes at high frequencies. Run-time tests of the parallelized algorithm indicate that for meshes as large as 176 × 176 × 70 elements, MT forward responses and Jacobians can be calculated in ~1.5 hr per frequency. Together with an efficient inversion parameter step described in Part II, MT inversion problems of 200-300 stations are computable with total run times

  15. Finite-element nonlinear transient response computer programs PLATE 1 and CIVM-PLATE 1 for the analysis of panels subjected to impulse or impact loads

    NASA Technical Reports Server (NTRS)

    Spilker, R. L.; Witmer, E. A.; French, S. E.; Rodal, J. J. A.

    1980-01-01

    Two computer programs are described for predicting the transient large-deflection elastic-viscoplastic responses of thin, single-layer, initially flat, unstiffened or integrally stiffened, Kirchhoff-Love ductile metal panels. The PLATE 1 program pertains to structural responses produced by prescribed externally applied transient loading or prescribed initial velocity distributions. The collision-imparted velocity method (CIVM-PLATE 1) program concerns structural responses produced by impact of an idealized nondeformable fragment. Finite elements are used to represent the structure in both programs. Strain hardening and strain rate effects of initially isotropic material are considered.

  16. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  17. Robust and portable capacity computing method for many finite element analyses of a high-fidelity crustal structure model aimed for coseismic slip estimation

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hirahara, Kazuro; Hyodo, Mamoru; Hori, Takane; Hori, Muneo

    2016-09-01

    Computation of many Green's functions (GFs) in finite element (FE) analyses of crustal deformation is an essential technique in inverse analyses of coseismic slip estimations. In particular, analysis based on a high-resolution FE model (high-fidelity model) is expected to contribute to the construction of a community standard FE model and benchmark solution. Here, we propose a naive but robust and portable capacity computing method to compute many GFs using a high-fidelity model, assuming that various types of PC clusters are used. The method is based on the master-worker model, implemented using the Message Passing Interface (MPI), to perform robust and efficient input/output operations. The method was applied to numerical experiments of coseismic slip estimation in the Tohoku region of Japan; comparison of the estimated results with those generated using lower-fidelity models revealed the benefits of using a high-fidelity FE model in coseismic slip distribution estimation. Additionally, the proposed method computes several hundred GFs more robustly and efficiently than methods without the master-worker model and MPI.
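
    The master-worker pattern described above is straightforward to sketch with mpi4py: rank 0 hands out one task at a time and collects results, while workers loop until told to stop. This is a generic task farm assuming more tasks than workers, with a placeholder "solve"; it is not the authors' implementation.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TAG_WORK, TAG_STOP = 1, 2
N_TASKS = 100                                    # e.g. one task per Green's function

if rank == 0:                                    # master: hand out tasks on demand
    tasks = list(range(N_TASKS))                 # assumes more tasks than workers
    done = 0
    for worker in range(1, size):                # prime every worker with one task
        comm.send(tasks.pop(), dest=worker, tag=TAG_WORK)
    while done < N_TASKS:
        status = MPI.Status()
        result = comm.recv(source=MPI.ANY_SOURCE, status=status)
        done += 1
        dest = status.Get_source()
        tag = TAG_WORK if tasks else TAG_STOP
        comm.send(tasks.pop() if tasks else None, dest=dest, tag=tag)
else:                                            # worker: solve until told to stop
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(task * task, dest=0)           # stand-in for one FE solve
```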

  18. Object-oriented design and implementation of CFDLab: a computer-assisted learning tool for fluid dynamics using dual reciprocity boundary element methodology

    NASA Astrophysics Data System (ADS)

    Friedrich, J.

    1999-08-01

    As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and to facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising for helping both teachers and learners in this difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their flexibility, maintainability and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, an educational PC program for university level called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme; thanks to its simple pre- and post-processing, a variety of relevant two-dimensional governing equations (Laplace, Poisson, diffusion, transient convection-diffusion) can be handled on personal computers.

  19. Survey of Unsteady Computational Aerodynamics for Horizontal Axis Wind Turbines

    NASA Astrophysics Data System (ADS)

    Frunzulicǎ, F.; Dumitrescu, H.; Cardoş, V.

    2010-09-01

    We present a short review of aerodynamic computational models for horizontal axis wind turbines (HAWT). The models presented have various levels of complexity for calculating aerodynamic loads on the rotor of a HAWT, starting with the simplest blade element momentum (BEM) model and ending with the complex model based on the Navier-Stokes equations. We also present some computational aspects of these models.

  20. Recommended data elements for the descriptive cataloging of computer-based educational materials in the health sciences.

    PubMed

    Lyon-Hartmann, B; Goldstein, C M

    1978-01-01

    A large part of the mission of the National Library of Medicine is to collect, index, and disseminate the world's biomedical literature. Until recently, this related only to serial and monographic material, but as new forms of information appear, responsibility for their bibliographic control also must be assumed by the National Library of Medicine. This paper briefly describes the type of information that will be necessary before descriptive cataloging of computer-based educational materials can be attempted. PMID:10306980

  1. Computation of Mechanical Properties of a Poly-(Styrene-Butadiene-Styrene) Copolymer using a Mixed Finite Element Approach

    NASA Astrophysics Data System (ADS)

    Baeurle, Stephan A.; Fredrickson, Glenn H.; Gusev, Andrei A.

    2004-03-01

    Despite several decades of research, the nature of linear elasticity in microphase-separated copolymers with chemically connected glass-rubber phases is still not fully understood. In this presentation we discuss the results of an investigation of the linear elastic properties of a poly-(styrene-butadiene-styrene) triblock copolymer using a mixed finite element approach. The technique permits dealing with fully incompressible as well as nearly incompressible phases, as they occur in this two-component system. Strikingly, and contrary to common belief, we find that the continuum description is accurate and that no additional detailed molecular information is needed to reproduce the available linear elastic experimental data. The anomalous Poisson's ratio of the polybutadiene phase of 0.37, determined by previous authors and attributed to molecular characteristics of the polybutadiene phase, is found to be related to end-effect errors made in their tensile and torsional experiments. We also test the suitability of several semi-phenomenological models in reproducing the experimental measurements. We find that some of the methods provide reliable results with accuracy comparable to our mixed finite element approach.

  2. TURTLE with MAD input (Trace Unlimited Rays Through Lumped Elements) -- A computer program for simulating charged particle beam transport systems and DECAY TURTLE including decay calculations

    SciTech Connect

    Carey, D.C.

    1999-12-09

    TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT; for the convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped element approximation, described below, precludes the inclusion of the effects of conventional local geometric aberrations (due to large phase space) of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.
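
    To make the lumped-element idea concrete, here is a toy first-order ray trace in one transverse plane, with each beamline element represented by a 2×2 transfer matrix acting on (x, x'). Element parameters are invented for illustration; TURTLE itself tracks full phase space and includes higher-order effects.

```python
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L [m]."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole with focal length f [m] (focusing for f > 0)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

beamline = [drift(2.0), thin_quad(1.5), drift(2.0)]   # made-up lattice

rng = np.random.default_rng(0)
rays = rng.normal(scale=[1e-3, 1e-4], size=(10000, 2))  # (x [m], x' [rad])
for M in beamline:
    rays = rays @ M.T                                    # trace element by element
print(rays.std(axis=0))                                  # beam size at the exit
```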

  3. Computer simulation of stress distribution in the metatarsals at different inversion landing angles using the finite element method.

    PubMed

    Gu, Y D; Ren, X J; Li, J S; Lake, M J; Zhang, Q Y; Zeng, Y J

    2010-06-01

    Metatarsal fracture is one of the most common foot injuries, particularly in athletes and soldiers, and is often associated with landing in inversion. An improved understanding of deformation of the metatarsals under inversion landing conditions is essential in the diagnosis and prevention of metatarsal injuries. In this work, a detailed three-dimensional (3D) finite element foot model was developed to investigate the effect of inversion positions on stress distribution and concentration within the metatarsals. The predicted plantar pressure distribution showed good agreement with data from controlled biomechanical tests. The deformation and stresses of the metatarsals during landing at different inversion angles (normal landing, 10 degree inversion and 20 degree inversion angles) were comparatively studied. The results showed that in the lateral metatarsals stress increased while in the medial metatarsals stress decreased with the angle of inversion. The peak stress point was found to be near the proximal part of the fifth metatarsal, which corresponds with reported clinical observations of metatarsal injuries.

  4. Computational simulation of the bone remodeling using the finite element method: an elastic-damage theory for small displacements

    PubMed Central

    2013-01-01

    Background: The ability of bone to resist damage by repairing itself and adapting to environmental conditions is its most important property. These adaptive changes are regulated by a physiological process commonly called bone remodeling. A better understanding of this process requires applying the theory of elastic damage, under the hypothesis of small displacements, to a bone structure and observing its mechanical behavior. Results: The purpose of the present study is to simulate a two-dimensional model of a proximal femur taking into consideration elastic damage and mechanical stimulus. We present a mathematical model based on a system of nonlinear ordinary differential equations and develop the variational formulation for the mechanical problem. We then implement the mathematical model in a finite element algorithm to investigate the effect of the damage. Conclusion: The results are consistent with the existing literature, which shows that bone stiffness drops in damaged bone structure under mechanical loading. PMID:23663260

  5. On prediction of the strength levels and failure patterns of human vertebrae using quantitative computed tomography (QCT)-based finite element method.

    PubMed

    Mirzaei, Majid; Zeinali, Ahad; Razmjoo, Arash; Nazemi, Majid

    2009-08-01

    This paper presents an effective patient-specific approach for the prediction of failure initiation and growth in human vertebrae using the general framework of the quantitative computed tomography (QCT)-based finite element method (FEM). The studies were carried out on 13 vertebrae (lumbar and thoracic), excised from 3 cadavers with an average age of 42 years. Initially, 4 samples were QCT scanned and the images were directly converted into voxel-based 3D finite element models for linear and nonlinear analyses. The equivalent plastic strains obtained from the nonlinear analyses were used to predict the occurrence of local failures and the development of the failure patterns. In the linear analyses, the strain energy density measure was used to identify the critical elements and predict the failure patterns. Subsequently, the samples were destructively tested in uniaxial compression and the experimental load-displacement diagrams were obtained. The plain radiographic images of the tested samples were also examined for observation of the failure patterns. Next, the presence of osteolytic defects in vertebrae was simulated by creating artificial cavities within the 9 remaining samples using a computer numerical control (CNC) milling machine. The same protocol was followed for scanning, modeling, and destructive testing of these samples. A strong correlation was found between the predicted and measured strengths. Finally, a typical vertebroplasty treatment was simulated by injection of low-viscosity bone cement within 3 compressed samples. The failure patterns and the associated load levels for these samples were also predicted using the QCT voxel-based FEM. PMID:19457486

  6. Discrete Element Modeling

    SciTech Connect

    Morris, J; Johnson, S

    2007-12-03

    The Distinct Element Method (also frequently referred to as the Discrete Element Method) (DEM) is a Lagrangian numerical technique where the computational domain consists of discrete solid elements which interact via compliant contacts. This can be contrasted with Finite Element Methods where the computational domain is assumed to represent a continuum (although many modern implementations of the FEM can accommodate some Distinct Element capabilities). Often the terms Discrete Element Method and Distinct Element Method are used interchangeably in the literature, although Cundall and Hart (1992) suggested that Discrete Element Methods should be a more inclusive term covering Distinct Element Methods, Displacement Discontinuity Analysis and Modal Methods. In this work, DEM specifically refers to the Distinct Element Method, where the discrete elements interact via compliant contacts, in contrast with Displacement Discontinuity Analysis where the contacts are rigid and all compliance is taken up by the adjacent intact material.

  7. The Geometry of the Kepler orbit / the perturbed Kepler orbit on Maupertuis Manifolds by minimizing the Scalar of the Riemann Curvature Tensor, aspects of the Kustaanheimo-Stiefel elements in Satellite Geodesy

    NASA Astrophysics Data System (ADS)

    Grafarend, Erik W.; You, Rey-Jer

    2013-04-01

    D. Hilbert and A. Einstein in 1916 derived the field equations of gravitation from the functional "Scalar Curvature of the Riemann Curvature Tensor" in spacetime. Ever since, physicists as well as geodesists have tried to derive the Kepler orbit / the perturbed Kepler orbit from the variational concept of minimizing the spatial scalar curvature of the Riemann curvature tensor. The Maupertuis Principle of Least Action was the basis for deriving the Newton equations of motion of a mass point in the gravitational force field, interpreted as a geodesic flow in the Maupertuis Manifold. The Maupertuis Manifold is a conformally flat three-dimensional manifold with the gravitational potential as the factor of conformality. Here we derive the Kepler orbit / the perturbed Kepler orbit from the immersion of different types of Maupertuis Manifolds. Finally, we establish the link to Kustaanheimo-Stiefel elements in orbit dynamics. An example is the orbit computation of GPS satellites by first-order perturbation theory.
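
    For orientation, a hedged sketch of the Jacobi-Maupertuis construction the abstract alludes to: for a unit-mass point with total energy E in a potential V, the Newtonian trajectories are geodesics of the conformally flat metric

```latex
\[
  ds^{2} \;=\; 2\,\bigl(E - V(\mathbf{q})\bigr)\,\delta_{ij}\,dq^{i}\,dq^{j},
  \qquad
  V(\mathbf{q}) \;=\; -\,\frac{GM}{\lvert\mathbf{q}\rvert}
  \ \text{for the Kepler problem.}
\]
```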

  8. A finite-element approach to the direct computation of relative cardiovascular pressure from time-resolved MR velocity data.

    PubMed

    Krittian, Sebastian B S; Lamata, Pablo; Michler, Christian; Nordsletten, David A; Bock, Jelena; Bradley, Chris P; Pitcher, Alex; Kilner, Philip J; Markl, Michael; Smith, Nic P

    2012-07-01

    The evaluation of cardiovascular velocities, their changes through the cardiac cycle and the consequent pressure gradients has the capacity to improve understanding of subject-specific blood flow in relation to adjacent soft tissue movements. Magnetic resonance time-resolved 3D phase contrast velocity acquisitions (4D flow) represent an emerging technology capable of measuring the cyclic changes of large scale, multi-directional, subject-specific blood flow. A subsequent evaluation of pressure differences in enclosed vascular compartments is a further step which is currently not directly available from such data. The focus of this work is to address this deficiency through the development of a novel simulation workflow for the direct computation of relative cardiovascular pressure fields. Input information is provided by enhanced 4D flow data and derived MR domain masking. The underlying methodology shows numerical advantages in terms of robustness, global domain composition, the isolation of local fluid compartments and a treatment of boundary conditions. This approach is demonstrated across a range of validation examples which are compared with analytic solutions. Four subject-specific test cases are subsequently run, showing good agreement with previously published calculations of intra-vascular pressure differences. The computational engine presented in this work contributes to non-invasive access to relative pressure fields, incorporates the effects of both blood flow acceleration and viscous dissipation, and enables enhanced evaluation of cardiovascular blood flow.
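
    The direct computation of relative pressure rests on rearranging the incompressible Navier-Stokes momentum balance so that the pressure gradient is expressed entirely in terms of the measured velocity field u (a standard form; the paper's finite element discretization is not reproduced here):

```latex
\[
  \nabla p \;=\; -\,\rho\left(\frac{\partial \mathbf{u}}{\partial t}
      + (\mathbf{u}\cdot\nabla)\,\mathbf{u}\right)
      + \mu\,\nabla^{2}\mathbf{u}
\]
```

    Projecting this gradient field onto a finite element space then yields the relative pressure up to an additive constant.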

  9. Female Computer

    NASA Technical Reports Server (NTRS)

    1964-01-01

    Melba Roy heads the group of NASA mathematicians, known as 'computers,' who track the Echo satellites. Roy's computations help produce the orbital element timetables by which millions can view the satellite from Earth as it passes overhead.

  10. A computational study of the effects of chemical kinetics on high frequency combustion instability in a single-element rocket combustor

    NASA Astrophysics Data System (ADS)

    Whiteman, Alexander Thomas

    The objective of this research is to determine and analyze the effect a significant change in the speed of reaction (chemical kinetics) has on combustion instability in a single-element rocket combustor. This is carried out using computational fluid dynamics (CFD) and is a continuation of previous work on CFD modeling of combustion instability. Specifically, the goal is to determine whether the combustion will exhibit the same, greater, or less instability with a significant decrease in the speed of reaction in the combustor. Other flow characteristics such as temperature, vorticity, and Rayleigh index are also analyzed and compared with those obtained with the original reaction speed. The combustor modeled is a single-element, longitudinal rocket combustor with a choked exhaust nozzle. The fuel is JP-8 and decomposed hydrogen peroxide is used as the oxidizer. The propellants are introduced to the combustion chamber coaxially and are non-premixed. Due to time and computational constraints, a number of simplifications are made to the computational model, including 2D axisymmetric modeling, a single-step global combustion model, and neglect of two-phase effects. The results obtained show that the instability is slightly decreased by the slower chemical kinetics. The results also show that a number of different and often competing phenomena contribute to the instability of the flow. Overall, the large change in chemical kinetics did not have a great effect on the stability of the combustion, although some flow characteristics were greatly changed. This research indicates that there are many contributing factors to combustion instability and that CFD can help determine which factors are of greatest import for a given combustor.

  11. Progressive Damage Analysis of Laminated Composite (PDALC)-A Computational Model Implemented in the NASA COMET Finite Element Code

    NASA Technical Reports Server (NTRS)

    Lo, David C.; Coats, Timothy W.; Harris, Charles E.; Allen, David H.

    1996-01-01

    A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete list of all testbed processors and inputs that are required for this analysis are included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occur during the load history. Residual strength predictions made with this information compared favorably with experimental measurements.

  12. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems-theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to variogram analysis, that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that the Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
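
    The variogram analogy at the heart of VARS can be illustrated in a few lines: γ(h) = ½ E[(y(x+h) − y(x))²] estimated along one factor of a toy response surface. This is only the underlying idea, not the STAR-VARS algorithm or its star-based sampling.

```python
import numpy as np

def model(x):
    """Toy two-factor response surface on [0, 1]^2 (illustrative only)."""
    return np.sin(6 * x[..., 0]) + 0.5 * x[..., 1] ** 2

def variogram(h, n=20000, seed=1):
    """Directional variogram estimate along factor 1 at lag h."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0 - h, size=(n, 2))   # base points
    xh = x.copy()
    xh[:, 0] += h                                # perturb only the first factor
    return 0.5 * np.mean((model(xh) - model(x)) ** 2)

for h in (0.05, 0.1, 0.3):
    print(h, variogram(h))                       # sensitivity across lag scales
```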

  13. Combined magnetic vector-scalar potential finite element computation of 3D magnetic field and performance of modified Lundell alternators in Space Station applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Ren H.

    1991-01-01

    A method of combined use of magnetic vector potential (MVP) based finite element (FE) formulations and magnetic scalar potential (MSP) based FE formulations for computation of three-dimensional (3D) magnetostatic fields is developed. This combined MVP-MSP 3D-FE method leads to considerable reduction by nearly a factor of 3 in the number of unknowns in comparison to the number of unknowns which must be computed in global MVP based FE solutions. This method allows one to incorporate portions of iron cores sandwiched in between coils (conductors) in current-carrying regions. Thus, it greatly simplifies the geometries of current carrying regions (in comparison with the exclusive MSP based methods) in electric machinery applications. A unique feature of this approach is that the global MSP solution is single valued in nature, that is, no branch cut is needed. This is again a superiority over the exclusive MSP based methods. A Newton-Raphson procedure with a concept of an adaptive relaxation factor was developed and successfully used in solving the 3D-FE problem with magnetic material anisotropy and nonlinearity. Accordingly, this combined MVP-MSP 3D-FE method is most suited for solution of large scale global type magnetic field computations in rotating electric machinery with very complex magnetic circuit geometries, as well as nonlinear and anisotropic material properties.

  14. Delaunay triangulation and computational fluid dynamics meshes

    NASA Technical Reports Server (NTRS)

    Posenau, Mary-Anne K.; Mount, David M.

    1992-01-01

    In aerospace computational fluid dynamics (CFD) calculations, the Delaunay triangulation of suitable quadrilateral meshes can lead to unsuitable triangulated meshes. Here, we present case studies that illustrate the limitations of structured grid generation methods, which produce points in a curvilinear coordinate system, for subsequent triangulation in CFD applications. We discuss conditions under which meshes of quadrilateral elements may not produce a Delaunay triangulation suitable for CFD calculations, particularly with regard to high-aspect-ratio, skewed quadrilateral elements.
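
    The failure mode is easy to reproduce: Delaunay-triangulating the nodes of a strongly stretched structured grid, as in a boundary-layer mesh, produces highly skewed triangles. Below is a small sketch with SciPy; the grid and the edge-length quality measure are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

# Structured grid with rows strongly clustered toward y = 0 (boundary layer).
nx, ny = 20, 10
xs = np.linspace(0.0, 1.0, nx)
ys = np.linspace(0.0, 1.0, ny) ** 4 * 0.01
pts = np.array([(x, y) for y in ys for x in xs])

tri = Delaunay(pts)
a, b, c = (pts[tri.simplices[:, i]] for i in range(3))
edges = np.stack([np.linalg.norm(b - a, axis=1),
                  np.linalg.norm(c - b, axis=1),
                  np.linalg.norm(a - c, axis=1)])
# Crude shape-quality indicator: longest over shortest edge per triangle.
print("worst edge-length ratio:", (edges.max(axis=0) / edges.min(axis=0)).max())
```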

  15. Advanced computer technology - An aspect of the Terminal Configured Vehicle program. [air transportation capacity, productivity, all-weather reliability and noise reduction improvements

    NASA Technical Reports Server (NTRS)

    Berkstresser, B. K.

    1975-01-01

    NASA is conducting a Terminal Configured Vehicle program to provide improvements in the air transportation system such as increased system capacity and productivity, increased all-weather reliability, and reduced noise. A typical jet transport has been equipped with highly flexible digital display and automatic control equipment to study operational techniques for conventional takeoff and landing aircraft. The present airborne computer capability of this aircraft employs a multiple computer simple redundancy concept. The next step is to proceed from this concept to a reconfigurable computer system which can degrade gracefully in the event of a failure, adjust critical computations to remaining capacity, and reorder itself, in the case of transients, to the highest order of redundancy and reliability.

  16. Computer simulation of stress distribution in the metatarsals at different inversion landing angles using the finite element method

    PubMed Central

    Gu, Y. D.; Ren, X. J.; Li, J. S.; Lake, M. J.; Zhang, Q. Y.

    2009-01-01

    Metatarsal fracture is one of the most common foot injuries, particularly in athletes and soldiers, and is often associated with landing in inversion. An improved understanding of deformation of the metatarsals under inversion landing conditions is essential in the diagnosis and prevention of metatarsal injuries. In this work, a detailed three-dimensional (3D) finite element foot model was developed to investigate the effect of inversion positions on stress distribution and concentration within the metatarsals. The predicted plantar pressure distribution showed good agreement with data from controlled biomechanical tests. The deformation and stresses of the metatarsals during landing at different inversion angles (normal landing, 10 degree inversion and 20 degree inversion angles) were comparatively studied. The results showed that in the lateral metatarsals stress increased while in the medial metatarsals stress decreased with the angle of inversion. The peak stress point was found to be near the proximal part of the fifth metatarsal, which corresponds with reported clinical observations of metatarsal injuries. PMID:19685241

  17. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    SciTech Connect

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space approaches such as finite-difference (FD) and finite-element (FE) methods have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is
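
    The key property exploited by partition-of-unity enrichment can be demonstrated in a few lines: because 1-D hat functions sum to one everywhere, multiplying them by a known localized function yields a basis that reproduces that function exactly. This is only a toy illustration of the principle, not the paper's 3-D electronic-structure machinery.

    ```python
    import numpy as np

    nodes = np.linspace(0.0, 10.0, 11)          # uniform 1-D mesh
    h = nodes[1] - nodes[0]

    def hat(i, x):
        # Piecewise-linear FE shape function attached to node i.
        return np.clip(1.0 - np.abs(x - nodes[i]) / h, 0.0, None)

    x = np.linspace(0.0, 10.0, 501)
    pu = sum(hat(i, x) for i in range(len(nodes)))
    print(np.allclose(pu, 1.0))                 # True: hats form a partition of unity

    u = np.exp(-np.abs(x - 5.0))                # model localized "atomic" state
    enriched = sum(hat(i, x) * u for i in range(len(nodes)))
    print(np.allclose(enriched, u))             # True: enriched basis captures u exactly
    ```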

  18. Properties and reactivity patterns of AsP(3): an experimental and computational study of group 15 elemental molecules.

    PubMed

    Cossairt, Brandi M; Cummins, Christopher C

    2009-10-28

    Facile synthetic access to the isolable, thermally robust AsP(3) molecule has allowed for a thorough study of its physical properties and reaction chemistry with a variety of transition-metal and organic fragments. The electronic properties of AsP(3) in comparison with P(4) are revealed by DFT and atoms in molecules (AIM) approaches and are discussed in relation to the observed electrochemical profiles and the phosphorus NMR properties of the two molecules. An investigation of the nucleus independent chemical shifts revealed that AsP(3) retains spherical aromaticity. The thermodynamic properties of AsP(3) and P(4) are described. The reaction types explored in this study include the thermal decomposition of the AsP(3) tetrahedron to its elements, the synthesis and structural characterization of [(AsP(3))FeCp*(dppe)][BPh(4)] (dppe = 1,2-bis(diphenylphosphino)ethane), 1, selective single As-P bond cleavage reactions, including the synthesis and structural characterization of AsP(3)(P(N((i)Pr)(2))N(SiMe(3))(2))(2), 2, and activations of AsP(3) by reactive early transition-metal fragments including Nb(H)(eta(2)-(t)Bu(H)C=NAr)(N[CH(2)(t)Bu]Ar)(2) and Mo(N[(t)Bu]Ar)(3) (Ar = 3,5-Me(2)C(6)H(3)). In the presence of reducing equivalents, AsP(3) was found to allow access to [Na][E(3)Nb(ODipp)(3)] (Dipp = 2,6-diisopropylphenyl) complexes (E = As or P) which themselves allow access to mixtures of As(n)P(4-n) (n = 1-4).

  19. POTHMF: A program for computing potential curves and matrix elements of the coupled adiabatic radial equations for a hydrogen-like atom in a homogeneous magnetic field

    NASA Astrophysics Data System (ADS)

    Chuluunbaatar, O.; Gusev, A. A.; Gerdt, V. P.; Rostovtsev, V. A.; Vinitsky, S. I.; Abrashkevich, A. G.; Kaschiev, M. S.; Serov, V. V.

    2008-02-01

    A FORTRAN 77 program is presented which calculates, with relative machine precision, potential curves and matrix elements of the coupled adiabatic radial equations for a hydrogen-like atom in a homogeneous magnetic field. The potential curves are eigenvalues corresponding to the angular oblate spheroidal functions that compose the adiabatic basis, which depends on the radial variable as a parameter. The matrix elements of radial coupling are integrals in the angular variables of the following two types: products of angular functions with the first derivatives of angular functions with respect to the parameter, and products of the first derivatives of angular functions with respect to the parameter. The program also calculates the angular part of the dipole transition matrix elements (in the length form), expressed as integrals in the angular variables involving the product of a dipole operator and angular functions. Moreover, the program calculates asymptotic regular and irregular matrix solutions of the coupled adiabatic radial equations at the end of the interval in the radial variable, needed for solving a multi-channel scattering problem by the generalized R-matrix method. Potential curves and radial matrix elements computed by the POTHMF program can be used for solving bound state and multi-channel scattering problems. As a test deck, the program is applied to the calculation of the energy values, a short-range reaction matrix and corresponding wave functions with the help of the KANTBP program. Benchmark calculations for the known photoionization cross-sections are presented. Program summary: Program title: POTHMF. Catalogue identifier: AEAA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 8123. No. of bytes in distributed program, including test data
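
    To make the two coupling types concrete, here is a hedged numerical sketch: parameter-dependent angular basis functions (toy ones, not the oblate spheroidal functions POTHMF actually uses) are differentiated with respect to the radial parameter by central differences, and the integrals are evaluated by Gauss-Legendre quadrature. For a basis normalized at every r, the diagonal couplings of the first type must vanish, which the sketch verifies.

    ```python
    import numpy as np

    eta, w = np.polynomial.legendre.leggauss(64)      # quadrature on [-1, 1]

    def basis(r, n):
        # Toy parameter-dependent angular function, normalized on [-1, 1].
        f = np.polynomial.legendre.Legendre.basis(n)(eta) * np.exp(-r * eta ** 2)
        return f / np.sqrt(np.sum(w * f * f))

    def dbasis(r, n, h=1e-5):
        # Derivative with respect to the radial parameter r.
        return (basis(r + h, n) - basis(r - h, n)) / (2.0 * h)

    r, nmax = 1.0, 4
    Q = np.array([[np.sum(w * basis(r, i) * dbasis(r, j))   # <phi_i | dphi_j/dr>
                   for j in range(nmax)] for i in range(nmax)])
    H = np.array([[np.sum(w * dbasis(r, i) * dbasis(r, j))  # <dphi_i | dphi_j>
                   for j in range(nmax)] for i in range(nmax)])
    print(np.allclose(np.diag(Q), 0.0, atol=1e-6))  # True: norms are r-independent
    ```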

  20. A computer program for the 2-D magnetostatic problem based on integral equations for the field of the conductors and boundary elements

    SciTech Connect

    Morgan, G.H. )

    1992-01-01

    This paper reports on the iterative design of the 2-dimensional cross section of a beam transport magnet having infinitely permeable iron boundaries, which requires a fast means of computing the field of the conductors. Solutions in the form of series expansions are used for rectangular iron boundaries, and programs based on the method of images are used to simulate circular iron boundaries. A single procedure or program for dealing with an arbitrary iron boundary would be useful. The present program has been tested with rectangular and circular iron boundaries, and provision has been made for the use of other curves. It uses complex contour integral equations for the field of the constant-current-density conductors and complex line integrals for the field of the piecewise-linear boundary elements.
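
    The complex-variable formulation referred to above reduces, in the simplest case of filamentary conductors (an idealization; the program integrates over constant-current-density regions), to evaluating an analytic function of z = x + iy. A minimal sketch:

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi

    def field_of_filaments(z, filaments):
        # With complex potential f(z) = -(mu0*I / 2*pi) ln(z - z0) and A_z = Re f,
        # the flux density satisfies B_y + i*B_x = -f'(z) = mu0*I / (2*pi*(z - z0)).
        w = sum(MU0 * I / (2.0 * np.pi * (z - z0)) for z0, I in filaments)
        return w.imag, w.real                        # (B_x, B_y) in tesla

    # Two opposite line currents; evaluate the field at a point above them.
    conductors = [(-0.05 + 0.0j, 1000.0), (0.05 + 0.0j, -1000.0)]
    print(field_of_filaments(0.0 + 0.1j, conductors))
    ```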

  1. NORIA-SP: A finite element computer program for analyzing liquid water transport in porous media; Yucca Mountain Site Characterization Project

    SciTech Connect

    Hopkins, P.L.; Eaton, R.R.; Bixler, N.E.

    1991-12-01

    A family of finite element computer programs has been developed at Sandia National Laboratories (SNL); the most recent is NORIA-SP. The original NORIA code solves a total of four transport equations simultaneously: liquid water, water vapor, air, and energy. Consequently, use of NORIA is computer-intensive. Since many of the applications for which NORIA is used are isothermal, we decided to "strip" the original four-equation version, leaving only the liquid water equation. This single-phase version is NORIA-SP. The primary intent of this document is to provide the user of NORIA-SP an accurate user's manual. Consequently, the reader should refer to the NORIA manual if additional detail is required regarding the equation development and finite element methods used. The single-equation version of the NORIA code (NORIA-SP) has been used most frequently for analyzing various hydrological scenarios for the potential underground nuclear waste repository at Yucca Mountain in western Nevada. These analyses are generally performed assuming a composite model to represent the fractured geologic media. In this model the material characteristics of the matrix and the fractures are area-weighted to obtain equivalent material properties. Pressure equilibrium between the matrix and fractures is assumed, so a single conservation equation can be solved. NORIA-SP is structured to accommodate the composite model. The equations for water velocities in both the rock matrix and the fractures are presented. To use the code for problems involving a single, nonfractured porous material, the user can simply set the area of the fractures to zero.
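
    The area-weighting step of the composite model is simple enough to state in code; the function below is a minimal sketch with illustrative property names and values, not NORIA-SP's input format.

    ```python
    def composite_property(k_matrix, k_fracture, fracture_area_fraction):
        # Area-weighted equivalent property of a fractured porous medium.
        a = fracture_area_fraction
        return (1.0 - a) * k_matrix + a * k_fracture

    # Sparse but highly conductive fractures in a tight matrix (made-up values).
    print(composite_property(k_matrix=1e-11, k_fracture=1e-6,
                             fracture_area_fraction=1e-4))

    # Setting the fracture area to zero recovers the matrix-only case,
    # as the abstract notes for single, nonfractured materials.
    print(composite_property(1e-11, 1e-6, 0.0))
    ```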

  2. Composite time-lapse computed tomography and micro finite element simulations: A new imaging approach for characterizing cement flows and mechanical benefits of vertebroplasty.

    PubMed

    Stadelmann, Vincent A; Zderic, Ivan; Baur, Annick; Unholz, Cynthia; Eberli, Ursula; Gueorguiev, Boyko

    2016-02-01

    Vertebroplasty has been shown to reinforce weak vertebral bodies and reduce fracture risks, yet cement leakage is a major problem that can cause severe complications. Since cement flow is nearly impossible to control during surgery, small volumes of cement are injected, but the mechanical benefits might then be limited. A better understanding of cement flow within the bone structure is required to further optimize vertebroplasty and bone augmentation in general. We developed a novel imaging method, composite time-lapse CT, to characterize cement flow during injection. In brief, composite-resolution time-lapse CT exploits the qualities of microCT and clinical CT. The method consists of overlaying low-resolution time-lapse CT scans acquired during injection onto pre-operative high-resolution microCT scans, generating composite-resolution time-lapse CT series of cement flow within bone. In this in vitro study, composite-resolution time-lapse CT was applied to eight intact and five artificially fractured cadaveric vertebrae during vertebroplasty. The time-lapse scans were acquired at one-milliliter cement injection steps until a total of 10 ml of cement was injected. The composite-resolution series were then converted into micro finite element models to compute strain distributions under virtual axial loading. Relocation of strain energy density within the bone structure was observed throughout the progression of the procedure. Interestingly, the normalized effect of cement injection on the overall stiffness of the vertebrae was similar between intact and fractured specimens, although at different orders of magnitude. In conclusion, composite time-lapse CT can picture cement flows during bone augmentation. The composite images can also be easily converted into finite element models to compute virtual strain distributions under loading at every step of an injection, providing deeper understanding of the biomechanics of vertebroplasty.

  3. Error Analysis In Computational Elastodynamics

    NASA Astrophysics Data System (ADS)

    Mukherjee, Somenath; Jafarali, P.; Prathap, Gangan

    The Finite Element Method (FEM) is the mathematical tool of engineers and scientists for determining approximate solutions, in a discretised sense, of differential equations that are not always amenable to closed-form solutions. In this presentation, the mathematical aspects of this powerful computational tool, as applied to the field of elastodynamics, are highlighted using the first principles of virtual work and energy conservation.
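
    As a concrete instance of the virtual-work viewpoint, the sketch below solves -u'' = 1 on (0,1) with homogeneous boundary conditions using linear elements, and shows the discrete strain energy converging to the exact value from below. The model problem is a minimal example chosen here, not one taken from the presentation.

    ```python
    import numpy as np

    def fem_energy(n):
        # Stiffness matrix and load vector for n linear elements on (0, 1).
        h = 1.0 / n
        K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
        f = np.full(n - 1, h)
        u = np.linalg.solve(K, f)
        return 0.5 * f @ u            # discrete strain energy 0.5 * a(u_h, u_h)

    exact = 1.0 / 24.0                # 0.5 * integral of (u')^2 for u = x(1-x)/2
    for n in (4, 8, 16):
        print(n, exact - fem_energy(n))   # positive, shrinking O(h^2)
    ```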

  4. Material Characterization and Geometric Segmentation of a Composite Structure Using Microfocus X-Ray Computed Tomography Image-Based Finite Element Modeling

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Roth, D. J.; Cotton, R.; Studor, George F.; Christiansen, Eric; Young, P. C.

    2011-01-01

    This study utilizes microfocus x-ray computed tomography (CT) slice sets to model and characterize the damage locations and sizes in thermal protection system materials that underwent impact testing. ScanIP/FE software is used to visualize and process the slice sets, followed by mesh generation on the segmented volumetric rendering. Then, the local stress fields around several of the damaged regions are calculated for realistic mission profiles that subject the sample to extreme temperature and other severe environmental conditions. The resulting stress fields are used to quantify damage severity and make an assessment as to whether damage that did not penetrate to the base material can still result in catastrophic failure of the structure. It is expected that this study will demonstrate that finite element modeling based on an accurate three-dimensional rendered model from a series of CT slices is an essential tool to quantify the internal macroscopic defects and damage of a complex system made out of thermal protection material. Results showing details of the segmented images, three-dimensional volume-rendered models, the finite element meshes generated, and the resulting thermomechanical stress state due to impact loading are presented and discussed. Further, this study is conducted to exhibit certain high-caliber capabilities that the nondestructive evaluation (NDE) group at NASA Glenn Research Center can offer to assist in assessing the structural durability of such highly specialized materials, so that improvements in their performance and capacities to handle harsh operating conditions can be made.

  5. Acoustic-speed correction of photoacoustic tomography by ultrasonic computed tomography based on optical excitation of elements of a full-ring transducer array

    NASA Astrophysics Data System (ADS)

    Xia, Jun; Huang, Chao; Maslov, Konstantin; Anastasio, Mark A.; Wang, Lihong V.

    2014-03-01

    Photoacoustic computed tomography (PACT) is a hybrid technique that combines optical excitation and ultrasonic detection to provide high resolution images in deep tissues. In the image reconstruction, a constant speed of sound (SOS) is normally assumed. This assumption, however, is often not strictly satisfied in deep tissue imaging, due to acoustic heterogeneities within the object and between the object and coupling medium. If these heterogeneities are not accounted for, they will cause distortions and artifacts in the reconstructed images. In this paper, we incorporated ultrasonic computed tomography (USCT), which measures the SOS distribution within the object, into our full-ring array PACT system. Without the need for ultrasonic transmitting electronics, USCT was performed using the same laser beam as for PACT measurement. By scanning the laser beam on the array surface, we can sequentially fire different elements. As a first demonstration of the system, we studied the effect of acoustic heterogeneities on photoacoustic vascular imaging. We verified that constant SOS is a reasonable approximation when the SOS variation is small. When the variation is large, distortion will be observed in the periphery of the object, especially in the tangential direction.

  6. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.
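
    In the same spirit as the system described above, the following sketch uses SymPy (an assumption; the original work used its own exact symbolic engine) to derive a two-node bar element's strain-displacement and stiffness matrices and emit FORTRAN for one entry.

    ```python
    import sympy as sp

    x, L, E, A = sp.symbols('x L E A', positive=True)
    N = sp.Matrix([[1 - x / L, x / L]])          # linear shape functions
    B = N.diff(x)                                # strain-displacement matrix
    K = (E * A * B.T * B).integrate((x, 0, L))   # exact element stiffness
    print(K)                # Matrix([[A*E/L, -A*E/L], [-A*E/L, A*E/L]])
    print(sp.fcode(K[0, 0], standard=77))        # automatic FORTRAN generation
    ```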

  7. Experimental and computational study of trace element distribution between orthopyroxene and anhydrous silicate melt: substitution mechanisms and the effect of iron

    NASA Astrophysics Data System (ADS)

    van Westrenen, W.; van Kan Parker, M.; Liebscher, A.; Frei, D.; van Sijl, J.; Blundy, J.; Franz, G.

    2009-12-01

    Although orthopyroxene (Opx) is present during a wide range of magmatic differentiation processes in the terrestrial and lunar mantle, its effect on melt trace element budgets is not well quantified. We present results of a combined experimental and computational study of trace element partitioning between Opx and anhydrous silicate melts. Experiments were performed in air at atmospheric pressure and temperatures ranging from 1,326 to 1,420 °C in the system CaO-MgO-Al2O3-SiO2 and subsystem CaO-MgO-SiO2. Additional experiments in the Cr2O3-CaO-FeO-MgO-Al2O3-TiO2-SiO2 (CCFMATS) system were carried out at elevated pressures ranging from 1.0 to 2.8 GPa and temperatures from 1,430 to 1,600 °C. We provide experimental partition coefficients (D values) for a wide range of trace elements (LILE, REE, HFSE and transition metals) for use in petrogenetic modelling. In the CMAS system, Opx-melt REE partition coefficients increase from D(La) ~ 0.0005 to D(Lu) ~ 0.109; D values for highly charged elements vary from D(Th) ~ 0.0026 through D(Nb) ~ 0.0033 and D(U) ~ 0.0066 to D(Ti) ~ 0.058, and all are virtually independent of temperature. To elucidate charge-balancing mechanisms for incorporation of REE into Opx, and to assess the possible influence of Fe on Opx-melt partitioning, we compared our experimental results with computer simulations. In these simulations we examine major and minor trace element incorporation into the end-members enstatite (Mg2Si2O6) and ferrosilite (Fe2Si2O6). Calculated solution energies show that R2+ cations are more soluble in Opx than R3+ cations of similar size, consistent with experimental partitioning data. In addition, simulations show charge-balancing of R3+ cations by coupled substitution with Li+ on the M1 site is energetically favoured over coupled substitution involving Al-Si exchange on the tetrahedrally coordinated site. To test these observations we are performing additional experiments at high pressures with identical experimental conditions and starting

  8. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    NASA Astrophysics Data System (ADS)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

    We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator, Jacobian calculations, and synthetic and real data inversions are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated. It allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, MT forward responses and Jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data space Gauss-Newton method, which offers a reduction in memory requirements and a significant speedup of the parameter step versus the model space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic

  9. Flutter: A finite element program for aerodynamic instability analysis of general shells of revolution with thermal prestress

    NASA Technical Reports Server (NTRS)

    Fallon, D. J.; Thornton, E. A.

    1983-01-01

    Documentation for the computer program FLUTTER is presented. The theory of aerodynamic instability with thermal prestress is discussed. Theoretical aspects of the finite element matrices required in the aerodynamic instability analysis are also discussed. General organization of the computer program is explained, and instructions are then presented for the execution of the program.

  10. Synthesis, spectroscopic, cytotoxic aspects and computational study of N-(pyridine-2-ylmethylene)benzo[d]thiazol-2-amine Schiff base and some of its transition metal complexes

    NASA Astrophysics Data System (ADS)

    Abd El-Aziz, Dina M.; Etaiw, Safaa Eldin H.; Ali, Elham A.

    2013-09-01

    N-(pyridine-2-ylmethylene)benzo[d]thiazol-2-amine Schiff base (L) and its Cu(II), Fe(III), Co(II), Ni(II) and Zn(II) complexes were synthesized and characterized by a set of chemical and spectroscopic measurements using elemental analysis, electrical conductance, mass spectra, magnetic susceptibility and spectral techniques (IR, UV-Vis, 1H NMR). Elemental and mass spectrometric data are consistent with the proposed formulae. IR spectra confirm the bidentate nature of the Schiff base ligand. Octahedral geometry around Cu(II), Fe(III), Ni(II) and Zn(II), as well as tetrahedral geometry around Co(II), was suggested by UV-Vis spectra and magnetic moment data. The thermal degradation behavior of the Schiff base and its complexes was investigated by thermogravimetric analysis. The structure of the Schiff base and its transition metal complexes was also studied theoretically using molecular mechanics (MM+). The obtained structures were minimized with a semi-empirical (PM3) method. The in vitro antitumor activity of the synthesized compounds was studied. The Zn-complex exhibits a significant decrease in the surviving fraction of the breast carcinoma (MCF 7), liver carcinoma (HEPG2), colon carcinoma (HCT116) and larynx carcinoma (HEP2) human cancer cell lines.

  11. Regulatory aspects

    NASA Astrophysics Data System (ADS)

    Stern, Arthur M.

    1986-07-01

    At this time, there is no US legislation that is specifically aimed at regulating the environmental release of genetically engineered organisms or their modified components, either during the research and development stage or during application. There are some statutes, administered by several federal agencies, whose language is broad enough to allow the extension of intended coverage to include certain aspects of biotechnology. The one possible exception is FIFRA, which has already brought about the registration of several natural microbial pesticides but which also has provision for requiring the registration of “strain improved” microbial pesticides. Nevertheless, there may be gaps in coverage even if all pertinent statutes were to be actively applied to the control of environmental release of genetically modified substances. The decision to regulate biotechnology under TSCA was justified, in part, on the basis of its intended role as a gap-filling piece of environmental legislation. The advantage of regulating biotechnology under TSCA is that this statute, unlike others, is concerned with all media of exposure (air, water, soil, sediment, biota) that may pose health and environmental hazards. Experience may show that extending existing legislation to regulate biotechnology is a poor compromise compared to the promulgation of new legislation specifically designed for this purpose. It appears that many other countries are ultimately going to take the latter course to regulate biotechnology.

  12. Unravelling Mechanistic Aspects of the Gas-Phase Ethanol Conversion: An Experimental and Computational Study on the Thermal Reactions of MO2 (+) (M=Mo, W) with Ethanol.

    PubMed

    González-Navarrete, Patricio; Schlangen, Maria; Wu, Xiao-Nan; Schwarz, Helmut

    2016-02-24

    The ion/molecule reactions of molybdenum and tungsten dioxide cations with ethanol have been studied by Fourier transform ion-cyclotron resonance mass spectrometry (FT-ICR MS) and density functional theory (DFT) calculations. Dehydration of ethanol was found to be the dominant reaction channel, while generation of the ethyl cation corresponds to a minor product. Clearly, the reactions are mainly governed by the Lewis acidity of the metal center. Computational results, together with isotopic labeling experiments, show that the dehydration of ethanol can proceed either through a conventional concerted [1,2]-elimination mechanism or a step-wise process; the latter occurs via a hydroxyethoxy intermediate. Formation of C2H5(+) takes place by transfer of OH(-) from ethanol to the metal center of MO2(+). The molybdenum and tungsten dioxide cations exhibit comparable reactivities toward ethanol, and this is reflected in similar reaction rate constants and branching ratios. PMID:26834042

  13. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
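
    The data-space economy the authors exploit can be seen in a small identity: for N data and M parameters with N much smaller than M, the model-space and data-space Gauss-Newton updates coincide, but the linear solve is N x N instead of M x M. The sketch below checks this for the simplest identity-covariance damping (an assumption made for brevity; HexMT factors a gradient-based regularization operator instead).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, lam = 20, 500, 0.1                 # few data, many model parameters
    J = rng.standard_normal((N, M))          # Jacobian (sensitivities)
    r = rng.standard_normal(N)               # data residual

    # Model-space update: one M x M solve.
    dm_model = np.linalg.solve(J.T @ J + lam * np.eye(M), J.T @ r)

    # Data-space update: one N x N solve, then a back-projection.
    beta = np.linalg.solve(J @ J.T + lam * np.eye(N), r)
    dm_data = J.T @ beta

    print(np.allclose(dm_model, dm_data))    # True: identical steps, cheaper solve
    ```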

  14. Computer Lab Configuration.

    ERIC Educational Resources Information Center

    Wodarz, Nan

    2003-01-01

    Describes the layout and elements of an effective school computer lab. Includes configuration, storage spaces, cabling and electrical requirements, lighting, furniture, and computer hardware and peripherals. (PKP)

  15. Curved Beam Computed Tomography based Structural Rigidity Analysis of Bones with Simulated Lytic Defect: A Comparative Study with Finite Element Analysis

    PubMed Central

    Oftadeh, R.; Karimi, Z.; Villa-Camacho, J.; Tanck, E.; Verdonschot, N.; Goebel, R.; Snyder, B. D.; Hashemi, H. N.; Vaziri, A.; Nazarian, A.

    2016-01-01

    In this paper, a CT based structural rigidity analysis (CTRA) method that incorporates bone intrinsic local curvature is introduced to assess the compressive failure load of human femur with simulated lytic defects. The proposed CTRA is based on a three dimensional curved beam theory to obtain critical stresses within the human femur model. To test the proposed method, ten human cadaveric femurs with and without simulated defects were mechanically tested under axial compression to failure. Quantitative computed tomography images were acquired from the samples, and CTRA and finite element analysis were performed to obtain the failure load as well as rigidities in both straight and curved cross sections. Experimental results were compared to the results obtained from FEA and CTRA. The failure loads predicted by curved beam CTRA and FEA are in agreement with experimental results. The results also show that the proposed method is an efficient and reliable way to find both the location and magnitude of the failure load. Moreover, the results show that the proposed curved CTRA outperforms the regular straight beam CTRA, which ignores the bone's intrinsic curvature, and can be used as a useful tool in clinical practice. PMID:27585495
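
    The core of a CT-based rigidity computation can be sketched compactly: each voxel of a cross-sectional density map contributes to the axial and bending rigidities through a density-to-modulus relation. The power law and the use of the section midline as neutral axis below are simplifying assumptions for illustration; CTRA uses a calibrated relation and the modulus-weighted centroid.

    ```python
    import numpy as np

    def section_rigidities(rho, pixel_mm):
        """rho: 2-D array of bone density [g/cm^3] for one cross section."""
        E = 8920.0 * np.clip(rho, 0.0, None) ** 1.83   # generic power law, MPa
        dA = pixel_mm ** 2                             # voxel area, mm^2
        y = (np.arange(rho.shape[0]) - rho.shape[0] / 2.0) * pixel_mm
        EA = E.sum() * dA                              # axial rigidity, N
        EI = (E * y[:, None] ** 2).sum() * dA          # bending rigidity, N*mm^2
        return EA, EI

    rho = np.random.default_rng(0).uniform(0.0, 1.2, size=(64, 64))
    print(section_rigidities(rho, pixel_mm=0.3))
    ```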

  16. Curved Beam Computed Tomography based Structural Rigidity Analysis of Bones with Simulated Lytic Defect: A Comparative Study with Finite Element Analysis.

    PubMed

    Oftadeh, R; Karimi, Z; Villa-Camacho, J; Tanck, E; Verdonschot, N; Goebel, R; Snyder, B D; Hashemi, H N; Vaziri, A; Nazarian, A

    2016-01-01

    In this paper, a CT based structural rigidity analysis (CTRA) method that incorporates bone intrinsic local curvature is introduced to assess the compressive failure load of human femur with simulated lytic defects. The proposed CTRA is based on a three dimensional curved beam theory to obtain critical stresses within the human femur model. To test the proposed method, ten human cadaveric femurs with and without simulated defects were mechanically tested under axial compression to failure. Quantitative computed tomography images were acquired from the samples, and CTRA and finite element analysis were performed to obtain the failure load as well as rigidities in both straight and curved cross sections. Experimental results were compared to the results obtained from FEA and CTRA. The failure loads predicted by curved beam CTRA and FEA are in agreement with experimental results. The results also show that the proposed method is an efficient and reliable way to find both the location and magnitude of the failure load. Moreover, the results show that the proposed curved CTRA outperforms the regular straight beam CTRA, which ignores the bone's intrinsic curvature, and can be used as a useful tool in clinical practice. PMID:27585495

  17. VIBA-Lab 3.0: Computer program for simulation and semi-quantitative analysis of PIXE and RBS spectra and 2D elemental maps

    NASA Astrophysics Data System (ADS)

    Orlić, Ivica; Mekterović, Darko; Mekterović, Igor; Ivošević, Tatjana

    2015-11-01

    VIBA-Lab is a computer program originally developed by the author and co-workers at the National University of Singapore (NUS) as an interactive software package for simulation of Particle Induced X-ray Emission and Rutherford Backscattering Spectra. The original program has been redeveloped into VIBA-Lab 3.0, in which the user can perform semi-quantitative analysis by comparing simulated and measured spectra, as well as simulate 2D elemental maps for a given 3D sample composition. The latest version has a new and more versatile user interface. It also has the latest data set of fundamental parameters, such as Coster-Kronig transition rates, fluorescence yields, mass absorption coefficients and ionization cross sections for K and L lines, in a wider energy range than the original program. Our short-term plan is to introduce a routine for quantitative analysis of multiple PIXE and XRF excitations. VIBA-Lab is an excellent teaching tool for students and researchers learning PIXE and RBS techniques. At the same time, the program helps when planning an experiment and when optimizing experimental parameters such as incident ions, their energy, detector specifications, filters, geometry, etc. By "running" a virtual experiment, the user can test various scenarios until optimal PIXE and BS spectra are obtained, and in this way save a lot of expensive machine time.

  18. Three-dimensional finite element analysis of unilateral mastication in malocclusion cases using cone-beam computed tomography and a motion capture system

    PubMed Central

    2016-01-01

    Purpose Stress distribution and mandible distortion during lateral movements are known to be closely linked to bruxism, dental implant placement, and temporomandibular joint disorder. The present study was performed to determine stress distribution and distortion patterns of the mandible during lateral movements in Class I, II, and III relationships. Methods Five Korean volunteers (one normal, two Class II, and two Class III occlusion cases) were selected. Finite element (FE) modeling was performed using information from cone-beam computed tomographic (CBCT) scans of the subjects’ skulls, scanned images of dental casts, and incisor movement captured by an optical motion-capture system. Results In the Class I and II cases, maximum stress load occurred at the condyle of the balancing side, but, in the Class III cases, the maximum stress was loaded on the condyle of the working side. Maximum distortion was observed on the menton at the midline in every case, regardless of loading force. The distortion was greatest in Class III cases and smallest in Class II cases. Conclusions The stress distribution along and accompanying distortion of a mandible seems to be affected by the anteroposterior position of the mandible. Additionally, 3-D modeling of the craniofacial skeleton using CBCT and an optical laser scanner and reproduction of mandibular movement by way of the optical motion-capture technique used in this study are reliable techniques for investigating the masticatory system. PMID:27127690

  19. Investigation and optimization of a finite element simulation of transducer array systems for 3D ultrasound computer tomography with respect to electrical impedance characteristics

    NASA Astrophysics Data System (ADS)

    Kohout, B.; Pirinen, J.; Ruiter, N. V.

    2012-03-01

    The established standard screening method to detect breast cancer is X-ray mammography. However X-ray mammography often has low contrast for tumors located within glandular tissue. A new approach is 3D Ultrasound Computer Tomography (USCT), which is expected to detect small tumors at an early stage. This paper describes the development, improvement and the results of Finite Element Method (FEM) simulations of the Transducer Array System (TAS) used in our 3D USCT. The focus of this work is on researching the influence of meshing and material parameters on the electrical impedance curves. Thereafter, these findings are used to optimize the simulation model. The quality of the simulation was evaluated by comparing simulated impedance characteristics with measured data of the real TAS. The resulting FEM simulation model is a powerful tool to analyze and optimize transducer array systems applied for USCT. With this simulation model, the behavior of TAS for different geometry modifications was researched. It provides a means to understand the acoustical performances inside of any ultrasound transducer represented by its electrical impedance characteristic.

  20. Curved Beam Computed Tomography based Structural Rigidity Analysis of Bones with Simulated Lytic Defect: A Comparative Study with Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Oftadeh, R.; Karimi, Z.; Villa-Camacho, J.; Tanck, E.; Verdonschot, N.; Goebel, R.; Snyder, B. D.; Hashemi, H. N.; Vaziri, A.; Nazarian, A.

    2016-09-01

    In this paper, a CT based structural rigidity analysis (CTRA) method that incorporates bone intrinsic local curvature is introduced to assess the compressive failure load of human femur with simulated lytic defects. The proposed CTRA is based on a three dimensional curved beam theory to obtain critical stresses within the human femur model. To test the proposed method, ten human cadaveric femurs with and without simulated defects were mechanically tested under axial compression to failure. Quantitative computed tomography images were acquired from the samples, and CTRA and finite element analysis were performed to obtain the failure load as well as rigidities in both straight and curved cross sections. Experimental results were compared to the results obtained from FEA and CTRA. The failure loads predicted by curved beam CTRA and FEA are in agreement with experimental results. The results also show that the proposed method is an efficient and reliable way to find both the location and magnitude of the failure load. Moreover, the results show that the proposed curved CTRA outperforms the regular straight beam CTRA, which ignores the bone's intrinsic curvature, and can be used as a useful tool in clinical practice.

  1. Revolution in Orthodontics: Finite element analysis

    PubMed Central

    Singh, Johar Rajvinder; Kambalyal, Prabhuraj; Jain, Megha; Khandelwal, Piyush

    2016-01-01

    Engineering has developed not only in the field of medicine but has also become well established in the field of dentistry, especially Orthodontics. Finite element analysis (FEA) is a computational procedure for calculating the stresses in an element by solving a model of the structure. This structural analysis allows the determination of stress resulting from external force, pressure, thermal change, and other factors. The method is extremely useful for characterizing mechanical aspects of biomaterials and human tissues that can hardly be measured in vivo. The results obtained can then be studied using visualization software within the finite element method (FEM) to view a variety of parameters and to fully identify the implications of the analysis. This is a review of the applications of FEM in Orthodontics. It is extremely important to verify the purpose of a study in order to correctly apply FEM. PMID:27114948

  2. The individual element test revisited

    NASA Technical Reports Server (NTRS)

    Militello, Carmelo; Felippa, Carlos A.

    1991-01-01

    The subject of the patch test for finite elements retains several unsettled aspects. In particular, the issue of one-element versus multielement tests needs clarification. Following a brief historical review, we present the individual element test (IET) of Bergan and Hanssen in an expanded context that encompasses several important classes of new elements. The relationship of the IET to the multielement forms A, B, and C of the patch test and to the single element test is clarified.

  3. Aspects of Plant Intelligence

    PubMed Central

    TREWAVAS, ANTHONY

    2003-01-01

    Intelligence is not a term commonly used when plants are discussed. However, I believe that this is an omission based not on a true assessment of the ability of plants to compute complex aspects of their environment, but solely a reflection of a sessile lifestyle. This article, which is admittedly controversial, attempts to raise many issues that surround this area. To commence use of the term intelligence with regard to plant behaviour will lead to a better understanding of the complexity of plant signal transduction and the discrimination and sensitivity with which plants construct images of their environment, and raises critical questions concerning how plants compute responses at the whole‐plant level. Approaches to investigating learning and memory in plants will also be considered. PMID:12740212

  4. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.
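
    Closed-form microstrip synthesis and analysis of this kind typically rests on published approximations. As a hedged illustration (the abstract does not specify which formulas these NASA programs use), the sketch below evaluates the widely published Hammerstad microstrip equations for characteristic impedance and effective permittivity.

    ```python
    import math

    def microstrip_z0(w_over_h, eps_r):
        # Hammerstad closed-form approximation; u = W/h (strip width / height).
        u = w_over_h
        if u <= 1.0:
            eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (
                (1 + 12 / u) ** -0.5 + 0.04 * (1 - u) ** 2)
            return 60.0 / math.sqrt(eps_eff) * math.log(8 / u + u / 4), eps_eff
        eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 / u) ** -0.5
        z0 = 120 * math.pi / (math.sqrt(eps_eff) *
                              (u + 1.393 + 0.667 * math.log(u + 1.444)))
        return z0, eps_eff

    print(microstrip_z0(2.0, 4.4))   # roughly (49 ohm, 3.3) on an FR4-like substrate
    ```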

  5. Discordance between Prevalent Vertebral Fracture and Vertebral Strength Estimated by the Finite Element Method Based on Quantitative Computed Tomography in Patients with Type 2 Diabetes Mellitus

    PubMed Central

    2015-01-01

    Background Bone fragility is increased in patients with type 2 diabetes mellitus (T2DM), but a useful method to estimate bone fragility in T2DM patients is lacking because bone mineral density alone is not sufficient to assess the risk of fracture. This study investigated the association between prevalent vertebral fractures (VFs) and the vertebral strength index estimated by the quantitative computed tomography-based nonlinear finite element method (QCT-based nonlinear FEM) using multi-detector computed tomography (MDCT) for clinical practice use. Research Design and Methods A cross-sectional observational study was conducted on 54 postmenopausal women and 92 men over 50 years of age, all of whom had T2DM. The vertebral strength index was compared in patients with and without VFs confirmed by spinal radiographs. A standard FEM procedure was performed with the application of known parameters for the bone material properties obtained from nondiabetic subjects. Results A total of 20 women (37.0%) and 39 men (42.4%) with VFs were identified. The vertebral strength index was significantly higher in the men than in the women (P<0.01). Multiple regression analysis demonstrated that the vertebral strength index was significantly and positively correlated with the spinal bone mineral density (BMD) and inversely associated with age in both genders. There were no significant differences in the parameters, including the vertebral strength index, between patients with and without VFs. Logistic regression analysis adjusted for age, spine BMD, BMI, HbA1c, and duration of T2DM did not indicate a significant relationship between the vertebral strength index and the presence of VFs. Conclusion The vertebral strength index calculated by QCT-based nonlinear FEM using material property parameters obtained from nondiabetic subjects, whose risk of fracture is lower than that of T2DM patients, was not significantly associated with bone fragility in patients with T2DM. This discordance

  6. A 3-D finite-element model for computation of temperature profiles and regions of thermal damage during focused ultrasound surgery exposures.

    PubMed

    Meaney, P M; Clarke, R L; ter Haar, G R; Rivens, I H

    1998-11-01

    Although there have been numerous models implemented for modeling thermal diffusion effects during focused ultrasound surgery (FUS), most have limited themselves to representing simple situations for which analytical solutions and the use of cylindrical geometries sufficed. For modeling single lesion formation and the heating patterns from a single exposure, good results were achieved in comparison with experimental results for predicting lesion size, shape and location. However, these types of approaches are insufficient when considering the heating of multiple sites with FUS exposures when the time interval between exposures is short. In such cases, the heat dissipation patterns from initial exposures in the lesion array formation can play a significant role in the heating patterns for later exposures. Understanding the effects of adjacent lesion formation, such as this, requires a three-dimensional (3-D) representation of the bioheat equation. Thus, we have developed a 3-D finite-element representation for modeling the thermal diffusion effects during FUS exposures in clinically relevant tissue volumes. The strength of this approach over past methods is its ability to represent arbitrarily shaped 3-D situations. Initial simulations have allowed calculation of the temperature distribution as a function of time for adjacent FUS exposures in excised bovine liver, with the individually computed point temperatures comparing favorably with published measurements. In addition to modeling these temperature distributions, the model was implemented in conjunction with an algorithm for calculating the thermal dose as a way of predicting lesion shape. Although used extensively in conventional hyperthermia applications, this thermal dose criterion has only been applied in a limited number of simulations in FUS for comparison with experimental measurements. In this study, simulations were run for focal depths 2 and 3 cm below the surface of pig's liver, using multiple
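
    The thermal dose criterion mentioned above is usually the Sapareto-Dewey cumulative equivalent minutes at 43 C (CEM43). The sketch below applies it to a simulated temperature history at one tissue point; the exposure profile and the 240-minute necrosis threshold are common illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def cem43(temps_c, dt_min):
        # R = 0.5 above 43 C and 0.25 below, per the Sapareto-Dewey formulation.
        R = np.where(temps_c >= 43.0, 0.5, 0.25)
        return np.sum(R ** (43.0 - temps_c) * dt_min)

    t = np.linspace(0.0, 3.0, 301)                         # 3 s exposure
    temps = 37.0 + 33.0 * np.exp(-((t - 1.5) / 0.8) ** 2)  # peak near 70 C
    dose = cem43(temps, dt_min=(t[1] - t[0]) / 60.0)
    print(f"CEM43 = {dose:.0f} min -> lesion predicted: {dose > 240.0}")
    ```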

  7. Exercises in Molecular Computing

    PubMed Central

    2014-01-01

    Conspectus: The successes of electronic digital logic have transformed every aspect of human life over the last half-century. The word “computer” now signifies a ubiquitous electronic device, rather than a human occupation. Yet evidently humans, large assemblies of molecules, can compute, and it has been a thrilling challenge to develop smaller, simpler, synthetic assemblies of molecules that can do useful computation. When we say that molecules compute, what we usually mean is that such molecules respond to certain inputs, for example, the presence or absence of other molecules, in a precisely defined but potentially complex fashion. The simplest way for a chemist to think about computing molecules is as sensors that can integrate the presence or absence of multiple analytes into a change in a single reporting property. Here we review several forms of molecular computing developed in our laboratories. When we began our work, combinatorial approaches to using DNA for computing were used to search for solutions to constraint satisfaction problems. We chose to work instead on logic circuits, building bottom-up from units based on catalytic nucleic acids, focusing on DNA secondary structures in the design of individual circuit elements, and reserving the combinatorial opportunities of DNA for the representation of multiple signals propagating in a large circuit. Such circuit design directly corresponds to the intuition about sensors transforming the detection of analytes into reporting properties. While this approach was unusual at the time, it has been adopted since by other groups working on biomolecular computing with different nucleic acid chemistries. We created logic gates by modularly combining deoxyribozymes (DNA-based enzymes cleaving or combining other oligonucleotides), in the role of reporting elements, with stem–loops as input detection elements. For instance, a deoxyribozyme that normally exhibits an oligonucleotide substrate recognition region is

  8. Connectivity Measures in EEG Microstructural Sleep Elements

    PubMed Central

    Sakellariou, Dimitris; Koupparis, Andreas M.; Kokkinos, Vasileios; Koutroumanidis, Michalis; Kostopoulos, George K.

    2016-01-01

    During Non-Rapid Eye Movement sleep (NREM) the brain is relatively disconnected from the environment, while connectedness between brain areas is also decreased. Evidence indicates that these dynamic connectivity changes are delivered by microstructural elements of sleep: short periods of environmental stimuli evaluation followed by sleep-promoting procedures. The connectivity patterns of the latter, among other aspects of sleep microstructure, are still to be fully elucidated. We suggest here a methodology for the assessment and investigation of the connectivity patterns of EEG microstructural elements, such as sleep spindles. The methodology combines techniques at the preprocessing, estimation, error-assessment and results-visualization levels in order to allow detailed examination of the connectivity aspects (levels and directionality of information flow) over frequency and time with notable resolution, while dealing with volume conduction and EEG reference assessment. The high temporal and frequency resolution of the methodology will allow the association between the microelements and the dynamically forming networks that characterize them, and consequently may reveal aspects of the EEG microstructure. The proposed methodology is initially tested on artificially generated signals for proof of concept and subsequently applied to real EEG recordings via a custom-built MATLAB-based tool developed for such studies. Preliminary results from 843 fast sleep spindles recorded in whole-night sleep of 5 healthy volunteers indicate a prevailing pattern of interactions between centroparietal and frontal regions. We hereby present what is, to our knowledge, a first attempt to estimate the scalp EEG connectivity that characterizes fast sleep spindles, via the “EEG-element connectivity” methodology we propose. The application of the latter, via a computational tool we developed, suggests it is able to investigate the connectivity patterns related to the

  9. Computational and evolutionary aspects of language

    NASA Astrophysics Data System (ADS)

    Nowak, Martin A.; Komarova, Natalia L.; Niyogi, Partha

    2002-06-01

    Language is our legacy. It is the main evolutionary contribution of humans, and perhaps the most interesting trait that has emerged in the past 500 million years. Understanding how darwinian evolution gives rise to human language requires the integration of formal language theory, learning theory and evolutionary dynamics. Formal language theory provides a mathematical description of language and grammar. Learning theory formalizes the task of language acquisition; it can be shown that no procedure can learn an unrestricted set of languages. Universal grammar specifies the restricted set of languages learnable by the human brain. Evolutionary dynamics can be formulated to describe the cultural evolution of language and the biological evolution of universal grammar.

  10. Computational and evolutionary aspects of language.

    PubMed

    Nowak, Martin A; Komarova, Natalia L; Niyogi, Partha

    2002-06-01

    Language is our legacy. It is the main evolutionary contribution of humans, and perhaps the most interesting trait that has emerged in the past 500 million years. Understanding how darwinian evolution gives rise to human language requires the integration of formal language theory, learning theory and evolutionary dynamics. Formal language theory provides a mathematical description of language and grammar. Learning theory formalizes the task of language acquisition; it can be shown that no procedure can learn an unrestricted set of languages. Universal grammar specifies the restricted set of languages learnable by the human brain. Evolutionary dynamics can be formulated to describe the cultural evolution of language and the biological evolution of universal grammar.

  11. Chemistry of superheavy elements.

    PubMed

    Schädel, Matthias

    2006-01-01

    The number of chemical elements has increased considerably in the last few decades. Most excitingly, these heaviest, man-made elements at the far-end of the Periodic Table are located in the area of the long-awaited superheavy elements. While physical techniques currently play a leading role in these discoveries, the chemistry of superheavy elements is now beginning to be developed. Advanced and very sensitive techniques allow the chemical properties of these elusive elements to be probed. Often, less than ten short-lived atoms, chemically separated one-atom-at-a-time, provide crucial information on basic chemical properties. These results place the architecture of the far-end of the Periodic Table on the test bench and probe the increasingly strong relativistic effects that influence the chemical properties there. This review is focused mainly on the experimental work on superheavy element chemistry. It contains a short contribution on relativistic theory, and some important historical and nuclear aspects.

  12. (Environmental and geophysical modeling, fracture mechanics, and boundary element methods)

    SciTech Connect

    Gray, L.J.

    1990-11-09

    Technical discussions at the various sites visited centered on the application of boundary integral methods for environmental modeling, seismic analysis, and computational fracture mechanics in composite and "smart" materials. The traveler also attended the International Association for Boundary Element Methods Conference in Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.

  13. Optimal mapping of irregular finite element domains to parallel processors

    NASA Technical Reports Server (NTRS)

    Flower, J.; Otto, S.; Salama, M.

    1987-01-01

    Mapping the solution domain of n-finite elements into N-subdomains that may be processed in parallel by N-processors is an optimal one if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
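
    A toy version of the simulated-annealing mapping is easy to state: elements (graph vertices) move between processors under a Metropolis acceptance rule, with a cost that penalizes cut edges (communication) and load imbalance. The cost weights and linear cooling schedule below are illustrative choices, not those of the paper.

    ```python
    import math
    import random

    def anneal_mapping(edges, n_elem, n_proc, steps=20000, t0=2.0):
        part = [random.randrange(n_proc) for _ in range(n_elem)]

        def cost(p):
            cut = sum(p[a] != p[b] for a, b in edges)       # communication
            loads = [p.count(q) for q in range(n_proc)]     # load balance
            return cut + 0.5 * (max(loads) - min(loads)) ** 2

        c = cost(part)
        for step in range(steps):
            temp = t0 * (1.0 - step / steps) + 1e-9         # linear cooling
            i, q = random.randrange(n_elem), random.randrange(n_proc)
            old, part[i] = part[i], q
            c_new = cost(part)
            if c_new <= c or random.random() < math.exp(-(c_new - c) / temp):
                c = c_new                                   # accept move
            else:
                part[i] = old                               # reject move
        return part, c

    random.seed(1)
    ring = [(i, (i + 1) % 24) for i in range(24)]           # 1-D mesh graph
    print(anneal_mapping(ring, n_elem=24, n_proc=4))
    ```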

  14. Aspects, Wrappers and Events

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.

    2003-01-01

    This viewgraph presentation provides information on Object Infrastructure Framework (OIF), an Aspect-Oriented Programming (AOP) system. The presentation begins with an introduction to the difficulties and requirements of distributed computing, including functional and non-functional requirements (ilities). The architecture of Distributed Object Technology includes stubs, proxies for implementation objects, and skeletons, proxies for client applications. The key OIF ideas (injecting behavior, annotated communications, thread contexts, and pragma) are discussed. OIF is an AOP mechanism; AOP is centered on: 1) Separate expression of crosscutting concerns; 2) Mechanisms to weave the separate expressions into a unified system. AOP is software engineering technology for separately expressing systematic properties while nevertheless producing running systems that embody these properties.

  15. Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Thomas, James

    2008-01-01

    Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed, and an approximate-mapped-least-square method using a commonly available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
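
    For reference, here is the unweighted least-squares gradient reconstruction such schemes start from, evaluated on an anisotropic stencil. It recovers a linear field exactly; the accuracy issues discussed above arise on curved, perturbed grids with nonlinear solutions. The stencil and field are illustrative assumptions.

    ```python
    import numpy as np

    def lsq_gradient(xc, x_nbrs, u_c, u_nbrs):
        dX = x_nbrs - xc                 # (n, 2) offsets to neighbor centers
        du = u_nbrs - u_c                # (n,) value differences
        g, *_ = np.linalg.lstsq(dX, du, rcond=None)
        return g

    # Anisotropic stencil typical of a boundary layer: dx = 1, dy = 1e-3.
    xc = np.array([0.0, 0.0])
    nbrs = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1e-3], [0.0, -1e-3]])
    u = lambda p: 2.0 * p[..., 0] + 300.0 * p[..., 1]   # linear test field
    print(lsq_gradient(xc, nbrs, u(xc), u(nbrs)))        # -> [2., 300.]
    ```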

  16. On the influence of the surface and body tides on the motion of a satellite. [earth geophysical aspects of orbit perturbations]

    NASA Technical Reports Server (NTRS)

    Musen, P.

    1973-01-01

    Some geophysical aspects of the tidal perturbations in the motion of artificial satellites are investigated and a system of formulas is developed that is convenient for computation of the tidal effects in the elements using a step-by-step numerical integration.

  17. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.

  18. Aspect-Oriented Subprogram Synthesizes UML Sequence Diagrams

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2006-01-01

    The Rational Sequence computer program described elsewhere includes a subprogram that utilizes the capability for aspect-oriented programming when that capability is present. This subprogram is denoted the Rational Sequence (AspectJ) component because it uses AspectJ, an extension of the Java programming language that introduces aspect-oriented programming techniques into the language.

  19. Identification of a short interspersed repetitive element in partially spliced transcripts of the bell pepper (Capsicum annuum) PAP gene: new evolutionary and regulatory aspects on plant tRNA-related SINEs.

    PubMed

    Pozueta-Romero, J; Houlné, G; Schantz, R

    1998-07-01

    In bell pepper, a gene encoding a major plastid-lipid associated protein is expressed as both partially and totally spliced transcripts (PAP2 and PAP1, respectively). Although PAP is present as a single-copy gene in the bell pepper genome, Southern blots using PAP2 as a probe revealed multiple homologous copies. Analyses of the intronic sequence of PAP2 showed the existence of a 206 bp short interspersed repetitive element (SINE) belonging to the Ts family of retrotransposons (Yoshioka et al., 1993). Comparison with PAP sequences in other Solanaceae species suggested that the structure of the gene is highly conserved: the two introns are inserted at the same position. However, the Ts insertion found in bell pepper is absent in tobacco and tomato. Studies using RT-PCR showed that in these latter species only totally spliced transcripts of PAP are present. On the other hand, RNA analyses of tobacco plants transformed with the bell pepper PAP revealed the presence of both totally and incompletely spliced transcripts. Altogether, our results support the hypothesis that the Ts insertion into the first intron of PAP results in a splicing defect of the corresponding pre-mRNA. Based on the presence of peculiar, previously unidentified Ts elements, a possible horizontal transmission of Ts elements from animals to plants is discussed.

  20. Computational mechanics - Advances and trends; Proceedings of the Session - Future directions of Computational Mechanics of the ASME Winter Annual Meeting, Anaheim, CA, Dec. 7-12, 1986

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor)

    1986-01-01

    The papers contained in this volume provide an overview of the advances made in a number of aspects of computational mechanics, identify some of the anticipated industry needs in this area, discuss the opportunities provided by new hardware and parallel algorithms, and outline some of the current government programs in computational mechanics. Papers are included on advances and trends in parallel algorithms, supercomputers for engineering analysis, material modeling in nonlinear finite-element analysis, the Navier-Stokes computer, and future finite-element software systems.

  1. Computer Conferences: Success or Failure?

    ERIC Educational Resources Information Center

    Phillips, Amy Friedman

    This examination of the aspects of computers and computer conferencing that can lead to their successful design and utilization focuses on task-related functions and emotional interactions in human communication and human-computer interactions. Such aspects of computer conferences as procedures, problems, advantages, and suggestions for future…

  2. It's elemental

    NASA Astrophysics Data System (ADS)

    The Periodic Table of the elements will now have to be updated. An international team of researchers has added element 110 to the Earth's inventory of elements. Though short-lived, surviving only on the order of microseconds, element 110 now anchors the bottom of the list as the heaviest known element on the planet. Scientists at the Heavy Ion Research Center in Darmstadt, Germany, made the 110-proton element by colliding a lead isotope with nickel atoms. The element, which is yet to be named, has an atomic mass of 269.

  3. Cohesive Elements for Shells

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; Turon, Albert

    2007-01-01

    A cohesive element for shell analysis is presented. The element can be used to simulate the initiation and growth of delaminations between stacked, non-coincident layers of shell elements. The procedure to construct the element accounts for the thickness offset by applying the kinematic relations of shell deformation to transform the stiffness and internal force of a zero-thickness cohesive element such that interfacial continuity between the layers is enforced. The procedure is demonstrated by simulating the response and failure of the Mixed Mode Bending test and a skin-stiffener debond specimen. In addition, it is shown that stacks of shell elements can be used to create effective models to predict the inplane and delamination failure modes of thick components. The results indicate that simple shell models can retain many of the necessary predictive attributes of much more complex 3D models while providing the computational efficiency that is necessary for design.
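
    The thickness-offset construction can be pictured with a one-node-pair schematic: the interface opening is expressed in mid-surface shell DOFs through small-rotation kinematics, and the zero-thickness cohesive stiffness is transformed congruently. The DOF ordering, sign conventions, and numbers below are assumptions for illustration, not the element's actual formulation.

        import numpy as np

        def offset_cohesive_stiffness(k_coh, t_bot, t_top):
            # DOFs: [w_bot, theta_bot, w_top, theta_top] at one node pair.
            # Opening = (w_top - (t_top/2)*theta_top) - (w_bot + (t_bot/2)*theta_bot),
            # i.e. mid-plane translations corrected by rotation times the thickness offset.
            T = np.array([[-1.0, -t_bot / 2.0, 1.0, -t_top / 2.0]])
            # Congruent transformation of the zero-thickness cohesive stiffness:
            return T.T @ (k_coh * T)

        K = offset_cohesive_stiffness(k_coh=1.0e5, t_bot=0.002, t_top=0.002)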

  4. Defining Elemental Imitation Mechanisms: A Comparison of Cognitive and Motor-Spatial Imitation Learning across Object- and Computer-Based Tasks

    ERIC Educational Resources Information Center

    Subiaul, Francys; Zimmermann, Laura; Renner, Elizabeth; Schilder, Brian; Barr, Rachel

    2016-01-01

    During the first 5 years of life, the versatility, breadth, and fidelity with which children imitate change dramatically. Currently, there is no model to explain what underlies such significant changes. To that end, the present study examined whether task-independent but domain-specific--elemental--imitation mechanism explains performance across…

  5. French Computer Terminology.

    ERIC Educational Resources Information Center

    Gray, Eugene F.

    1985-01-01

    Characteristics, idiosyncrasies, borrowings, and other aspects of the French terminology for computers and computer-related matters are discussed and placed in the context of French computer use. A glossary provides French equivalent terms or translations of English computer terminology. (MSE)

  6. Legal aspects of satellite teleconferencing

    NASA Technical Reports Server (NTRS)

    Smith, D. D.

    1971-01-01

    The application of satellite communications for teleconferencing purposes is discussed. The legal framework within which such a system or series of systems could be developed is considered. The analysis is based on: (1) satellite teleconferencing regulation, (2) the options available for such a system, (3) regulatory alternatives, and (4) ownership and management aspects. The system is designed to provide a capability for professional education, remote medical diagnosis, business conferences, and computer techniques.

  7. Proceedings of transuranium elements

    SciTech Connect

    Not Available

    1992-01-01

    The identification of the first synthetic elements was established by chemical evidence. Conclusive proof of the synthesis of the first artificial element, technetium, was published in 1937 by Perrier and Segre. An essential aspect of their achievement was the prediction of the chemical properties of element 43, which had been missing from the periodic table and which was expected to have properties similar to those of manganese and rhenium. The discovery of other artificial elements, astatine and francium, was facilitated in 1939-1940 by the prediction of their chemical properties. A little more than 50 years ago, in the spring of 1940, Edwin McMillan and Philip Abelson synthesized element 93, neptunium, and confirmed its uniqueness by chemical means. On August 30, 1940, Glenn Seaborg, Arthur Wahl, and the late Joseph Kennedy began their neutron irradiations of uranium nitrate hexahydrate. A few months later they synthesized element 94, later named plutonium, by observing the alpha particles emitted from uranium oxide targets that had been bombarded with deuterons. Shortly thereafter they proved that it was the second transuranium element by establishing its unique oxidation-reduction behavior. The symposium honored the scientists and engineers whose vision and dedication led to the discovery of the transuranium elements and to the understanding of the influence of 5f electrons on their electronic structure and bonding. This volume represents a record of papers presented at the symposium.

  8. JAC2D: A two-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

    SciTech Connect

    Biffle, J.H.; Blanford, M.L.

    1994-05-01

    JAC2D is a two-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equations. The method is implemented in a two-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. A four-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic/plastic material model. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.
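
    A skeletal nonlinear conjugate gradient iteration of the general kind described (Polak-Ribiere directions with a one-step secant line search); the residual callable stands in for the out-of-balance force f_int(u) - f_ext, and everything here is a sketch under those assumptions, not JAC2D's implementation.

        import numpy as np

        def nonlinear_cg(residual, u, tol=1e-8, max_iter=500, eps=1e-6):
            r = -residual(u)                   # steepest-descent direction
            d = r.copy()
            for _ in range(max_iter):
                if np.linalg.norm(r) < tol:
                    break
                # Secant estimate of the step length along d.
                rd = residual(u + eps * d)
                denom = (rd + r) @ d
                alpha = eps * (r @ d) / denom if denom != 0.0 else eps
                u = u + alpha * d
                r_new = -residual(u)
                # Polak-Ribiere(+) update of the search direction.
                beta = max(0.0, (r_new @ (r_new - r)) / (r @ r))
                d = r_new + beta * d
                r = r_new
            return u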

  9. JAC3D -- A three-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

    SciTech Connect

    Biffle, J.H.

    1993-02-01

    JAC3D is a three-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equations. The method is implemented in a three-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. An eight-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic-plastic material model. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.

  10. Verification and benchmarking of MAGNUM-2D: a finite element computer code for flow and heat transfer in fractured porous media

    SciTech Connect

    Eyler, L.L.; Budden, M.J.

    1985-03-01

    The objective of this work is to assess prediction capabilities and features of the MAGNUM-2D computer code in relation to its intended use in the Basalt Waste Isolation Project (BWIP). This objective is accomplished through a code verification and benchmarking task. Results are documented which support correctness of prediction capabilities in areas of intended model application. 10 references, 43 figures, 11 tables.

  11. Cohesive Zone Model User Element

    2007-04-17

    Cohesive Zone Model User Element (CZM UEL) is an implementation of a Cohesive Zone Model as an element for use in finite element simulations. CZM UEL computes a nodal force vector and stiffness matrix from a vector of nodal displacements. It is designed for structural analysts using finite element software to predict crack initiation, crack propagation, and the effect of a crack on the rest of a structure.
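
    For a single integration point, the force/stiffness evaluation such a user element performs might look like this scalar sketch of a generic bilinear (linear-softening) traction-separation law; the parameters are invented, monotonic opening is assumed, and a real UEL would also track damage history and unloading.

        def bilinear_cohesive(delta, k0=1.0e6, delta0=1.0e-4, delta_f=1.0e-3):
            # delta: opening displacement; returns (traction, tangent stiffness).
            if delta <= delta0:                    # linear elastic branch
                return k0 * delta, k0
            if delta >= delta_f:                   # fully failed, no cohesion left
                return 0.0, 0.0
            t_max = k0 * delta0                    # peak traction at delta0
            traction = t_max * (delta_f - delta) / (delta_f - delta0)
            tangent = -t_max / (delta_f - delta0)  # negative (softening) slope
            return traction, tangent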

  12. Measuring Aspects of Morality

    ERIC Educational Resources Information Center

    Ziv, Avner

    1976-01-01

    A group test measuring five aspects of morality in children is presented. The aspects are: resistance to temptation, stage of moral judgment, confession after transgression, reaction of fear or guilt, and severity of punishment for transgression. (Editor)

  13. Elemental ZOO

    NASA Astrophysics Data System (ADS)

    Helser, Terry L.

    2003-04-01

    This puzzle uses the symbols of 39 elements to spell the names of 25 animals found in zoos. Underlined spaces and the names of the elements serve as clues. To solve the puzzle, students must find the symbols that correspond to the elemental names and rearrange them into the animals' names.
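
    Decompositions of this kind can be checked mechanically with a short recursive search over one- and two-letter symbols; the symbol table below is a small illustrative subset, and lowercase input is assumed.

        SYMBOLS = {"H", "He", "B", "C", "N", "O", "S", "P", "K", "I", "W",
                   "Ca", "Fe", "Ni", "Cu", "Zn", "Ag", "Sn", "Bi", "Er", "Ra"}

        def spell(word, prefix=()):
            # Cover `word` left to right with 1- and 2-letter element symbols.
            if not word:
                return list(prefix)
            for n in (1, 2):
                sym = word[:n].capitalize()
                if sym in SYMBOLS:
                    found = spell(word[n:], prefix + (sym,))
                    if found:
                        return found
            return None                            # no decomposition exists

        print(spell("bison"))                      # ['B', 'I', 'S', 'O', 'N']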

  14. Injector element characterization methodology

    NASA Technical Reports Server (NTRS)

    Cox, George B., Jr.

    1988-01-01

    Characterization of liquid rocket engine injector elements is an important part of the development process for rocket engine combustion devices. Modern nonintrusive instrumentation for flow velocity and spray droplet size measurement, and automated, computer-controlled test facilities allow rapid, low-cost evaluation of injector element performance and behavior. Application of these methods in rocket engine development, paralleling their use in gas turbine engine development, will reduce rocket engine development cost and risk. The Alternate Turbopump (ATP) Hot Gas Systems (HGS) preburner injector elements were characterized using such methods, and the methodology and some of the results obtained will be shown.

  15. From Finite Element Meshes to Clouds of Points: A Review of Methods for Generation of Computational Biomechanics Models for Patient-Specific Applications.

    PubMed

    Wittek, Adam; Grosland, Nicole M; Joldes, Grand Roman; Magnotta, Vincent; Miller, Karol

    2016-01-01

    It has been envisaged that advances in computing and engineering technologies could extend surgeons' ability to plan and carry out surgical interventions more accurately and with less trauma. The progress in this area depends crucially on the ability to create patient-specific biomechanical models robustly and rapidly. We focus on methods for generation of patient-specific computational grids used for solving partial differential equations governing the mechanics of the body organs. We review the state of the art in this area and provide suggestions for future research. To provide a complete picture of the field of patient-specific model generation, we also discuss methods for identifying and assigning patient-specific material properties of tissues and boundary conditions.

  16. NON-CONFORMING FINITE ELEMENTS; MESH GENERATION, ADAPTIVITY AND RELATED ALGEBRAIC MULTIGRID AND DOMAIN DECOMPOSITION METHODS IN MASSIVELY PARALLEL COMPUTING ENVIRONMENT

    SciTech Connect

    Lazarov, R; Pasciak, J; Jones, J

    2002-02-01

    Construction, analysis and numerical testing of efficient solution techniques for solving elliptic PDEs that allow for parallel implementation have been the focus of the research. A number of discretization and solution methods for solving second order elliptic problems that include mortar and penalty approximations and domain decomposition methods for finite elements and finite volumes have been investigated and analyzed. Techniques for parallel domain decomposition algorithms in the framework of PETSc and HYPRE have been studied and tested. Hierarchical parallel grid refinement and adaptive solution methods have been implemented and tested on various model problems. A parallel code implementing the mortar method with algebraically constructed multiplier spaces was developed.

  17. ALGEBRA: a computer program that algebraically manipulates finite element output data. [In extended FORTRAN for CDC 7600 or CYBER 76 only]

    SciTech Connect

    Richgels, M A; Biffle, J H

    1980-09-01

    ALGEBRA is a program that allows the user to process output data from finite-element analysis codes before they are sent to plotting routines. These data take the form of variable values (stress, strain, and velocity components, etc.) on a tape that is both the output tape from the analysis code and the input tape to ALGEBRA. The ALGEBRA code evaluates functions of these data and writes the function values on an output tape that can be used as input to plotting routines. Convenient input format and error detection capabilities aid the user in providing ALGEBRA with the functions to be evaluated. 1 figure.
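
    The kind of function evaluation ALGEBRA performs can be pictured in a few lines: read component fields from the analysis output and compute a derived quantity to pass on for plotting. The arrays below stand in for tape contents, and the plane-stress von Mises formula is just one example of such a function.

        import numpy as np

        # Stress components as they might be read from the analysis output tape.
        sx  = np.array([100.0,  80.0, 120.0])
        sy  = np.array([ 40.0,  60.0,  20.0])
        sxy = np.array([ 10.0,  25.0,   5.0])

        # User-requested function of the data, evaluated pointwise and written
        # to the output tape for the plotting routines (plane stress assumed).
        von_mises = np.sqrt(sx**2 - sx * sy + sy**2 + 3.0 * sxy**2)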

  18. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1993-01-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on the stochastic boundary element method (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that the initial defect is a critical parameter.
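
    The perturbation technique at the heart of PFEM can be reduced to a one-variable sketch: expand the response about the mean of a random parameter and push its variance through the first-order sensitivity. The spring model, numbers, and limit state below are hypothetical.

        import math

        # Response u(k) = F / k with random stiffness k ~ (mean_k, var_k).
        F, mean_k, var_k = 10.0, 2.0, 0.04

        u_mean = F / mean_k                  # zeroth-order (mean) response
        du_dk = -F / mean_k**2               # first-order sensitivity at the mean
        u_var = du_dk**2 * var_k             # first-order second-moment variance

        # Reliability index for the limit state u < u_allow, in the spirit
        # of the probability-of-failure results quoted above.
        u_allow = 6.0
        beta = (u_allow - u_mean) / math.sqrt(u_var)   # here beta = 2.0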

  19. Mobile genetic elements: in silico, in vitro, in vivo.

    PubMed

    Arkhipova, Irina R; Rice, Phoebe A

    2016-03-01

    Mobile genetic elements (MGEs), also called transposable elements (TEs), represent universal components of most genomes and are intimately involved in nearly all aspects of genome organization, function and evolution. However, there is currently a gap between the fast pace of TE discovery in silico, driven by the exponential growth of comparative genomic studies, and a limited number of experimental models amenable to more traditional in vitro and in vivo studies of structural, mechanistic and regulatory properties of diverse MGEs. Experimental and computational scientists came together to bridge this gap at a recent conference, 'Mobile Genetic Elements: in silico, in vitro, in vivo', held at the Marine Biological Laboratory (MBL) in Woods Hole, MA, USA.

  1. Effect of attachment types and number of implants supporting mandibular overdentures on stress distribution: a computed tomography-based 3D finite element analysis.

    PubMed

    Arat Bilhan, Selda; Baykasoglu, Cengiz; Bilhan, Hakan; Kutay, Omer; Mugan, Ata

    2015-01-01

    The objective of this study was to calculate stresses in bone tissue surrounding uncoupled and splinted implants that are induced by a bite force applied to the mandible, and to determine whether the number of implants supporting a mandibular overdenture influences the stress distribution in mandibular bone. A human adult edentulous mandible retrieved from a formalin-fixed cadaver was used to define the geometry of the finite element (FE) model, and the FE model was verified with experimental measurements. Following the FE model validation, three different biting situations were simulated for 2-, 3- and 4-implant retentive anchor and bar attachment overdentures under a vertical load of 100 N. As a result of the analyses, it was concluded that an increase in implant number and a splinted attachment type tended to produce lower stresses, and that the use of two single attachments seems to be a safe and sufficient solution for the treatment of mandibular edentulism with overdentures.

  2. Directivity and spacing for the antenna elements

    NASA Technical Reports Server (NTRS)

    Koshy, V. K.

    1983-01-01

    The optimum design choice for the MST radar antenna is considered, taking into account the following factors: directivity and gain; beam width and its symmetry; sidelobe levels, near and wide angle; impedance matching; feeder network losses; polarization diversity; steerability; cost effectiveness; and maintainability. The directivity and related beam-forming aspects of various antenna elements, and the directivity aspects when such elements are formed into an array, are discussed. Array performance as a function of important variables, in particular the spacing of the elements, is considered.

  3. Trace element emissions

    SciTech Connect

    Benson, S.A.; Erickson, T.A.; Steadman, E.N.; Zygarlicke, C.J.; Hauserman, W.B.; Hassett, D.J.

    1994-10-01

    The Energy & Environmental Research Center (EERC) is carrying out an investigation that will provide methods to predict the fate of selected trace elements in integrated gasification combined cycle (IGCC) and integrated gasification fuel cell (IGFC) systems to aid in the development of methods to control the emission of trace elements determined to be air toxics. The goal of this project is to identify the effects of critical chemical and physical transformations associated with trace element behavior in IGCC and IGFC systems. The trace elements included in this project are arsenic, chromium, cadmium, mercury, nickel, selenium, and lead. The research seeks to identify and fill, experimentally and/or theoretically, data gaps that currently exist on the fate and composition of trace elements. The specific objectives are to (1) review the existing literature to identify the type and quantity of trace elements from coal gasification systems, (2) perform laboratory-scale experimentation and computer modeling to enable prediction of trace element emissions, and (3) identify methods to control trace element emissions.

  4. Product Aspect Clustering by Incorporating Background Knowledge for Opinion Mining

    PubMed Central

    Chen, Yiheng; Zhao, Yanyan; Qin, Bing; Liu, Ting

    2016-01-01

    Product aspect recognition is a key task in fine-grained opinion mining. Current methods primarily focus on the extraction of aspects from the product reviews. However, it is also important to cluster synonymous extracted aspects into the same category. In this paper, we focus on the problem of product aspect clustering. The primary challenge is to properly cluster and generalize aspects that have similar meanings but different representations. To address this problem, we learn two types of background knowledge for each extracted aspect based on two types of effective aspect relations: relevant aspect relations and irrelevant aspect relations, which describe two different types of relationships between two aspects. Based on these two types of relationships, we can assign many relevant and irrelevant aspects into two different sets as the background knowledge to describe each product aspect. To obtain abundant background knowledge for each product aspect, we can enrich the available information with background knowledge from the Web. Then, we design a hierarchical clustering algorithm to cluster these aspects into different groups, in which aspect similarity is computed using the relevant and irrelevant aspect sets for each product aspect. Experimental results obtained in both camera and mobile phone domains demonstrate that the proposed product aspect clustering method based on two types of background knowledge performs better than the baseline approach without the use of background knowledge. Moreover, the experimental results also indicate that expanding the available background knowledge using the Web is feasible. PMID:27561001
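
    A condensed sketch of the clustering step: each aspect carries a relevant set and an irrelevant set as background knowledge, similarity rewards overlap of the former and penalizes overlap of the latter, and single-link agglomeration merges the closest groups. The scoring and threshold are inventions for illustration, not the paper's exact algorithm.

        def jacc(s, t):
            return len(s & t) / len(s | t) if s | t else 0.0

        def similarity(a, b, rel, irr):
            # Reward shared relevant aspects, penalize shared irrelevant ones.
            return jacc(rel[a], rel[b]) - jacc(irr[a], irr[b])

        def cluster(aspects, rel, irr, threshold=0.3):
            clusters = [{a} for a in aspects]
            while True:
                best = None
                for i in range(len(clusters)):
                    for j in range(i + 1, len(clusters)):
                        s = max(similarity(a, b, rel, irr)   # single link
                                for a in clusters[i] for b in clusters[j])
                        if s >= threshold and (best is None or s > best[0]):
                            best = (s, i, j)
                if best is None:
                    return clusters
                _, i, j = best
                clusters[i] |= clusters.pop(j)

        rel = {"price": {"cost", "value"}, "cost": {"price", "value"}, "screen": {"display"}}
        irr = {"price": {"screen"}, "cost": {"display"}, "screen": {"price"}}
        print(cluster(["price", "cost", "screen"], rel, irr))  # [{'price', 'cost'}, {'screen'}]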

  5. Progressive Damage Analysis of Laminated Composite (PDALC) (A Computational Model Implemented in the NASA COMET Finite Element Code). 2.0

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.; Lo, David C.; Allen, David H.

    1998-01-01

    A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged damage variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete listing of all testbed processors and inputs that are required for this analysis are included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occurs during the loading history. Residual strength predictions made with this information compared favorably with experimental measurements.

  6. Cortical Neural Computation by Discrete Results Hypothesis

    PubMed Central

    Castejon, Carlos; Nuñez, Angel

    2016-01-01

    One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” is the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast

  7. In silico selection of an aptamer to estrogen receptor alpha using computational docking employing estrogen response elements as aptamer-alike molecules

    PubMed Central

    Ahirwar, Rajesh; Nahar, Smita; Aggarwal, Shikha; Ramachandran, Srinivasan; Maiti, Souvik; Nahar, Pradip

    2016-01-01

    Aptamers, the chemical-antibody substitute for conventional antibodies, are primarily discovered through SELEX technology involving multi-round selections and enrichment. Circumventing conventional methodology, here we report an in silico selection of aptamers to estrogen receptor alpha (ERα) using RNA analogs of human estrogen response elements (EREs). The inverted repeat nature of ERE and the ability to form stable hairpins were used as criteria to obtain aptamer-alike sequences. Near-native RNA analogs of selected single-stranded EREs were modelled and their likelihood to emerge as ERα aptamers was examined using AutoDock Vina, HADDOCK and PatchDock docking. These in silico predictions were validated by measuring the thermodynamic parameters of ERα-RNA interactions using isothermal titration calorimetry. Based on the in silico and in vitro results, we selected a candidate RNA (ERaptR4; 5′-GGGGUCAAGGUGACCCC-3′) having a binding constant (Ka) of 1.02 ± 0.1 × 10^8 M^-1 as an ERα-aptamer. Target-specificity of the selected ERaptR4 aptamer was confirmed through cytochemistry and solid-phase immunoassays. Furthermore, stability analyses identified ERaptR4 as resistant to serum and RNase A degradation in the presence of ERα. Taken together, an efficient ERα-RNA aptamer is identified using a non-SELEX procedure of aptamer selection. Its high affinity and specificity can be utilized in the detection of ERα in breast cancer and related diseases. PMID:26899418

  8. Approaches to high aspect ratio triangulations

    NASA Technical Reports Server (NTRS)

    Posenau, M.-A.

    1993-01-01

    In aerospace computational fluid dynamics calculations, high aspect ratio, or stretched, triangulations are necessary to adequately resolve the features of a viscous flow around bodies. In this paper, we explore alternatives to the Delaunay triangulation which can be used to generate high aspect ratio triangulations of point sets. The method is based on a variation of the lifting map concept which derives Delaunay triangulations from convex hull calculations.
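
    The lifting-map construction can be sketched directly with a convex-hull routine: lift the points onto a paraboloid, keep the downward-facing hull facets, and read off a triangulation. Pre-stretching the lift coordinate, as done hypothetically below, is one way to bias the result away from the Delaunay triangulation toward stretched triangles.

        import numpy as np
        from scipy.spatial import ConvexHull

        def lifted_triangulation(pts, stretch=(1.0, 1.0)):
            # Scale coordinates before lifting; stretch != (1, 1) changes which
            # triangles appear, mimicking non-Delaunay variants of the lifting map.
            q = pts * np.asarray(stretch)
            lifted = np.c_[pts, (q ** 2).sum(axis=1)]   # lift onto a paraboloid
            hull = ConvexHull(lifted)
            # Lower-hull facets (outward normal pointing down) form the triangulation.
            return [tri for tri, eq in zip(hull.simplices, hull.equations) if eq[2] < 0]

        pts = np.random.rand(30, 2)
        tris = lifted_triangulation(pts)                # stretch (1, 1) -> Delaunay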

  9. Elemental health

    SciTech Connect

    Tonneson, L.C.

    1997-01-01

    Trace elements used in nutritional supplements and vitamins are discussed in the article. Relevant studies are briefly cited regarding the health effects of selenium, chromium, germanium, silicon, zinc, magnesium, silver, manganese, ruthenium, lithium, and vanadium. The toxicity and food sources are listed for some of the elements. A brief summary is also provided of the nutritional supplements market.

  10. Finite element analysis of an inflatable torus considering air mass structural element

    NASA Astrophysics Data System (ADS)

    Gajbhiye, S. C.; Upadhyay, S. H.; Harsha, S. P.

    2014-01-01

    Inflatable structures, also known as gossamer structures, are receiving great attention in current space technology because of their low mass and compact size compared with traditional spacecraft designs. Internal pressure becomes the major source of strength and rigidity, essentially stiffening the structure. However, inflatable space-based membrane structures are highly susceptible to vibration disturbances because of their low structural stiffness and material damping. Hence, the vibration modes of the structure should be known to a high degree of accuracy in order to provide better control authority. In the past, most studies of the vibration of gossamer structures used inaccurate or approximate theories to model the internal pressure. The toroidal structure is a key element in space applications, for example in supporting reflectors. This paper discusses the finite element analysis of an inflated torus. The eigenfrequencies are obtained via three-dimensional small-strain elasticity theory, based on an extremum energy principle. Two finite element models (model-1 and model-2) were generated with a commercial finite element package: model-1 uses shell elements only, while model-2 adds the mass of the enclosed fluid (air) to the shell elements. Model-1 is compared with the present analytical approach to assess the convergence rate and accuracy. The convergence study covers the symmetric and antisymmetric modes about the centroidal-axis plane and reproduces the eigenfrequencies of an inflatable torus with a circular cross section. For model-2, which includes the air mass element, the eigenfrequencies for different aspect ratios and the mode shape response under in-plane and out-of-plane loading conditions are studied.

  11. Further finite element analyses of fully developed laminar flow of power-law non-Newtonian fluid in rectangular ducts: Heat transfer predictions

    SciTech Connect

    Syrjaelae, S.

    1996-10-01

    Forced convection heat transfer to hydrodynamically and thermally fully developed laminar flow of power-law non-Newtonian fluid in rectangular ducts has been studied for the H1 and T thermal boundary conditions. The solutions for the velocity and temperature fields were obtained numerically using the finite element method with quartic triangular elements. From these solutions, very accurate Nusselt number values were determined. Computations were performed over a range of power-law indices and duct aspect ratios.

  12. How to determine spiral bevel gear tooth geometry for finite element analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Litvin, Faydor L.

    1991-01-01

    An analytical method was developed to determine gear tooth surface coordinates of face milled spiral bevel gears. The method combines the basic gear design parameters with the kinematical aspects of spiral bevel gear manufacturing. A computer program was developed to calculate the surface coordinates. From these data, a 3-D model for finite element analysis can be determined. Development of the modeling method and an example case are presented.

  13. Extreme Low Aspect Ratio Stellarators

    NASA Astrophysics Data System (ADS)

    Moroz, Paul

    1997-11-01

    The recently proposed Spherical Stellarator (SS) concept [1] includes devices with stellarator features and low aspect ratio, A <= 3.5, which is very unusual for stellarators (typical stellarators have A ≈ 7-10 or above). Strong bootstrap current and high-β equilibria are two distinguishing elements of the SS concept, leading to a compact, steady-state, and efficient fusion reactor. Different coil configurations advantageous for the SS have been identified and analyzed [1-6]. In this report, we present results on novel stellarator configurations which are unusual even for the SS approach. These are extreme low aspect ratio stellarators (ELARS), with aspect ratio A ≈ 1. We succeeded in finding ELARS configurations with an extremely compact, modular, and simple design compatible with a significant rotational transform (ι ≈ 0.1 - 0.15), large plasma volume, and good particle transport characteristics. [1] P.E. Moroz, Phys. Rev. Lett. 77, 651 (1996); [2] P.E. Moroz, Phys. Plasmas 3, 3055 (1996); [3] P.E. Moroz, D.B. Batchelor et al., Fusion Tech. 30, 1347 (1996); [4] P.E. Moroz, Stellarator News 48, 2 (1996); [5] P.E. Moroz, Plasma Phys. Reports 23, 502 (1997); [6] P.E. Moroz, Nucl. Fusion 37, No. 8 (1997). *Supported by DOE Grant No. DE-FG02-97ER54395.

  14. Psychosomatic Aspects of Cancer: An Overview.

    ERIC Educational Resources Information Center

    Murray, John B.

    1980-01-01

    It is suggested in this literature review on the psychosomatic aspects of cancer that psychoanalytic interpretations which focused on intrapsychic elements have given way to considerations of rehabilitation and assistance with the complex emotional reactions of patients and their families to terminal illness and death. (Author/DB)

  15. Higher-order adaptive finite-element methods for orbital-free density functional theory

    SciTech Connect

    Motamarri, Phani; Iyer, Mrinal; Knap, Jaroslaw; Gavini, Vikram

    2012-08-15

    In the present work, we study various numerical aspects of higher-order finite-element discretizations of the non-linear saddle-point formulation of orbital-free density-functional theory. We first investigate the robustness of viable solution schemes by analyzing the solvability conditions of the discrete problem. We find that a staggered solution procedure where the potential fields are computed consistently for every trial electron-density is a robust solution procedure for higher-order finite-element discretizations. We next study the convergence properties of higher-order finite-element discretizations of orbital-free density functional theory by considering benchmark problems that include calculations involving both pseudopotential as well as Coulomb singular potential fields. Our numerical studies suggest close to optimal rates of convergence on all benchmark problems for various orders of finite-element approximations considered in the present study. We finally investigate the computational efficiency afforded by various higher-order finite-element discretizations, which constitutes the main aspect of the present work, by measuring the CPU time for the solution of discrete equations on benchmark problems that include large Aluminum clusters. In these studies, we use mesh coarse-graining rates that are derived from error estimates and an a priori knowledge of the asymptotic solution of the far-field electronic fields. Our studies reveal a significant 100-1000 fold computational savings afforded by the use of higher-order finite-element discretization, alongside providing the desired chemical accuracy. We consider this study as a step towards developing a robust and computationally efficient discretization of electronic structure calculations using the finite-element basis.

  16. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  17. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  18. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  19. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  1. 49 CFR 236.526 - Roadway element not functioning properly.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Roadway element not functioning properly. 236.526... element not functioning properly. When a roadway element except track circuit of automatic train stop... roadway element shall be caused manually to display its most restrictive aspect until such element...

  2. Elements of oil-tanker transportation

    SciTech Connect

    Marks, A.

    1982-01-01

    Historical, economic, and statistical aspects of oil-tanker transportation are discussed. In addition, applied oil-tanker technology using a Hewlett-Packard 67 calculator is detailed. HP-67 programs are given, in addition to the theoretical formulas, references, and examples needed to solve the equations using any calculator. The contents include: berthing energy computation; Poisson distribution computation for estimating berth requirements; ship collision probability computation; spill risk analysis; oil spill movement computation; tanker characteristic computations; and ASTM measurement computations. (JMT)
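
    One of the listed computations, the Poisson estimate of berth requirements, fits in a few lines on any machine; the arrival rate and one-day turnaround below are example assumptions, not figures from the article.

        from math import exp, factorial

        def poisson_pmf(k, lam):
            return lam ** k * exp(-lam) / factorial(k)

        # Example: tankers arrive at 2.4 per day on average and a berth turns a
        # ship around in about a day, so daily demand is roughly Poisson(2.4).
        lam = 2.4
        for berths in range(1, 8):
            p_exceed = 1.0 - sum(poisson_pmf(k, lam) for k in range(berths + 1))
            print(f"{berths} berths: P(demand exceeds capacity) = {p_exceed:.3f}")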

  3. Computing and Digital Media: A Subject-Based Aspect Report by Education Scotland on Provision in Scotland's Colleges on Behalf of the Scottish Funding Council. Transforming Lives through Learning

    ERIC Educational Resources Information Center

    Education Scotland, 2014

    2014-01-01

    This report evaluates college programmes which deliver education and training in computer and digital media technology, rather than in computer usage. The report evaluates current practice and identifies important areas for further development amongst practitioners. It provides case studies of effective practice and sets out recommendations for…

  4. Finite element shell instability analysis

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Formulation procedures and the associated computer program for finite element thin shell instability analysis are discussed. Data cover: (1) the formulation of basic element relationships, (2) the construction of solution algorithms on both the conceptual and algorithmic levels, and (3) the numerical analyses conducted to verify the accuracy and efficiency of the theory and the related programs.

  5. CATIA - A computer aided design and manufacturing tridimensional system

    NASA Astrophysics Data System (ADS)

    Bernard, F.

    A proprietary computer graphics-aided, three-dimensional interactive application (CATIA) design system is described. CATIA employs approximately 100 graphics displays, which are used by some 500 persons engaged in the definition of aircraft structures, structural strength analyses, the kinematic analysis of mobile elements, aerodynamic calculations, the choice of tooling in the machining of aircraft elements, and the programming of robotics. CATIA covers these diverse fields with a single data base. After a description of salient aspects of the system's hardware and software, graphics examples are given of the definition of curves, surfaces, complex volumes, and analytical tasks.

  6. Superheavy Elements

    ERIC Educational Resources Information Center

    Tsang, Chin Fu

    1975-01-01

    Discusses the possibility of creating elements with an atomic number of around 114. Describes the underlying physics responsible for the limited extent of the periodic table and enumerates problems that must be overcome in creating a superheavy nucleus. (GS)

  7. Elemental Education.

    ERIC Educational Resources Information Center

    Daniel, Esther Gnanamalar Sarojini; Saat, Rohaida Mohd.

    2001-01-01

    Introduces a learning module integrating three disciplines--physics, chemistry, and biology--and based on four elements: carbon, oxygen, hydrogen, and silicon. Includes atomic model and silicon-based life activities. (YDS)

  8. Aspects of Menu Design.

    ERIC Educational Resources Information Center

    Clark, Donald

    1986-01-01

    Explores the pros and cons of various computer menu layouts to be used with computer-assisted learning media. The importance of designing a screen menu that takes into consideration the student's learning style is emphasized. (Author/LRW)

  9. Computer surety: computer system inspection guidance. [Contains glossary]

    SciTech Connect

    Not Available

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  10. CBV_ASPECTS Improvement over CT_ASPECTS on Determining Irreversible Ischemic Lesion Decreases over Time

    PubMed Central

    Padroni, Marina; Boned, Sandra; Ribó, Marc; Muchada, Marian; Rodriguez-Luna, David; Coscojuela, Pilar; Tomasello, Alejandro; Cabero, Jordi; Pagola, Jorge; Rodriguez-Villatoro, Noelia; Juega, Jesus M.; Sanjuan, Estela; Molina, Carlos A.; Rubiera, Marta

    2016-01-01

    The Alberta Stroke Program Early CT Score (ASPECTS) is a useful scoring system for assessing early ischemic signs on noncontrast computed tomography (CT). Cerebral blood volume (CBV) on CT perfusion defines the core lesion assumed to be irreversibly damaged. We aim to explore the advantages of CBV_ASPECTS over CT_ASPECTS in the prediction of final infarct volume according to time. Methods: Consecutive patients with anterior circulation stroke who underwent endovascular reperfusion according to initial CT_ASPECTS ≥7 were studied. CBV_ASPECTS was assessed blindly later on. Recanalization was defined as a thrombolysis in cerebral ischemia score of 2b-3. Final infarct volumes were measured on follow-up imaging. We compared ASPECTS on CBV and CT images and defined ASPECTS agreement as CT_ASPECTS - CBV_ASPECTS ≤1. Results: Sixty-five patients, with a mean age of 67 ± 14 years and a median National Institutes of Health Stroke Scale score of 16 (range 10–20), were studied. The recanalization rate was 78.5%. The median CT_ASPECTS was 9 (range 8–10), and the CBV_ASPECTS was 8 (range 8–10). The mean time from symptoms to CT was 219 ± 143 min. Fifty patients (76.9%) showed ASPECTS agreement. The ASPECTS difference was inversely correlated with the time from symptoms to CT (r = −0.36, p < 0.01). A ROC curve defined 120 min as the best cutoff point after which the ASPECTS difference becomes more frequently ≤1. After 120 min, 89.5% of the patients showed ASPECTS agreement (as compared with 37.5% for <120 min, p < 0.01). CBV_ASPECTS, but not CT_ASPECTS, correlated with the final infarct (r = −0.33, p < 0.01). However, if CT was done >2 h after symptom onset, CT_ASPECTS also correlated with the final infarct (r = −0.39, p = 0.01). Conclusions: In acute stroke, CBV_ASPECTS correlates with the final infarct volume. However, when CT is performed after 120 min from symptom onset, CBV_ASPECTS does not add relevant information to CT_ASPECTS. PMID:27781042

  11. A deflation based parallel algorithm for spectral element solution of the incompressible Navier-Stokes equations

    SciTech Connect

    Fischer, P.F.

    1996-12-31

    Efficient solution of the Navier-Stokes equations in complex domains is dependent upon the availability of fast solvers for sparse linear systems. For unsteady incompressible flows, the pressure operator is the leading contributor to stiffness, as the characteristic propagation speed is infinite. In the context of operator splitting formulations, it is the pressure solve which is the most computationally challenging, despite its elliptic origins. We seek to improve existing spectral element iterative methods for the pressure solve in order to overcome the slow convergence frequently observed in the presence of highly refined grids or high-aspect ratio elements.
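
    Deflation in this context means projecting a coarse space out of the conjugate gradient iteration so that the slow (low-frequency) modes are handled by a small direct solve. A compact sketch for a symmetric positive-definite system follows; the coarse basis W, one constant vector per subdomain here, is a hypothetical stand-in for the spectral element setting.

        import numpy as np

        def deflated_cg(A, b, W, tol=1e-10, max_iter=500):
            AW = A @ W
            E = W.T @ AW                                  # coarse operator W^T A W
            coarse = lambda v: W @ np.linalg.solve(E, W.T @ v)
            P = lambda v: v - AW @ np.linalg.solve(E, W.T @ v)  # deflation projector

            x = coarse(b)                                 # coarse part solved directly
            r = P(b - A @ x)
            p = r.copy()
            for _ in range(max_iter):
                if np.linalg.norm(r) < tol:
                    break
                Ap = P(A @ p)
                alpha = (r @ r) / (p @ Ap)
                x = x + alpha * p
                r_new = r - alpha * Ap
                beta = (r_new @ r_new) / (r @ r)
                p = r_new + beta * p
                r = r_new
            return x + coarse(b - A @ x)                  # re-add coarse residual part

        # Example: 1D Laplacian with two-subdomain constant coarse vectors.
        n = 8
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        W = np.zeros((n, 2)); W[:n // 2, 0] = 1.0; W[n // 2:, 1] = 1.0
        x = deflated_cg(A, np.ones(n), W)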

  12. Synthetic Cell Elements from Block Copolymers. Dynamic Aspects

    NASA Astrophysics Data System (ADS)

    Discher, Dennis

    2003-03-01

    Amphiphilic block copolymers can self-assemble in water into various stable morphologies which resemble key cell structures, notably filaments and membranes. Filamentous worms of copolymer, microns-long, will be introduced, and related dynamics of copolymer vesicle polymersomes will be detailed. Fluorescence visualization of single worms stretched under flow demonstrates their stability as well as a means to control flexibility. Polymersome membranes have been more thoroughly studied, especially copolymer molecular weight effects. We summarize results suggestive of a transition from Rouse-like behavior to entangled chains. Viewed together, the results ask the question: what physics are needed next to mimic cell activities such as crawling?

  13. Coping with Computing Success.

    ERIC Educational Resources Information Center

    Breslin, Richard D.

    Elements of computing success of Iona College, the challenges it currently faces, and the strategies conceived to cope with future computing needs are discussed. The college has mandated computer literacy for students and offers nine degrees in the computerized information system/management information system areas. Since planning is needed in…

  14. Aspects of Language

    ERIC Educational Resources Information Center

    Ullmann, Stephen

    1974-01-01

    Several aspects of language--code, relation of structure to meaning, creativity, capacity to influence thought--are discussed, as well as reasons for including foreign language study in school and university. (RM)

  15. Instructional Aspects of Intelligent Tutoring Systems.

    ERIC Educational Resources Information Center

    Pieters, Jules M., Ed.

    This collection contains three papers addressing the instructional aspects of intelligent tutoring systems (ITS): (1) "Some Experiences with Two Intelligent Tutoring Systems for Teaching Computer Programming: Proust and the LISP-Tutor" (van den Berg, Merrienboer, and Maaswinkel); (2) "Some Issues on the Construction of Cooperative ITS" (Kanselaar,…

  16. Key aspects of coronal heating

    NASA Astrophysics Data System (ADS)

    Klimchuk, James A.

    2015-04-01

    We highlight 10 key aspects of coronal heating that must be understood before we can consider the problem to be solved. (1) All coronal heating is impulsive. (2) The details of coronal heating matter. (3) The corona is filled with elemental magnetic strands. (4) The corona is densely populated with current sheets. (5) The strands must reconnect to prevent an infinite build-up of stress. (6) Nanoflares repeat with different frequencies. (7) What is the characteristic magnitude of energy release? (8) What causes the collective behaviour responsible for loops? (9) What are the onset conditions for energy release? (10) Chromospheric nanoflares are not a primary source of coronal plasma. Significant progress in solving the coronal heating problem will require coordination of approaches: observational studies, field-aligned hydrodynamic simulations, large-scale and localized three-dimensional magnetohydrodynamic simulations, and possibly also kinetic simulations. There is a unique value to each of these approaches, and the community must strive to coordinate better.

  17. A method for determining spiral-bevel gear tooth geometry for finite element analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Litvin, Faydor L.

    1991-01-01

    An analytical method was developed to determine gear tooth surface coordinates of face-milled spiral bevel gears. The method uses the basic gear design parameters in conjunction with the kinematical aspects of spiral bevel gear manufacturing machinery. A computer program, SURFACE, was developed. The computer program calculates the surface coordinates and outputs 3-D model data that can be used for finite element analysis. Development of the modeling method and an example case are presented. This analysis method could also find application for gear inspection and near-net-shape gear forging die design.

  18. Computer-aided design and computer science technology

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Voigt, S. J.

    1976-01-01

    A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.

  19. FUEL ELEMENT

    DOEpatents

    Bean, R.W.

    1963-11-19

    A ceramic fuel element for a nuclear reactor that has improved structural stability as well as improved cooling and fission product retention characteristics is presented. The fuel element includes a plurality of stacked hollow ceramic moderator blocks arranged along a tubular metallic shroud that encloses a series of axially apertured moderator cylinders spaced inwardly of the shroud. A plurality of ceramic nuclear fuel rods are arranged in the annular space between the shroud and cylinders of moderator, and appropriate support means and means for directing gas coolant through the annular space are also provided. (AEC)