Science.gov

Sample records for element computational aspects

  1. On current aspects of finite element computational fluid mechanics for turbulent flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1982-01-01

    A set of nonlinear partial differential equations suitable for the description of a class of turbulent three-dimensional flow fields in select geometries is identified. On the basis of the concept of enforcing a penalty constraint to ensure accurate accounting of ordering effects, a finite element numerical solution algorithm is established for the equation set and the theoretical aspects of accuracy, convergence and stability are identified and quantized. Hypermatrix constructions are used to formulate the reduction of the computational aspects of the theory to practice. The robustness of the algorithm, and the computer program embodiment, have been verified for pertinent flow configurations.

  2. Computational aspects of heat transfer in structures via transfinite element formulations

    NASA Technical Reports Server (NTRS)

    Tamma, K. K.; Railkar, S.

    1986-01-01

    The paper presents a generalized Transform Method based Finite Element methodology for thermal analysis with emphasis on the computational aspects of heat transfer in structures. The purpose of this paper is to present an alternate methodology for thermal analysis of structures and therein outline the advantages of the approach in comparison with conventional finite element schemes and existing practices. The overall goals of the research, however, are aimed first toward enhanced thermal formulations and therein to provide avenues for subsequent interdisciplinary thermal/structural analysis via a common numerical methodology. Basic concepts of the approach for thermal analysis is described with emphasis on a Laplace Transform based finite element methodology. Highlights and characteristic features of the approach are described via generalized formulations and applications to several problems. Results obtained demonstrate excellent agreement in comparison with analytic and/or conventional finite element solutions with savings in computational times and model sizes. Potential of the approach for interdisciplinary thermal/structural problems are also identified.

  3. Computational Aspects of the h, p and h-p Versions of the Finite Element Method.

    DTIC Science & Technology

    1987-03-01

    and T. Scapolla BN-1 061 March 1987 %" iqAppr,- - " . ..utr""o":" S Appr..: ’ ...... tor pilbhc j0 ,oe "IDI .. ,%~~~~1 %-... z u n Unliz~ited INSTITUTE ...PROGRAM ELEMENT. PROJECT. TASK AREA & WORK UNIT NUMIERS Institute for Physical Science and Technology University of Maryland College Park, MD 20742 it...the h, p and h-p versions of the finite element method Ivo Babulka 1" Institute for Physical Science and Technology University of Maryland, College

  4. Terminological aspects of data elements

    SciTech Connect

    Strehlow, R.A. ); Kenworthey, W.H. Jr. ); Schuldt, R.E. )

    1991-01-01

    The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts and is followed by the specification of other needed data element concept attributes. The attributes and the values of data element concept remain associated with them from their birth as a concept to a generic data element that serves as a template for final application. Terminology is, therefore, centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database, and consequently, it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schematas showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.

  5. Finite element computational fluid mechanics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1983-01-01

    Finite element analysis as applied to the broad spectrum of computational fluid mechanics is analyzed. The finite element solution methodology is derived, developed, and applied directly to the differential equation systems governing classes of problems in fluid mechanics. The heat conduction equation is used to reveal the essence and elegance of finite element theory, including higher order accuracy and convergence. The algorithm is extended to the pervasive nonlinearity of the Navier-Stokes equations. A specific fluid mechanics problem class is analyzed with an even mix of theory and applications, including turbulence closure and the solution of turbulent flows.

  6. Element-topology-independent preconditioners for parallel finite element computations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alexander, Scott

    1992-01-01

    A family of preconditioners for the solution of finite element equations are presented, which are element-topology independent and thus can be applicable to element order-free parallel computations. A key feature of the present preconditioners is the repeated use of element connectivity matrices and their left and right inverses. The properties and performance of the present preconditioners are demonstrated via beam and two-dimensional finite element matrices for implicit time integration computations.

  7. Conceptual aspects of geometric quantum computation

    NASA Astrophysics Data System (ADS)

    Sjöqvist, Erik; Azimi Mousolou, Vahid; Canali, Carlo M.

    2016-10-01

    Geometric quantum computation is the idea that geometric phases can be used to implement quantum gates, i.e., the basic elements of the Boolean network that forms a quantum computer. Although originally thought to be limited to adiabatic evolution, controlled by slowly changing parameters, this form of quantum computation can as well be realized at high speed by using nonadiabatic schemes. Recent advances in quantum gate technology have allowed for experimental demonstrations of different types of geometric gates in adiabatic and nonadiabatic evolution. Here, we address some conceptual issues that arise in the realizations of geometric gates. We examine the appearance of dynamical phases in quantum evolution and point out that not all dynamical phases need to be compensated for in geometric quantum computation. We delineate the relation between Abelian and non-Abelian geometric gates and find an explicit physical example where the two types of gates coincide. We identify differences and similarities between adiabatic and nonadiabatic realizations of quantum computation based on non-Abelian geometric phases.

  8. Algebraic aspects of the computably enumerable degrees.

    PubMed Central

    Slaman, T A; Soare, R I

    1995-01-01

    A set A of nonnegative integers is computably enumerable (c.e.), also called recursively enumerable (r.e.), if there is a computable method to list its elements. The class of sets B which contain the same information as A under Turing computability (elements, whether every embedding of P into can be extended to an embedding of Q into R. Many of the most significant theorems giving an algebraic insight into R have asserted either extension or nonextension of embeddings. We extend and unify these results and their proofs to produce complete and complementary criteria and techniques to analyze instances of extension and nonextension. We conclude that the full extension of embedding problem is decidable. PMID:11607508

  9. Computer Security: The Human Element.

    ERIC Educational Resources Information Center

    Guynes, Carl S.; Vanacek, Michael T.

    1981-01-01

    The security and effectiveness of a computer system are dependent on the personnel involved. Improved personnel and organizational procedures can significantly reduce the potential for computer fraud. (Author/MLF)

  10. Computational and Practical Aspects of Drug Repositioning.

    PubMed

    Oprea, Tudor I; Overington, John P

    2015-01-01

    The concept of the hypothesis-driven or observational-based expansion of the therapeutic application of drugs is very seductive. This is due to a number of factors, such as lower cost of development, higher probability of success, near-term clinical potential, patient and societal benefit, and also the ability to apply the approach to rare, orphan, and underresearched diseases. Another highly attractive aspect is that the "barrier to entry" is low, at least in comparison to a full drug discovery operation. The availability of high-performance computing, and databases of various forms have also enhanced the ability to pose reasonable and testable hypotheses for drug repurposing, rescue, and repositioning. In this article we discuss several factors that are currently underdeveloped, or could benefit from clearer definition in articles presenting such work. We propose a classification scheme-drug repositioning evidence level (DREL)-for all drug repositioning projects, according to the level of scientific evidence. DREL ranges from zero, which refers to predictions that lack any experimental support, to four, which refers to drugs approved for the new indication. We also present a set of simple concepts that can allow rapid and effective filtering of hypotheses, leading to a focus on those that are most likely to lead to practical safe applications of an existing drug. Some promising repurposing leads for malaria (DREL-1) and amoebic dysentery (DREL-2) are discussed.

  11. Dedicated breast computed tomography: Basic aspects

    SciTech Connect

    Sarno, Antonio; Mettivier, Giovanni Russo, Paolo

    2015-06-15

    X-ray mammography of the compressed breast is well recognized as the “gold standard” for early detection of breast cancer, but its performance is not ideal. One limitation of screening mammography is tissue superposition, particularly for dense breasts. Since 2001, several research groups in the USA and in the European Union have developed computed tomography (CT) systems with digital detector technology dedicated to x-ray imaging of the uncompressed breast (breast CT or BCT) for breast cancer screening and diagnosis. This CT technology—tracing back to initial studies in the 1970s—allows some of the limitations of mammography to be overcome, keeping the levels of radiation dose to the radiosensitive breast glandular tissue similar to that of two-view mammography for the same breast size and composition. This paper presents an evaluation of the research efforts carried out in the invention, development, and improvement of BCT with dedicated scanners with state-of-the-art technology, including initial steps toward commercialization, after more than a decade of R and D in the laboratory and/or in the clinic. The intended focus here is on the technological/engineering aspects of BCT and on outlining advantages and limitations as reported in the related literature. Prospects for future research in this field are discussed.

  12. Computational and Practical Aspects of Drug Repositioning

    PubMed Central

    Oprea, Tudor I.

    2015-01-01

    Abstract The concept of the hypothesis-driven or observational-based expansion of the therapeutic application of drugs is very seductive. This is due to a number of factors, such as lower cost of development, higher probability of success, near-term clinical potential, patient and societal benefit, and also the ability to apply the approach to rare, orphan, and underresearched diseases. Another highly attractive aspect is that the “barrier to entry” is low, at least in comparison to a full drug discovery operation. The availability of high-performance computing, and databases of various forms have also enhanced the ability to pose reasonable and testable hypotheses for drug repurposing, rescue, and repositioning. In this article we discuss several factors that are currently underdeveloped, or could benefit from clearer definition in articles presenting such work. We propose a classification scheme—drug repositioning evidence level (DREL)—for all drug repositioning projects, according to the level of scientific evidence. DREL ranges from zero, which refers to predictions that lack any experimental support, to four, which refers to drugs approved for the new indication. We also present a set of simple concepts that can allow rapid and effective filtering of hypotheses, leading to a focus on those that are most likely to lead to practical safe applications of an existing drug. Some promising repurposing leads for malaria (DREL-1) and amoebic dysentery (DREL-2) are discussed. PMID:26241209

  13. Impact of new computing systems on finite element computations

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Storassili, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  14. Some Aspects of Parallel Implementation of the Finite Element Method on Message Passing Architectures

    DTIC Science & Technology

    1988-05-01

    Method on Message Passing Architecturest I. Babuvka Department of Mathematics and 4 Institute for Physical Science and Technology S H. C. Elman Institute ...for Advanced Computer Studies and Department of Computer Science University of Maryland College Park, MD 20742 4, ABSTRACT We discuss some aspects of...ORGANIZATION NAME AND ADDRESS 10. PRO0GRAM ELEMENT. PROJECT. TASKC Depart. of Math and 21nst. for Advanced AE OKUI UBR Institute for Physical Science

  15. Computational Aspects of Heat Transfer in Structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M. (Compiler)

    1982-01-01

    Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.

  16. Computing aspects of power for multiple regression.

    PubMed

    Dunlap, William P; Xin, Xue; Myers, Leann

    2004-11-01

    Rules of thumb for power in multiple regression research abound. Most such rules dictate the necessary sample size, but they are based only upon the number of predictor variables, usually ignoring other critical factors necessary to compute power accurately. Other guides to power in multiple regression typically use approximate rather than precise equations for the underlying distribution; entail complex preparatory computations; require interpolation with tabular presentation formats; run only under software such as Mathmatica or SAS that may not be immediately available to the user; or are sold to the user as parts of power computation packages. In contrast, the program we offer herein is immediately downloadable at no charge, runs under Windows, is interactive, self-explanatory, flexible to fit the user's own regression problems, and is as accurate as single precision computation ordinarily permits.

  17. Security Aspects of Computer Supported Collaborative Work

    DTIC Science & Technology

    1993-09-01

    its enabling software. CSCW has been described by some as computer- based tools which can be used to facilitate the exchange and sharing of...information by work groups. Others have described it as a computer- based shared environment that supports two or more users. [Bock92] CSCW is a rapidly...Groupware applications according to the type of work they are designed 6 to accomplish. Based on this first criteria, they recognize four general classes

  18. Analytical and Computational Aspects of Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Bilevel problem formulations have received considerable attention as an approach to multidisciplinary optimization in engineering. We examine the analytical and computational properties of one such approach, collaborative optimization. The resulting system-level optimization problems suffer from inherent computational difficulties due to the bilevel nature of the method. Most notably, it is impossible to characterize and hence identify solutions of the system-level problems because the standard first-order conditions for solutions of constrained optimization problems do not hold. The analytical features of the system-level problem make it difficult to apply conventional nonlinear programming algorithms. Simple examples illustrate the analysis and the algorithmic consequences for optimization methods. We conclude with additional observations on the practical implications of the analytical and computational properties of collaborative optimization.

  19. Central control element expands computer capability

    NASA Technical Reports Server (NTRS)

    Easton, R. A.

    1975-01-01

    Redundant processing and multiprocessing modes can be obtained from one computer by using logic configuration. Configuration serves as central control element which can automatically alternate between high-capacity multiprocessing mode and high-reliability redundant mode using dynamic mode switching in real time.

  20. Parallel computation with the spectral element method

    SciTech Connect

    Ma, Hong

    1995-12-01

    Spectral element models for the shallow water equations and the Navier-Stokes equations have been successfully implemented on a data parallel supercomputer, the Connection Machine model CM-5. The nonstaggered grid formulations for both models are described, which are shown to be especially efficient in data parallel computing environment.

  1. Plane Smoothers for Multiblock Grids: Computational Aspects

    NASA Technical Reports Server (NTRS)

    Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane

    1999-01-01

    Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in the three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.

  2. Programmable computing with a single magnetoresistive element

    NASA Astrophysics Data System (ADS)

    Ney, A.; Pampuch, C.; Koch, R.; Ploog, K. H.

    2003-10-01

    The development of transistor-based integrated circuits for modern computing is a story of great success. However, the proved concept for enhancing computational power by continuous miniaturization is approaching its fundamental limits. Alternative approaches consider logic elements that are reconfigurable at run-time to overcome the rigid architecture of the present hardware systems. Implementation of parallel algorithms on such `chameleon' processors has the potential to yield a dramatic increase of computational speed, competitive with that of supercomputers. Owing to their functional flexibility, `chameleon' processors can be readily optimized with respect to any computer application. In conventional microprocessors, information must be transferred to a memory to prevent it from getting lost, because electrically processed information is volatile. Therefore the computational performance can be improved if the logic gate is additionally capable of storing the output. Here we describe a simple hardware concept for a programmable logic element that is based on a single magnetic random access memory (MRAM) cell. It combines the inherent advantage of a non-volatile output with flexible functionality which can be selected at run-time to operate as an AND, OR, NAND or NOR gate.

  3. Benchmarking: More Aspects of High Performance Computing

    SciTech Connect

    Ravindrudu, Rahul

    2004-01-01

    pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed from start. The out-of-core and thread based computation do not interact in this case, since I/O is not done by the threads. The performance of the thread based computation does not depend on I/O as the algorithms are in the BLAS algorithms which assumes all the data to be in memory. This is the reason the out-of-core results and OpenMP threads results were presented separately and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O involved for out-of-core part and better cache utilization for the thread based computation.

  4. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on, is that of the Instrument Element. In general it is used to Grid-enable instrumentation; and it can be seen as a neighbouring concept to that of the traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computational demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted to a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied from our Grid's middleware (gLite) due to the various latencies often occurred during the job submission and queuing phases. Moreover the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  5. Aerodynamic Properties of Rough Surfaces with High Aspect-Ratio Roughness Elements: Effect of Aspect Ratio and Arrangements

    NASA Astrophysics Data System (ADS)

    Sadique, Jasim; Yang, Xiang I. A.; Meneveau, Charles; Mittal, Rajat

    2016-12-01

    We examine the effect of varying roughness-element aspect ratio on the mean velocity distributions of turbulent flow over arrays of rectangular-prism-shaped elements. Large-eddy simulations (LES) in conjunction with a sharp-interface immersed boundary method are used to simulate spatially-growing turbulent boundary layers over these rough surfaces. Arrays of aligned and staggered rectangular roughness elements with aspect ratio >1 are considered. First the temporally- and spatially-averaged velocity profiles are used to illustrate the aspect-ratio effects. For aligned prisms, the roughness length (z_o ) and the friction velocity (u_* ) increase initially with an increase in the roughness-element aspect ratio, until the values reach a plateau at a particular aspect ratio. The exact value of this aspect ratio depends on the coverage density. Further increase in the aspect ratio changes neither z_o , u_* nor the bulk flow above the roughness elements. For the staggered cases, z_o and u_* continue to increase for the surface coverage density and the aspect ratios investigated. To model the flow response to variations in roughness aspect ratio, we turn to a previously developed phenomenological volumetric sheltering model (Yang et al., in J Fluid Mech 789:127-165, 2016), which was intended for low to moderate aspect-ratio roughness elements. Here, we extend this model to account for high aspect-ratio roughness elements. We find that for aligned cases, the model predicts strong mutual sheltering among the roughness elements, while the effect is much weaker for staggered cases. The model-predicted z_o and u_* agree well with the LES results. Results show that the model, which takes explicit account of the mutual sheltering effects, provides a rapid and reliable prediction method of roughness effects in turbulent boundary-layer flows over arrays of rectangular-prism roughness elements.

  6. Some Aspects of Mathematics and Computer Science in Japan,

    DTIC Science & Technology

    Japan. In fact, he learned about a rather wide variety of research in various aspects of applied mathematics and computer science . The readers...Mathematics . Those interested in computer science and applications software will be most interested in the work at Fujitsu Limited and the work at the

  7. The case for biological quantum computer elements

    NASA Astrophysics Data System (ADS)

    Baer, Wolfgang; Pizzi, Rita

    2009-05-01

    An extension to vonNeumann's analysis of quantum theory suggests self-measurement is a fundamental process of Nature. By mapping the quantum computer to the brain architecture we will argue that the cognitive experience results from a measurement of a quantum memory maintained by biological entities. The insight provided by this mapping suggests quantum effects are not restricted to small atomic and nuclear phenomena but are an integral part of our own cognitive experience and further that the architecture of a quantum computer system parallels that of a conscious brain. We will then review the suggestions for biological quantum elements in basic neural structures and address the de-coherence objection by arguing for a self- measurement event model of Nature. We will argue that to first order approximation the universe is composed of isolated self-measurement events which guaranties coherence. Controlled de-coherence is treated as the input/output interactions between quantum elements of a quantum computer and the quantum memory maintained by biological entities cognizant of the quantum calculation results. Lastly we will present stem-cell based neuron experiments conducted by one of us with the aim of demonstrating the occurrence of quantum effects in living neural networks and discuss future research projects intended to reach this objective.

  8. Finite element concepts in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    Finite element theory was employed to establish an implicit numerical solution algorithm for the time averaged unsteady Navier-Stokes equations. Both the multidimensional and a time-split form of the algorithm were considered, the latter of particular interest for problem specification on a regular mesh. A Newton matrix iteration procedure is outlined for solving the resultant nonlinear algebraic equation systems. Multidimensional discretization procedures are discussed with emphasis on automated generation of specific nonuniform solution grids and accounting of curved surfaces. The time-split algorithm was evaluated with regards to accuracy and convergence properties for hyperbolic equations on rectangular coordinates. An overall assessment of the viability of the finite element concept for computational aerodynamics is made.

  9. Physical aspects of computing the flow of a viscous fluid

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1984-01-01

    One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.

  10. HYDRA, A finite element computational fluid dynamics code: User manual

    SciTech Connect

    Christon, M.A.

    1995-06-01

    HYDRA is a finite element code which has been developed specifically to attack the class of transient, incompressible, viscous, computational fluid dynamics problems which are predominant in the world which surrounds us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method which make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise constituted by DYNA3D and NIKE3D Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to try and achieve a high level of performance, rather than just performing a port.

  11. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding; Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.

  12. On Undecidability Aspects of Resilient Computations and Implications to Exascale

    SciTech Connect

    Rao, Nageswara S

    2014-01-01

    Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.

  13. Computational Aspects of Data Assimilation and the ESMF

    NASA Technical Reports Server (NTRS)

    daSilva, A.

    2003-01-01

    The scientific challenge of developing advanced data assimilation applications is a daunting task. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders.The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical and to some extent the cultural - aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focussing on those capabilities useful for developing advanced data assimilation applications.

  14. Algebraic and Computational Aspects of Network Reliability and Problems.

    DTIC Science & Technology

    1986-07-15

    7 -A175 075 ALGEBRAIC AND COMPUTATIONAL ASPECTS OF KETUORK / IRELIABILITY AND PROBLEMS(U) CLEMSON UNIV SC D SHIER 15 JUL 86 AFOSR-TR-86-2115 AFOSR...MONITORING ORGANIZATION I, afpplhcable) Clemson University AFOSR/NM 6C. ADDRESS (City. State and ZIP Codej 7b. ADDRESS (City. State and ZIP Code) Mlartin...Hall Bldg 410 Clemson , SC 29634-1907 Bolling AFB OC 20332-6448 S& NAME OF FUNOING/SPONSORING Bb. OFFICE SYMBOL 9. PROCUREMENT INSTRUMENT IDENTIFICATION

  15. Mathematical Aspects of Finite Element Methods for Incompressible Viscous Flows.

    DTIC Science & Technology

    1986-09-01

    irteir-rt In element pa ir 1: is je Tnit’ by f i F-t mii iidi rig * % % % % % % - 4* % VV 4 ~ % - ~ * .. . * *. PA - 33- Q into rectangular prisms , or...mtr.o gerier-il Iv, lrit h.-t ,- For the prpsstirp sputi-P w’e choose~ pi.p’ievi so. -- u t subregions. We subdi vIde each rectangular prism into 24 tetr...8217 Unfortunately, these boundary conditions have no PhV.- tico . meaninq. Thus the choice (4.5.1), or equivalently (4.10.1,, can only be used in conjunction

  16. Control aspects of quantum computing using pure and mixed states.

    PubMed

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J

    2012-10-13

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underly the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems.

  17. Computers in the Library: The Human Element.

    ERIC Educational Resources Information Center

    Magrath, Lynn L.

    1982-01-01

    Discusses library staff and public reaction to the computerization of library operations at the Pikes Peak Library District in Colorado Springs. An outline of computer applications implemented since the inception of the program in 1975 is included. (EJS)

  18. Cohesive surface model for fracture based on a two-scale formulation: computational implementation aspects

    NASA Astrophysics Data System (ADS)

    Toro, S.; Sánchez, P. J.; Podestá, J. M.; Blanco, P. J.; Huespe, A. E.; Feijóo, R. A.

    2016-10-01

    The paper describes the computational aspects and numerical implementation of a two-scale cohesive surface methodology developed for analyzing fracture in heterogeneous materials with complex micro-structures. This approach can be categorized as a semi-concurrent model using the representative volume element concept. A variational multi-scale formulation of the methodology has been previously presented by the authors. Subsequently, the formulation has been generalized and improved in two aspects: (i) cohesive surfaces have been introduced at both scales of analysis, they are modeled with a strong discontinuity kinematics (new equations describing the insertion of the macro-scale strains, into the micro-scale and the posterior homogenization procedure have been considered); (ii) the computational procedure and numerical implementation have been adapted for this formulation. The first point has been presented elsewhere, and it is summarized here. Instead, the main objective of this paper is to address a rather detailed presentation of the second point. Finite element techniques for modeling cohesive surfaces at both scales of analysis (FE^2 approach) are described: (i) finite elements with embedded strong discontinuities are used for the macro-scale simulation, and (ii) continuum-type finite elements with high aspect ratios, mimicking cohesive surfaces, are adopted for simulating the failure mechanisms at the micro-scale. The methodology is validated through numerical simulation of a quasi-brittle concrete fracture problem. The proposed multi-scale model is capable of unveiling the mechanisms that lead from the material degradation phenomenon at the meso-structural level to the activation and propagation of cohesive surfaces at the structural scale.

  19. Higher-Order Finite Elements for Computing Thermal Radiation

    NASA Technical Reports Server (NTRS)

    Gould, Dana C.

    2004-01-01

    Two variants of the finite-element method have been developed for use in computational simulations of radiative transfers of heat among diffuse gray surfaces. Both variants involve the use of higher-order finite elements, across which temperatures and radiative quantities are assumed to vary according to certain approximations. In this and other applications, higher-order finite elements are used to increase (relative to classical finite elements, which are assumed to be isothermal) the accuracies of final numerical results without having to refine computational meshes excessively and thereby incur excessive computation times. One of the variants is termed the radiation sub-element (RSE) method, which, itself, is subject to a number of variations. This is the simplest and most straightforward approach to representation of spatially variable surface radiation. Any computer code that, heretofore, could model surface-to-surface radiation can incorporate the RSE method without major modifications. In the basic form of the RSE method, each finite element selected for use in computing radiative heat transfer is considered to be a parent element and is divided into sub-elements for the purpose of solving the surface-to-surface radiation-exchange problem. The sub-elements are then treated as classical finite elements; that is, they are assumed to be isothermal, and their view factors and absorbed heat fluxes are calculated accordingly. The heat fluxes absorbed by the sub-elements are then transferred back to the parent element to obtain a radiative heat flux that varies spatially across the parent element. Variants of the RSE method involve the use of polynomials to interpolate and/or extrapolate to approximate spatial variations of physical quantities. The other variant of the finite-element method is termed the integration method (IM). Unlike in the RSE methods, the parent finite elements are not subdivided into smaller elements, and neither isothermality nor other

  20. Adaptive Finite-Element Computation In Fracture Mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1995-01-01

    Report discusses recent progress in use of solution-adaptive finite-element computational methods to solve two-dimensional problems in linear elastic fracture mechanics. Method also shown extensible to three-dimensional problems.

  1. Computational aspects of steel fracturing pertinent to naval requirements.

    PubMed

    Matic, Peter; Geltmacher, Andrew; Rath, Bhakta

    2015-03-28

    Modern high strength and ductile steels are a key element of US Navy ship structural technology. The development of these alloys spurred the development of modern structural integrity analysis methods over the past 70 years. Strength and ductility provided the designers and builders of navy surface ships and submarines with the opportunity to reduce ship structural weight, increase hull stiffness, increase damage resistance, improve construction practices and reduce maintenance costs. This paper reviews how analytical and computational tools, driving simulation methods and experimental techniques, were developed to provide ongoing insights into the material, damage and fracture characteristics of these alloys. The need to understand alloy fracture mechanics provided unique motivations to measure and model performance from structural to microstructural scales. This was done while accounting for the highly nonlinear behaviours of both materials and underlying fracture processes. Theoretical methods, data acquisition strategies, computational simulation and scientific imaging were applied to increasingly smaller scales and complex materials phenomena under deformation. Knowledge gained about fracture resistance was used to meet minimum fracture initiation, crack growth and crack arrest characteristics as part of overall structural integrity considerations.

  2. Optically intraconnected computer employing dynamically reconfigurable holographic optical element

    NASA Technical Reports Server (NTRS)

    Bergman, Larry A. (Inventor)

    1992-01-01

    An optically intraconnected computer and a reconfigurable holographic optical element employed therein. The basic computer comprises a memory for holding a sequence of instructions to be executed; logic for accessing the instructions in sequence; logic for determining for each the instruction the function to be performed and the effective address thereof; a plurality of individual elements on a common support substrate optimized to perform certain logical sequences employed in executing the instructions; and, element selection logic connected to the logic determining the function to be performed for each the instruction for determining the class of each function and for causing the instruction to be executed by those the elements which perform those associated the logical sequences affecting the instruction execution in an optimum manner. In the optically intraconnected version, the element selection logic is adapted for transmitting and switching signals to the elements optically.

  3. A computer graphics program for general finite element analyses

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Sawyer, L. M.

    1978-01-01

    Documentation for a computer graphics program for displays from general finite element analyses is presented. A general description of display options and detailed user instructions are given. Several plots made in structural, thermal and fluid finite element analyses are included to illustrate program options. Sample data files are given to illustrate use of the program.

  4. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  5. The Impact of Instructional Elements in Computer-Based Instruction

    ERIC Educational Resources Information Center

    Martin, Florence; Klein, James D.; Sullivan, Howard

    2007-01-01

    This study investigated the effects of several elements of instruction (objectives, information, practice, examples and review) when they were combined in a systematic manner. College students enrolled in a computer literacy course used one of six different versions of a computer-based lesson delivered on the web to learn about input, processing,…

  6. Acceleration of matrix element computations for precision measurements

    SciTech Connect

    Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; Ye, Zhenyu

    2014-11-25

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.

  7. Introducing the Practical Aspects of Computational Chemistry to Undergraduate Chemistry Students

    ERIC Educational Resources Information Center

    Pearson, Jason K.

    2007-01-01

    Various efforts are being made to introduce the different physical aspects and uses of computational chemistry to the undergraduate chemistry students. A new laboratory approach that demonstrates all such aspects via experiments has been devised for the purpose.

  8. Experiments and simulation models of a basic computation element of an autonomous molecular computing system.

    PubMed

    Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira

    2008-10-01

    Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We have already proposed an idea of an autonomous molecular computer with high computational ability, which is now named Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we first report an experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for RTRACS. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behaviors were analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance the research on autonomous DNA computers.

  9. An emulator for minimizing computer resources for finite element analysis

    NASA Technical Reports Server (NTRS)

    Melosh, R.; Utku, S.; Islam, M.; Salama, M.

    1984-01-01

    A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).

  10. Finite element dynamic analysis on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lambiotte, J. J., Jr.

    1978-01-01

    Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.

  11. Computational identification of transcriptional regulatory elements in DNA sequence

    PubMed Central

    GuhaThakurta, Debraj

    2006-01-01

    Identification and annotation of all the functional elements in the genome, including genes and the regulatory sequences, is a fundamental challenge in genomics and computational biology. Since regulatory elements are frequently short and variable, their identification and discovery using computational algorithms is difficult. However, significant advances have been made in the computational methods for modeling and detection of DNA regulatory elements. The availability of complete genome sequence from multiple organisms, as well as mRNA profiling and high-throughput experimental methods for mapping protein-binding sites in DNA, have contributed to the development of methods that utilize these auxiliary data to inform the detection of transcriptional regulatory elements. Progress is also being made in the identification of cis-regulatory modules and higher order structures of the regulatory sequences, which is essential to the understanding of transcription regulation in the metazoan genomes. This article reviews the computational approaches for modeling and identification of genomic regulatory elements, with an emphasis on the recent developments, and current challenges. PMID:16855295

  12. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemental in a three-dimensional finite element code, TEXLESP-S, which is documented herein.

  13. Use of boundary element methods in field emission computations

    SciTech Connect

    Hartman, R.L.; Mackie, W.A.; Davis, P.R.

    1994-03-01

    The boundary element method is well suited to deal with some potential field problems encountered in the context of field emission. A boundary element method is presented in the specific case of three-dimensional problems with azimuthal symmetry. As a check, computed results are displayed for well-known theoretical examples. The code is then employed to calculate current from a field emission tip and from the same tip with a protrusion. Finally an extension of the boundary element code is employed to calculate space-charge effects on emitted current. 13 refs., 5 figs., 1 tab.

  14. Computer programs for the Boltzmann collision matrix elements

    NASA Astrophysics Data System (ADS)

    Das, P.

    1989-09-01

    When the distribution function in the kinetic theory of gases is expanded in a basis of orthogonal functions, the Boltzmann collision operators can be evaluated in terms of appropriate matrix elements. These matrix elements are usually given in terms of highly complex algebraic expressions. When Burnett functions, which consist of Sonine polynomials and spherical harmonics, are used as the basis, the irreducible tensor formalism provides expressions for the matrix elements that are algebraically simple, possess high symmetry, and are computationally more economical than in any other basis. The package reported here consists of routines to compute such matrix elements in a Burnett function basis for a mixture of hard sphere gases, as also the loss integral of a Burnett mode and the functions themselves. The matrix elements involve the Clebsch-Gordan and Brody-Moshinsky coefficients, both of which are used here for unusually high values of their arguments. For the purpose of validation both coefficients are computed using two different methods. Though written for hard sphere molecules the package can, with only slight modification, be adapted to more general molecular models as well.

  15. On the effects of grid ill-conditioning in three dimensional finite element vector potential magnetostatic field computations

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1990-01-01

    The effects of finite element grid geometries and associated ill-conditioning were studied in single medium and multi-media (air-iron) three dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single medium applications the unconstrained magnetic vector potential curl-curl formulation in conjunction with first order finite elements produces global results which are almost totally insensitive to grid geometries. However, it was found that in multi-media (air-iron) applications first order finite element results are sensitive to grid geometries and consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by means of the use of second order finite elements in the field computation algorithms. Practical examples are given in this paper to demonstrate the aspects mentioned above.

  16. Modeling of rolling element bearing mechanics. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Greenhill, Lyn M.; Merchant, David H.

    1994-01-01

    This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It comprises two main programs: the Preprocessor for Bearing Analysis (PREBAN) which creates the input files for the main analysis program, and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. This report addresses input instructions for and features of the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  17. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.

  18. A bibliography on finite element and related methods analysis in reactor physics computations (1971--1997)

    SciTech Connect

    Carpenter, D.C.

    1998-01-01

    This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, thesis/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, thesis/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

  19. A computational study of nodal-based tetrahedral element behavior.

    SciTech Connect

    Gullerud, Arne S.

    2010-09-01

    This report explores the behavior of nodal-based tetrahedral elements on six sample problems, and compares their solution to that of a corresponding hexahedral mesh. The problems demonstrate that while certain aspects of the solution field for the nodal-based tetrahedrons provide good quality results, the pressure field tends to be of poor quality. Results appear to be strongly affected by the connectivity of the tetrahedral elements. Simulations that rely on the pressure field, such as those which use material models that are dependent on the pressure (e.g. equation-of-state models), can generate erroneous results. Remeshing can also be strongly affected by these issues. The nodal-based test elements as they currently stand need to be used with caution to ensure that their numerical deficiencies do not adversely affect critical values of interest.

  20. Massively parallel finite element computation of three dimensional flow problems

    NASA Astrophysics Data System (ADS)

    Tezduyar, T.; Aliabadi, S.; Behr, M.; Johnson, A.; Mittal, S.

    1992-12-01

    The parallel finite element computation of three-dimensional compressible and incompressible flows, with emphasis on the space-time formulations, mesh moving schemes and implementations on the Connection Machines CM-200 and CM-5, is presented. For computation of unsteady compressible and incompressible flows involving moving boundaries and interfaces, the Deformable-Spatial-Domain/Stabilized-Space-Time (DSD/SST) formulation developed previously is employed. In this approach, the stabilized finite element formulations of the governing equations are written over the space-time domain of the problem; therefore, the deformation of the spatial domain with respect to time is taken into account automatically. This approach gives the capability to solve a large class of problems involving free surfaces, moving interfaces, and fluid-structure and fluid-particle interactions. By using special mesh moving schemes, the frequency of remeshing is minimized to reduce the projection errors involved in remeshing and also to increase the parallelization ease of the computations. The implicit equation systems arising from the finite element discretizations are solved iteratively by using the GMRES update technique with the diagonal and nodal-block-diagonal preconditioners. These formulations have all been implemented on the CM-200 and CM-5, and have been applied to several large-scale problems. The three-dimensional problems in this report were all computed on the CM-200 and CM-5.
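
    The sketch below is a hedged illustration of one ingredient of such a solver, not the CM-200/CM-5 implementation: an iterative GMRES solve of a sparse nonsymmetric system with a simple diagonal (Jacobi) preconditioner, using scipy; the matrix and its size are arbitrary stand-ins.

```python
# Hedged sketch: GMRES with a diagonal (Jacobi) preconditioner on an arbitrary
# diagonally dominant sparse test matrix (stand-in for an implicit FE system).
import numpy as np
from scipy.sparse import diags, random as sprandom
from scipy.sparse.linalg import gmres, LinearOperator

n = 500
A = (sprandom(n, n, density=0.01, random_state=0) + diags(np.full(n, 5.0))).tocsr()
b = np.ones(n)

# Preconditioner: apply the inverse of the diagonal of A.
M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())

x, info = gmres(A, b, M=M)
print("converged flag:", info, " residual norm:", np.linalg.norm(A @ x - b))
```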

  1. Computational design aspects of a NASP nozzle/afterbody experiment

    NASA Technical Reports Server (NTRS)

    Ruffin, Stephen M.; Venkatapathy, Ethiraj; Keener, Earl R.; Nagaraj, N.

    1989-01-01

    This paper highlights the influence of computational methods on design of a wind tunnel experiment which generically models the nozzle/afterbody flow field of the proposed National Aerospace Plane. The rectangular slot nozzle plume flow field is computed using a three-dimensional, upwind, implicit Navier-Stokes solver. Freestream Mach numbers of 5.3, 7.3, and 10 are investigated. Two-dimensional parametric studies of various Mach numbers, pressure ratios, and ramp angles are used to help determine model loads and afterbody ramp angle and length. It was found that the center of pressure on the ramp occurs at nearly the same location for all ramp angles and test conditions computed. Also, to prevent air liquefaction, it is suggested that a helium-air mixture be used as the jet gas for the highest Mach number test case.

  2. A stochastic method for computing hadronic matrix elements

    DOE PAGES

    Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...

    2014-01-24

    In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.

  3. Transient Finite Element Computations on a Variable Transputer System

    NASA Technical Reports Server (NTRS)

    Smolinski, Patrick J.; Lapczyk, Ireneusz

    1993-01-01

    A parallel program to analyze transient finite element problems was written and implemented on a system of transputer processors. The program uses the explicit time integration algorithm which eliminates the need for equation solving, making it more suitable for parallel computations. An interprocessor communication scheme was developed for arbitrary two dimensional grid processor configurations. Several 3-D problems were analyzed on a system with a small number of processors.

  4. The spectral-element method, Beowulf computing, and global seismology.

    PubMed

    Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen

    2002-11-29

    The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content.

  5. Photodeposited diffractive optical elements of computer generated masks

    NASA Astrophysics Data System (ADS)

    Mirchin, N.; Peled, A.; Baal-Zedaka, I.; Margolin, R.; Zagon, M.; Lapsker, I.; Verdyan, A.; Azoulay, J.

    2005-07-01

    Diffractive optical elements (DOE) were synthesized on plastic substrates using the photodeposition (PD) technique by depositing amorphous selenium (a-Se) films with argon lasers and UV light. The thin films were deposited typically onto polymethylmethacrylate (PMMA) substrates at room temperature. Scanned beam and contact mask modes were employed using computer-designed DOE lenses. Optical and electron micrographs characterize the surface details. The films were typically 200 nm thick.

  6. Single Photon Holographic Qudit Elements for Linear Optical Quantum Computing

    DTIC Science & Technology

    2011-05-01

    in optical volume holography and designed and simulated practical single-photon, single-optical elements for qudit MUB-state quantum information...Independent of the representation we use, the MUB states will ordinarily be modulated in both amplitude and phase. Recently a practical method has been...quantum computing with qudits (d ≥ 3) has been an efficient and practical quantum state sorter for photons whose complex fields are modulated in both

  7. Technical Aspects of Computer-Assisted Instruction in Chinese.

    ERIC Educational Resources Information Center

    Cheng, Chin-Chaun; Sherwood, Bruce

    1981-01-01

    Computer assisted instruction in Chinese is considered in relation to the design and recognition of Chinese characters, speech synthesis of the standard Chinese language, and the identification of Chinese tone. The PLATO work has shifted its orientation from provision of supplementary courseware to implementation of independent lessons and…

  8. Computational Aspects of Realization & Design Algorithms in Linear Systems Theory.

    NASA Astrophysics Data System (ADS)

    Tsui, Chia-Chi

    Realization and design problems are two major problems in linear time-invariant systems control theory and have been solved theoretically. However, little is understood about their numerical properties. Due to the large scale of the problem and the finite precision of computer computation, it is very important to investigate the computational reliability and efficiency of the algorithms for these two problems, and that is the purpose of this study. In this dissertation, a reliable algorithm to achieve canonical form realization via the Hankel matrix is developed. A comparative study of three general realization algorithms, for both numerical reliability and efficiency, shows that the proposed algorithm (via the Hankel matrix) is the preferable one among the three. The design problems, such as the state feedback design for pole placement, the state observer design, and the low order single and multi-functional observer design, have been solved by using canonical form systems matrices. In this dissertation, a set of algorithms for solving these three design problems is developed and analysed. These algorithms are based on Hessenberg form systems matrices, which are numerically more reliable to compute than the canonical form systems matrices.
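
    The dissertation's specific canonical-form algorithm is not reproduced in this record; as a hedged sketch of the Hankel-matrix route to realization, the snippet below applies the classical Ho-Kalman/SVD construction to Markov parameters of a small, arbitrary test system and checks that the recovered state matrix has the same eigenvalues.

```python
# Hedged sketch: minimal state-space realization from Markov parameters via a
# Hankel matrix (classical Ho-Kalman/SVD route), illustrating the idea only.
import numpy as np

def ho_kalman(markov, n, rows, cols):
    """markov[k] = C A^k B (scalars, SISO sketch); return a realization (A, B, C)."""
    H = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    Hs = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H)
    Sn = np.diag(np.sqrt(s[:n]))
    O, R = U[:, :n] @ Sn, Sn @ Vt[:n, :]        # observability / controllability factors
    A = np.linalg.pinv(O) @ Hs @ np.linalg.pinv(R)
    B, C = R[:, :1], O[:1, :]
    return A, B, C

# Usage: Markov parameters of a known 2-state system, then recover a realization.
A0 = np.array([[0.9, 0.2], [0.0, 0.5]])
B0 = np.array([[1.0], [1.0]])
C0 = np.array([[1.0, 0.0]])
markov = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(10)]
A, B, C = ho_kalman(markov, n=2, rows=4, cols=4)
print(np.allclose(np.sort(np.linalg.eigvals(A).real), np.sort(np.linalg.eigvals(A0).real)))
```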

  9. Implicit extrapolation methods for multilevel finite element computations

    SciTech Connect

    Jung, M.; Ruede, U.

    1994-12-31

    The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid {tau}-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that the algorithm differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore requires neither uniform meshes nor global regularity assumptions. In the paper the authors will analyse the {tau}-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the {tau}-extrapolation results will be compared to higher order finite element solutions.

  10. Theoretical aspects of light-element alloys under extremely high pressure

    NASA Astrophysics Data System (ADS)

    Feng, Ji

    In this Dissertation, we present theoretical studies on the geometric and electronic structure of light-element alloys under high pressure. The first three Chapters are concerned with specific compounds, namely, SiH 4, CaLi2 and BexLi1- x, and associated structural and electronic phenomena, arising in our computational studies. In the fourth Chapter, we attempt to develop a unified view of the relationship between the electronic and geometric structure of light-element alloys under pressure, by focusing on the states near the Fermi level in these metals.

  11. Some Aspects of the Symbolic Manipulation of Computer Descriptions

    DTIC Science & Technology

    1974-07-01

    Given a desired machine described in terms of some specification language, and given a space of machines defined by a class of Register Transfer...ISP, design it in terms of Foster Transfer level modules. Formally they may seem identical, but the design spaces look quite different. ...spacing is needed by the design automation system to produce a wiring list. Hence there is information contained in the computer description that is

  12. Acceleration of matrix element computations for precision measurements

    DOE PAGES

    Brandt, Oleg; Gutierrez, Gaston; Wang, M. H.L.S.; ...

    2014-11-25

    The matrix element technique provides a superior statistical sensitivity for precision measurements of important parameters at hadron colliders, such as the mass of the top quark or the cross-section for the production of Higgs bosons. The main practical limitation of the technique is its high computational demand. Using the example of the top quark mass, we present two approaches to reduce the computation time of the technique by a factor of 90. First, we utilize low-discrepancy sequences for numerical Monte Carlo integration in conjunction with a dedicated estimator of numerical uncertainty, a novelty in the context of the matrix element technique. We then utilize a new approach that factorizes the overall jet energy scale from the matrix element computation, a novelty in the context of top quark mass measurements. The utilization of low-discrepancy sequences is of particular general interest, as it is universally applicable to Monte Carlo integration, and independent of the computing environment.
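
    The snippet below is a hedged illustration of the general low-discrepancy idea, not the D0 analysis code: it compares quasi-Monte Carlo integration with a scrambled Sobol' sequence against plain pseudorandom sampling on an arbitrary smooth test integrand over the unit cube.

```python
# Hedged sketch: quasi-Monte Carlo (Sobol') versus pseudorandom Monte Carlo
# integration of a smooth test function whose exact integral is 1.0.
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # Per dimension, the integral of 1.5*sqrt(t) over [0, 1] is 1, so the product is 1.
    return np.prod(1.5 * np.sqrt(x), axis=1)

dim, n = 6, 2**14
rng = np.random.default_rng(0)
pseudo = integrand(rng.random((n, dim))).mean()
sobol = integrand(qmc.Sobol(d=dim, scramble=True, seed=0).random(n)).mean()
print(f"pseudorandom error: {abs(pseudo - 1.0):.2e}")
print(f"Sobol error       : {abs(sobol - 1.0):.2e}")
```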

  13. Boundary element analysis on vector and parallel computers

    NASA Technical Reports Server (NTRS)

    Kane, J. H.

    1994-01-01

    Boundary element analysis (BEA) can be characterized as a numerical technique that generally shifts the computational burden in the analysis toward numerical integration and the solution of nonsymmetric and either dense or blocked sparse systems of algebraic equations. Researchers have explored the concept that the fundamental characteristics of BEA can be exploited to generate effective implementations on vector and parallel computers. In this paper, the results of some of these investigations are discussed. The performance of overall algorithms for BEA on vector supercomputers, massively data parallel single instruction multiple data (SIMD), and relatively fine grained distributed memory multiple instruction multiple data (MIMD) computer systems is described. Some general trends and conclusions are discussed, along with indications of future developments that may prove fruitful in this regard.

  14. Compute Element and Interface Box for the Hazard Detection System

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Khanoyan, Garen; Stern, Ryan A.; Some, Raphael R.; Bailey, Erik S.; Carson, John M.; Vaughan, Geoffrey M.; Werner, Robert A.; Salomon, Phil M.; Martin, Keith E.; Spaulding, Matthew D.; Luna, Michael E.; Motaghedi, Shui H.; Trawny, Nikolas; Johnson, Andrew E.; Ivanov, Tonislav I.; Huertas, Andres; Whitaker, William D.; Goldberg, Steven B.

    2013-01-01

    The Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is building a sensor that enables a spacecraft to evaluate autonomously a potential landing area to generate a list of hazardous and safe landing sites. It will also provide navigation inputs relative to those safe sites. The Hazard Detection System Compute Element (HDS-CE) box combines a field-programmable gate array (FPGA) board for sensor integration and timing, with a multicore computer board for processing. The FPGA does system-level timing and data aggregation, and acts as a go-between, removing the real-time requirements from the processor and labeling events with a high resolution time. The processor manages the behavior of the system, controls the instruments connected to the HDS-CE, and services the "heavy lifting" computational requirements for analyzing the potential landing spots.

  15. ASPECT

    EPA Pesticide Factsheets

    Able to deploy within one hour of notification, EPA's Airborne Spectral Photometric Environmental Collection Technology (ASPECT) is the nation’s only airborne real-time chemical and radiological detection, infrared and photographic imagery platform.

  16. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
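
    The paper's five-span-beam model is not reproduced here; as a minimal hedged sketch of the first two techniques, the snippet below computes the sensitivity of a peak transient response to a stiffness parameter by forward and central differences on a hypothetical single-degree-of-freedom oscillator integrated explicitly.

```python
# Hedged sketch: forward- and central-difference sensitivities of a peak transient
# response with respect to a design parameter (illustrative SDOF model only).
import numpy as np

def peak_response(k, m=1.0, c=0.05, f0=1.0, t_end=20.0, dt=1e-3):
    """Peak displacement of m*u'' + c*u' + k*u = f0 (step load), explicit integration."""
    u, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (f0 - c * v - k * u) / m      # acceleration from the equation of motion
        v += a * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak

k0, h = 4.0, 1e-4                          # design parameter and finite-difference step
r0 = peak_response(k0)
forward = (peak_response(k0 + h) - r0) / h
central = (peak_response(k0 + h) - peak_response(k0 - h)) / (2 * h)
print(f"dR/dk  forward: {forward:.6f}   central: {central:.6f}")
```

    The step size h controls the trade-off the paper examines: too large and truncation error dominates, too small and condition (round-off) error dominates.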

  17. A Finite Element Method for Computation of Structural Intensity by the Normal Mode Approach

    NASA Astrophysics Data System (ADS)

    Gavrić, L.; Pavić, G.

    1993-06-01

    A method for numerical computation of structural intensity in thin-walled structures is presented. The method is based on structural finite elements (beam, plate and shell type) enabling computation of real eigenvalues and eigenvectors of the undamped structure which then serve in evaluation of complex response. The distributed structural damping is taken into account by using the modal damping concept, while any localized damping is treated as an external loading, determined by use of impedance matching conditions and eigenproperties of the structure. Emphasis is given to aspects of accuracy of the results and efficiency of the numerical procedures used. High requirements on accuracy of the structural response (displacements and stresses) needed in intensity applications are satisfied by employing the "swept static solution", which effectively takes into account the influence of higher modes otherwise inaccessible to numerical computation. A comparison is made between the results obtained by using analytical methods and the proposed numerical procedure to demonstrate the validity of the method presented.

  18. Computational and theoretical aspects of biomolecular structure and dynamics

    SciTech Connect

    Garcia, A.E.; Berendzen, J.; Catasti, P.; Chen, X.

    1996-09-01

    This is the final report for a project that sought to evaluate and develop theoretical, and computational bases for designing, performing, and analyzing experimental studies in structural biology. Simulations of large biomolecular systems in solution, hydrophobic interactions, and quantum chemical calculations for large systems have been performed. We have developed a code that implements the Fast Multipole Algorithm (FMA) that scales linearly in the number of particles simulated in a large system. New methods have been developed for the analysis of multidimensional NMR data in order to obtain high resolution atomic structures. These methods have been applied to the study of DNA sequences in the human centromere, sequences linked to genetic diseases, and the dynamics and structure of myoglobin.

  19. Computational aspects of speed-dependent Voigt profiles

    NASA Astrophysics Data System (ADS)

    Schreier, Franz

    2017-01-01

    The increasing quality of atmospheric spectroscopy observations has indicated the limitations of the Voigt profile routinely used for line-by-line modeling, and physical processes beyond pressure and Doppler broadening have to be considered. The speed-dependent Voigt (SDV) profile can be readily computed as the difference of the real part of two complex error functions (i.e. Voigt functions). Using a highly accurate code as a reference, various implementations of the SDV function based on Humlíček's rational approximations are examined for typical speed dependences of pressure broadening and the range of wavenumber distances and Lorentz to Doppler width ratios encountered in infrared applications. Neither of these implementations appears to be optimal, and a new algorithm based on a combination of the Humlíček (1982) and Weideman (1994) rational approximations is suggested.
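
    The snippet below is a hedged sketch, not Schreier's reference code: it evaluates an unnormalized quadratic speed-dependent Voigt profile as the difference of two Faddeeva (complex probability) functions via scipy.special.wofz, assuming the common quadratic parameterization of the pressure broadening and neglecting any speed-dependent shift; the argument construction and the omitted normalization should be checked against the paper before use.

```python
# Hedged sketch: unnormalized quadratic speed-dependent Voigt profile as the
# difference of two complex probability (Faddeeva) functions, assuming
# gamma(v) = gamma0 + gamma2*(v^2/vp^2 - 3/2); normalization and shift omitted.
import numpy as np
from scipy.special import wofz

def sdv_unnormalized(nu, nu0, gamma_d, gamma0, gamma2):
    """nu: wavenumber grid; gamma_d: 1/e Doppler half-width nu0*vp/c."""
    c0 = gamma0 - 1.5 * gamma2                    # speed-averaged Lorentz width
    x = (1j * (nu0 - nu) + c0) / gamma2
    y = (gamma_d / (2.0 * gamma2)) ** 2
    z_minus = np.sqrt(x + y) - np.sqrt(y)
    z_plus = np.sqrt(x + y) + np.sqrt(y)
    return np.real(wofz(1j * z_minus) - wofz(1j * z_plus))

nu = np.linspace(-5.0, 5.0, 11)
print(sdv_unnormalized(nu, nu0=0.0, gamma_d=1.0, gamma0=0.5, gamma2=0.05))
```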

  20. Behavioral and computational aspects of language and its acquisition

    NASA Astrophysics Data System (ADS)

    Edelman, Shimon; Waterfall, Heidi

    2007-12-01

    One of the greatest challenges facing the cognitive sciences is to explain what it means to know a language, and how the knowledge of language is acquired. The dominant approach to this challenge within linguistics has been to seek an efficient characterization of the wealth of documented structural properties of language in terms of a compact generative grammar-ideally, the minimal necessary set of innate, universal, exception-less, highly abstract rules that jointly generate all and only the observed phenomena and are common to all human languages. We review developmental, behavioral, and computational evidence that seems to favor an alternative view of language, according to which linguistic structures are generated by a large, open set of constructions of varying degrees of abstraction and complexity, which embody both form and meaning and are acquired through socially situated experience in a given language community, by probabilistic learning algorithms that resemble those at work in other cognitive modalities.

  1. Computation of molecular electrostatics with boundary element methods.

    PubMed Central

    Liang, J; Subramaniam, S

    1997-01-01

    In continuum approaches to molecular electrostatics, the boundary element method (BEM) can provide accurate solutions to the Poisson-Boltzmann equation. However, the numerical aspects of this method pose significant problems. We describe our approach, applying an alpha shape-based method to generate a high-quality mesh, which represents the shape and topology of the molecule precisely. We also describe an analytical method for mapping points from the planar mesh to their exact locations on the surface of the molecule. We demonstrate that the derivative boundary integral formulation has numerical advantages over the nonderivative formulation: the well-conditioned influence matrix can be maintained without deterioration of the condition number when the number of the mesh elements scales up. Singular integrand kernels are characteristic of the BEM. Their accurate integration is an important issue. We describe variable transformations that allow accurate numerical integration. The latter is the only plausible integral evaluation method when using curve-shaped boundary elements. PMID:9336178

  2. Human-Computer Interaction: A Review of the Research on Its Affective and Social Aspects.

    ERIC Educational Resources Information Center

    Deaudelin, Colette; Dussault, Marc; Brodeur, Monique

    2003-01-01

    Discusses a review of 34 qualitative and non-qualitative studies related to affective and social aspects of student-computer interactions. Highlights include the nature of the human-computer interaction (HCI); the interface, comparing graphic and text types; and the relation between variables linked to HCI, mainly trust, locus of control,…

  3. Incorporating Knowledge of Legal and Ethical Aspects into Computing Curricula of South African Universities

    ERIC Educational Resources Information Center

    Wayman, Ian; Kyobe, Michael

    2012-01-01

    As students in computing disciplines are introduced to modern information technologies, numerous unethical practices also escalate. With the increase in stringent legislation on the use of IT, users of technology could easily be held liable for violation of this legislation. There is, however, a lack of understanding of social aspects of computing, and…

  4. Massively parallel computation of RCS with finite elements

    NASA Technical Reports Server (NTRS)

    Parker, Jay

    1993-01-01

    One of the promising combinations of finite element approaches for scattering problems uses Whitney edge elements, spherical vector wave-absorbing boundary conditions, and bi-conjugate gradient solution for the frequency-domain near field. Each of these approaches may be criticized. Low-order elements require high mesh density, but also result in fast, reliable iterative convergence. Spherical wave-absorbing boundary conditions require additional space to be meshed beyond the most minimal near-space region, but result in fully sparse, symmetric matrices which keep storage and solution times low. Iterative solution is somewhat unpredictable and unfriendly to multiple right-hand sides, yet we find it to be uniformly fast on large problems to date, given the other two approaches. Implementation of these approaches on a distributed memory, message passing machine yields huge dividends, as full scalability to the largest machines appears assured and iterative solution times are well-behaved for large problems. We present times and solutions for computed RCS for a conducting cube and composite permeability/conducting sphere on the Intel ipsc860 with up to 16 processors solving over 200,000 unknowns. We estimate problems of approximately 10 million unknowns, encompassing 1000 cubic wavelengths, may be attempted on a currently available 512 processor machine, but would be exceedingly tedious to prepare. The most severe bottlenecks are due to the slow rate of mesh generation on non-parallel machines and the large transfer time from such a machine to the parallel processor. One solution, in progress, is to create and then distribute a coarse mesh among the processors, followed by systematic refinement within each processor. Elimination of redundant node definitions at the mesh-partition surfaces, snap-to-surface post processing of the resulting mesh for good modelling of curved surfaces, and load-balancing redistribution of new elements after the refinement are auxiliary

  5. FLASH: A finite element computer code for variably saturated flow

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.

  6. Computational study of protein secondary structure elements: Ramachandran plots revisited.

    PubMed

    Carrascoza, Francisco; Zaric, Snezana; Silaghi-Dumitrescu, Radu

    2014-05-01

    Potential energy surfaces (PES) were built for nineteen amino acids using density functional theory (PW91 and DFT M062X/6-311**). Examining the energy as a function of the φ/ψ dihedral angles in the allowed regions of the Ramachandran plot, amino acid groups that share common patterns on their PES plots and global minima were identified. These patterns show partial correlation with their structural and pharmacophoric features. Differences between these computational results and the experimentally noted permitted conformations of each amino acid are rationalized on the basis of attractive intra- and inter-molecular non-covalent interactions. The present data are focused on the intrinsic properties of an amino acid, an element which to our knowledge is typically ignored, as larger models are always used for the sake of similarity to real biological polypeptides.

  7. Impact of computer advances on future finite elements computations. [for aircraft and spacecraft design

    NASA Technical Reports Server (NTRS)

    Fulton, Robert E.

    1985-01-01

    Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and parallel processing associated with the Finite Element Machine project, both sponsored by NASA, and a near term strategy for a distributed structural analysis capability based on relational data base management software and parallel computers for a future structural analysis system.

  8. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. Paper evaluates the computational efficiency of different computer architectural types in terms of relative cost and computing time.

  9. A computer program for calculating aerodynamic characteristics of low aspect-ratio wings with partial leading-edge separation

    NASA Technical Reports Server (NTRS)

    Mehrotra, S. C.; Lan, C. E.

    1978-01-01

    The necessary information for using a computer program to predict distributed and total aerodynamic characteristics for low aspect ratio wings with partial leading-edge separation is presented. The flow is assumed to be steady and inviscid. The wing boundary condition is formulated by the Quasi-Vortex-Lattice method. The leading edge separated vortices are represented by discrete free vortex elements which are aligned with the local velocity vector at midpoints to satisfy the force free condition. The wake behind the trailing edge is also force free. The flow tangency boundary condition is satisfied on the wing, including the leading and trailing edges. The program is restricted to delta wings with zero thickness and no camber. It is written in FORTRAN and runs on the CDC 6600 computer.

  10. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element level computations that do not lend themselves easily to long vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that in order for pipeline computers to impact the economic feasibility of large nonlinear analyses it is absolutely essential that algorithms be devised to improve the efficiency of element level computations.

  11. A finite element method for the computation of transonic flow past airfoils

    NASA Technical Reports Server (NTRS)

    Eberle, A.

    1980-01-01

    A finite element method for the computation of the transonic flow with shocks past airfoils is presented using the artificial viscosity concept for the local supersonic regime. Generally, the classic element types do not meet the accuracy requirements of advanced numerical aerodynamics requiring special attention to the choice of an appropriate element. A series of computed pressure distributions exhibits the usefulness of the method.

  12. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the sizes of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincide with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonic problems are studied. They are respectively the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which

  13. Some Computational Aspects of the Brain Computer Interfaces Based on Inner Music

    PubMed Central

    Klonowski, Wlodzimierz; Duch, Wlodzisław; Perovic, Aleksandar; Jovanovic, Aleksandar

    2009-01-01

    We discuss the BCI based on inner tones and inner music. We had some success in the detection of inner tones, the imagined tones which are not sung aloud. Rather easily imagined and controlled, they offer a set of states usable for BCI, with high information capacity and high transfer rates. Imagination of sounds or musical tunes could provide a multicommand language for BCI, as if using the natural language. Moreover, this approach could be used to test musical abilities. Such BCI interface could be superior when there is a need for a broader command language. Some computational estimates and unresolved difficulties are presented. PMID:19503802

  14. C-arm cone-beam computed tomography in interventional oncology: technical aspects and clinical applications

    PubMed Central

    Floridi, Chiara; Radaelli, Alessandro; Abi-Jaoudeh, Nadine; Grass, Micheal; Lin, Ming De; Chiaradia, Melanie; Geschwind, Jean-Francois; Kobeiter, Hishman; Squillaci, Ettore; Maleux, Geert; Giovagnoni, Andrea; Brunese, Luca; Wood, Bradford; Carrafiello, Gianpaolo; Rotondo, Antonio

    2014-01-01

    C-arm cone-beam computed tomography (CBCT) is a new imaging technology integrated in modern angiographic systems. Due to its ability to obtain cross-sectional imaging and the possibility to use dedicated planning and navigation software, it provides an informed platform for interventional oncology procedures. In this paper, we highlight the technical aspects and clinical applications of CBCT imaging and navigation in the most common loco-regional oncological treatments. PMID:25012472

  15. Grouped element-by-element iteration schemes for incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Tezduyar, T. E.; Liou, J.

    1989-05-01

    Grouped element-by-element (GEBE) iteration schemes for incompressible flows are presented in the context of the vorticity-stream function formulation. The GEBE procedure is a variation of the EBE procedure and is based on arrangement of the elements into groups with no inter-element coupling within each group. With the GEBE approach, vectorization and parallel implementation of the EBE method become clearer. The savings in storage and CPU time are demonstrated with two unsteady flow problems.
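
    The snippet below is a hedged sketch of the grouping step only (not the authors' flow solver): a greedy "coloring" of elements so that no two elements in the same group share a node, which is what removes inter-element coupling within a group and lets the element-level updates in each group proceed concurrently.

```python
# Hedged sketch: greedy grouping of elements so that elements within a group
# share no node (no inter-element coupling), enabling vector/parallel updates.
def group_elements(elements):
    """elements: list of node-index tuples (element connectivity)."""
    groups = []                                   # each group: (element ids, nodes used)
    for eid, nodes in enumerate(elements):
        for members, used in groups:
            if not used.intersection(nodes):      # no shared node -> no coupling
                members.append(eid)
                used.update(nodes)
                break
        else:
            groups.append(([eid], set(nodes)))    # start a new group
    return [members for members, _ in groups]

# Usage: a 1D mesh of 6 two-node elements; neighbors share a node, so they
# land in alternating groups.
mesh = [(i, i + 1) for i in range(6)]
print(group_elements(mesh))                       # e.g. [[0, 2, 4], [1, 3, 5]]
```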

  16. A Limited Survey of General Purpose Finite Element Computer Programs

    NASA Technical Reports Server (NTRS)

    Glaser, J. C.

    1972-01-01

    Ten representative programs are compared. A listing of additional programs encountered during the course of this effort is also included. Tables are presented to show the structural analysis, material, load, and modeling element capability for the ten selected programs.

  17. 01010000 01001100 01000001 01011001: Play Elements in Computer Programming

    ERIC Educational Resources Information Center

    Breslin, Samantha

    2013-01-01

    This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…

  18. Some aspects of statistical distribution of trace element concentrations in biomedical samples

    NASA Astrophysics Data System (ADS)

    Majewska, U.; Braziewicz, J.; Banaś, D.; Kubala-Kukuś, A.; Góźdź, S.; Pajek, M.; Zadrożna, M.; Jaskóła, M.; Czyżewski, T.

    1999-04-01

    Concentrations of trace elements in biomedical samples were studied using X-ray fluorescence (XRF), total reflection X-ray fluorescence (TRXRF) and particle-induced X-ray emission (PIXE) methods. The analytical methods used were compared in terms of their detection limits and applicability for studying the trace elements in large populations of biomedical samples. As a result, the XRF and TRXRF methods were selected for the trace element concentration measurements in urine and full-term human placenta samples. The measured trace element concentration distributions were found to be strongly asymmetric and described by the logarithmic-normal distribution. Such a distribution is expected for the random sequential process, which realistically models the level of trace elements in the studied biomedical samples. The importance and consequences of this finding are discussed, especially in the context of comparison of the concentration measurements in different populations of biomedical samples.
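
    As a hedged illustration of the statistical argument (not the paper's data), the snippet below simulates a multiplicative "random sequential" process, i.e. a product of many independent positive factors, and shows that the resulting synthetic concentrations are strongly right-skewed while their logarithms are approximately normally distributed; the factor distribution and baseline level are arbitrary.

```python
# Hedged sketch: a product of many independent positive factors yields an
# approximately log-normal distribution (central limit theorem on the logs).
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
factors = rng.uniform(0.5, 1.5, size=(10000, 50))     # 50 multiplicative steps per sample
concentrations = 100.0 * factors.prod(axis=1)          # arbitrary baseline level
print("skewness of concentrations:", round(skew(concentrations), 2))        # strongly asymmetric
print("skewness of log-values    :", round(skew(np.log(concentrations)), 2))  # close to 0
```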

  19. Computational design of low aspect ratio wing-winglets for transonic wind-tunnel testing

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Brown, Christopher K.

    1989-01-01

    A computational design has been performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three planforms has been selected to be constructed as a wind tunnel model for testing in the NASA LaRC 7 x 10 High Speed Wind Tunnel. A design point of M = 0.8, CL approx = 0.3 was selected, for wings of aspect ratio equal to 2.2, and leading edge sweep angles of 45 and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg, and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. This report summarizes the design process and the predicted transonic performance for each configuration.

  20. Computational design of low aspect ratio wing-winglet configurations for transonic wind-tunnel tests

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Brown, Christopher K.

    1988-01-01

    A computational design has been performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three planforms has been selected to be constructed as a wind tunnel model for testing in the NASA LaRC 7 x 10 High Speed Wind Tunnel. A design point of M = 0.8, CL approx = 0.3 was selected, for wings of aspect ratio equal to 2.2, and leading edge sweep angles of 45 and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg, and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. This report summarizes the design process and the predicted transonic performance for each configuration.

  1. Development of K-Version of the Finite Element Method: A Robust Mathematical and Computational Procedure

    DTIC Science & Technology

    2006-02-01

    International Journal of Computational Methods for Fluids, in review. V. Prabhakar and J. N. Reddy, "Orthogonality of Modal Bases," International Journal of Computational Methods for Fluids, in review. "...Least-Squares Finite Element Model for Incompressible Navier-Stokes Equations," International Journal of Computational Methods for Fluids, in review.

  2. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
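
    The snippet below is a hedged sketch of idea (a), not the authors' code: the conjugate gradient method applied as a direct minimizer of the energy functional E(u) = 1/2 u^T K u - f^T u, which solves K u = f without factorizing K (so no fill-in) and needs only matrix-vector products; the small stiffness matrix is an arbitrary stand-in.

```python
# Hedged sketch: solve K u = f by minimizing E(u) = 1/2 u^T K u - f^T u with the
# conjugate gradient method (no factorization, only K*v products).
import numpy as np

def energy_minimize(K, f, tol=1e-10, max_iter=1000):
    u = np.zeros_like(f)
    r = f - K @ u                 # residual = negative gradient of E(u)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rs / (p @ Kp)     # exact line search along p for the quadratic E
        u += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

# Usage: a small SPD "stiffness" matrix standing in for an FE system.
K = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
f = np.array([1.0, 2.0, 3.0])
print(np.allclose(K @ energy_minimize(K, f), f))
```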

  3. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of the application depends on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
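
    The snippet below is a hedged illustration of that observation in the form of Amdahl's law: with a sequential fraction s of the work, the speedup on p processors is bounded by 1/s no matter how many processors are added; the 20% sequential fraction used here is an arbitrary example, not a measured value from CSTEM or METCAN.

```python
# Hedged sketch: Amdahl's law, speedup = 1 / (s + (1 - s)/p) for sequential fraction s.
def amdahl_speedup(s, p):
    """s: sequential fraction of the work, p: number of processors."""
    return 1.0 / (s + (1.0 - s) / p)

for p in (2, 8, 32, 128):
    print(p, round(amdahl_speedup(0.2, p), 2))   # a 20% sequential fraction caps speedup below 5x
```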

  4. Finite Element Method for Thermal Analysis. [with computer program

    NASA Technical Reports Server (NTRS)

    Heuser, J.

    1973-01-01

    A two- and three-dimensional, finite-element thermal-analysis program which handles conduction with internal heat generation, convection, radiation, specified flux, and specified temperature boundary conditions is presented. Elements used in the program are the triangle and tetrahedron for two- and three-dimensional analysis, respectively. The theory used in the program is developed, and several sample problems demonstrating the capability and reliability of the program are presented. A guide to using the program, description of the input cards, and program listing are included.
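
    The snippet below is a hedged sketch, not the report's own program: it forms the conduction stiffness (conductance) matrix of a single linear three-node triangle from its nodal coordinates, the building block that such a program assembles over the mesh; the conductivity, thickness, and coordinates are arbitrary.

```python
# Hedged sketch: conduction stiffness matrix of a linear (3-node) triangle,
# Ke = k * t * A * B^T B, with B built from the constant shape-function gradients.
import numpy as np

def triangle_conduction_ke(xy, k=1.0, t=1.0):
    """xy: (3, 2) nodal coordinates; k: conductivity; t: thickness."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    # Gradients of the linear shape functions (constant over the element).
    b = np.array([y2 - y3, y3 - y1, y1 - y2]) / (2.0 * area)
    c = np.array([x3 - x2, x1 - x3, x2 - x1]) / (2.0 * area)
    B = np.vstack([b, c])                        # 2 x 3 gradient matrix
    return k * t * area * (B.T @ B)              # 3 x 3 element conductance matrix

Ke = triangle_conduction_ke(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
print(Ke)   # each row sums to zero: a uniform temperature field produces no flux
```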

  5. Acceleration of low order finite element computation with GPUs (Invited)

    NASA Astrophysics Data System (ADS)

    Knepley, M. G.

    2010-12-01

    Considerable effort has been focused on the acceleration using GPUs of high order spectral element methods and discontinuous Galerkin finite element methods. However, these methods are not universally applicable, and much of the existing FEM software base employs low order methods. In this talk, we present a formulation of FEM, using the PETSc framework from ANL, which is amenable to GPU acceleration even at very low order. In addition, using the FEniCS system for FEM, we show that the relevant kernels can be automatically generated and optimized using a symbolic manipulation system.

  6. Computational aspects of zonal algorithms for solving the compressible Navier-Stokes equations in three dimensions

    NASA Technical Reports Server (NTRS)

    Holst, T. L.; Thomas, S. D.; Kaynak, U.; Gundy, K. L.; Flores, J.; Chaderjian, N. M.

    1985-01-01

    Transonic flow fields about wing geometries are computed using an Euler/Navier-Stokes approach in which the flow field is divided into several zones. The flow field immediately adjacent to the wing surface is resolved with fine grid zones and solved using a Navier-Stokes algorithm. Flow field regions removed from the wing are resolved with less finely clustered grid zones and are solved with an Euler algorithm. Computational issues associated with this zonal approach, including data base management aspects, are discussed. Solutions are obtained that are in good agreement with experiment, including cases with significant wind tunnel wall effects. Additional cases with significant shock induced separation on the upper wing surface are also presented.

  7. Nutritional Aspects of Essential Trace Elements in Oral Health and Disease: An Extensive Review

    PubMed Central

    Hussain, Mohsina

    2016-01-01

    Human body requires certain essential elements in small quantities and their absence or excess may result in severe malfunctioning of the body and even death in extreme cases because these essential trace elements directly influence the metabolic and physiologic processes of the organism. Rapid urbanization and economic development have resulted in drastic changes in diets with developing preference towards refined diet and nutritionally deprived junk food. Poor nutrition can lead to reduced immunity, augmented vulnerability to various oral and systemic diseases, impaired physical and mental growth, and reduced efficiency. Diet and nutrition affect oral health in a variety of ways with influence on craniofacial development and growth and maintenance of dental and oral soft tissues. Oral potentially malignant disorders (OPMD) are treated with antioxidants containing essential trace elements like selenium but even increased dietary intake of trace elements like copper could lead to oral submucous fibrosis. The deficiency or excess of other trace elements like iodine, iron, zinc, and so forth has a profound effect on the body and such conditions are often diagnosed through their early oral manifestations. This review appraises the biological functions of significant trace elements and their role in preservation of oral health and progression of various oral diseases. PMID:27433374

  8. Validation of the NESSUS probabilistic finite element analysis computer program

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Burnside, O. H.

    1988-01-01

    A computer program, NESSUS, is being developed as part of a NASA-sponsored project to develop probabilistic structural analysis methods for propulsion system components. This paper describes the process of validating the NESSUS code, as it has been developed to date, and presents numerical results comparing NESSUS and exact solutions for a set of selected problems.

  9. Cells on biomaterials--some aspects of elemental analysis by means of electron probes.

    PubMed

    Tylko, G

    2016-02-01

    Electron probe X-ray microanalysis enables concomitant observation of specimens and analysis of their elemental composition. The method is attractive for engineers developing tissue-compatible biomaterials. Changes in the element composition of either the cells or the biomaterial can be determined according to well-established preparation and quantification procedures. However, the qualitative and quantitative elemental analysis appears more complicated when cells or thin tissue sections are deposited on biomaterials. X-ray spectra generated at the cell/tissue-biomaterial interface are modelled using a Monte Carlo simulation of a cell deposited on borosilicate glass. Enhanced electron backscattering from borosilicate glass was noted until the thickness of the biological layer deposited on the substrate reached 1.25 μm. This resulted in a significant increase in X-ray intensities typical for the elements present in the cellular part. In this case, the mean atomic number of the biomaterial determines the strength of this effect. When elements are present in the cells only, a positive linear relationship appears between X-ray intensities and cell thickness. Then, the spatial dimensions of X-ray emission for the particular elements lie exclusively within the biological part and the intensities of X-rays become constant. When the elements are present in both the cell and the biomaterial, X-ray intensities are registered for the biological part and the substrate simultaneously, leading to a negative linear relationship of X-ray intensities as a function of cell thickness. In the case of the analysis of an element typical for the biomaterial, a strong decrease in X-ray emission is observed as a function of cell thickness, as the effect of X-ray absorption and of the excitation range being limited to the biological part rather than the substrate. Correction procedures for calculations of element concentrations in thin films and coatings deposited on substrates are well established in

  10. Technical and clinical aspects of spectrometric analysis of trace elements in clinical samples.

    PubMed

    Chan, S; Gerson, B; Reitz, R E; Sadjadi, S A

    1998-12-01

    The capabilities of ICP-MS far exceed the slow, single-element analysis of GFAAS for determination of multiple trace elements. Additionally, its sensitivity is superior to that of DCP, ICP, and FAAS. The analytic procedure for ICP-MS is relatively straightforward and bypasses the need for digestion in many cases. It enables the physician to identify the target trace element(s) in intoxication cases, nutritional deficiency, or disease, thus eliminating the treatment delays experienced with sequential testing methods. This technology has its limitations as well. The ICP-MS cannot be used in the positive ion mode to analyze with sufficient sensitivity highly electronegative elements such as fluorine, because F+ is unstable and forms only by very high ionization energy. The ICP mass spectrometers used in most commercial laboratories utilize the quadrupole mass selector, which is limited by low resolution and, thus, by the various interferences previously discussed. For example, when an argon plasma is used, selenium (m/e 80) and chromium (m/e 52) in serum, plasma, and blood specimens are subject to polyatomic and molecular ion interferences. Low-resolution ICP mass spectrometers can therefore be used to analyze many trace elements, but they are not universal analyzers. High-resolution ICP-MS can resolve these interferences, but with greater expense. With the advent of more research and development of new techniques, some of these difficulties may be overcome, making this technique even more versatile. Contamination during sample collection and analysis causes falsely elevated results. Attention and care must be given to avoid contamination. Proper collection devices containing negligible amounts of trace elements should be used. Labware, preferably plastic and not glass, must be decontaminated prior to use by acid-washing and rinsed with de-ionized water. A complete description of sample collection and contamination has been written by Aitio and

  11. Formulation and computational aspects of plasticity and damage models with application to quasi-brittle materials

    SciTech Connect

    Chen, Z.; Schreyer, H.L.

    1995-09-01

    The response of underground structures and transportation facilities under various external loadings and environments is critical for human safety as well as environmental protection. Since quasi-brittle materials such as concrete and rock are commonly used for underground construction, the constitutive modeling of these engineering materials, including post-limit behaviors, is one of the most important aspects in safety assessment. From experimental, theoretical, and computational points of view, this report considers the constitutive modeling of quasi-brittle materials in general and concentrates on concrete in particular. Based on the internal variable theory of thermodynamics, the general formulations of plasticity and damage models are given to simulate two distinct modes of microstructural changes, inelastic flow and degradation of material strength and stiffness, that identify the phenomenological nonlinear behaviors of quasi-brittle materials. The computational aspects of plasticity and damage models are explored with respect to their effects on structural analyses. Specific constitutive models are then developed in a systematic manner according to the degree of completeness. A comprehensive literature survey is made to provide the up-to-date information on prediction of structural failures, which can serve as a reference for future research.
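
    As a rough, self-contained illustration of the damage ingredient described above, and not the report's actual formulation, the following sketch implements a one-dimensional isotropic damage law with exponential softening, where stress = (1 − d)·E·strain and d grows irreversibly with a strain-history variable; the modulus and softening parameters are illustrative assumptions.

      import numpy as np

      E = 30e9        # Young's modulus [Pa], illustrative value for concrete
      eps0 = 1.0e-4   # damage-initiation strain (assumed)
      eps_f = 5.0e-4  # softening parameter controlling degradation rate (assumed)

      def damage(kappa):
          # scalar damage variable driven by the strain-history variable kappa
          if kappa <= eps0:
              return 0.0
          return 1.0 - (eps0 / kappa) * np.exp(-(kappa - eps0) / eps_f)

      def stress_update(eps, kappa):
          # irreversibility: kappa only grows, so unloading is elastic with degraded stiffness
          kappa = max(kappa, abs(eps))
          return (1.0 - damage(kappa)) * E * eps, kappa

      kappa = 0.0
      for eps in np.linspace(0.0, 1.0e-3, 11):
          sigma, kappa = stress_update(eps, kappa)
          print(f"eps={eps:.1e}  sigma={sigma / 1e6:7.2f} MPa  d={damage(kappa):.3f}")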

  12. Verification of a Non-Hydrostatic Dynamical Core Using Horizontally Spectral Element Vertically Finite Difference Method: 2D Aspects

    DTIC Science & Technology

    2014-04-01

    Reported results include perturbation potential temperature values θ′ in the range [−1.51 × 10⁻³, 2.78 × 10⁻³] from the model based on the spectral element and discontinuous Galerkin methods; Li et al. (2013) is also cited. Related references include a 2008 study of spectral element and discontinuous Galerkin methods for the Navier-Stokes equations in nonhydrostatic mesoscale atmospheric modeling and Kelly and Giraldo (2012) on continuous and discontinuous Galerkin methods.

  13. Computational solution of acoustic radiation problems by Kussmaul's boundary element method

    NASA Astrophysics Data System (ADS)

    Kirkup, S. M.; Henwood, D. J.

    1992-10-01

    The problem of computing the properties of the acoustic field exterior to a vibrating surface for the complete wavenumber range by the boundary element method is considered. A particular computational method based on the Kussmaul formulation is described. The method is derived through approximating the surface by a set of planar triangles and approximating the surface functions by a constant on each element. The method is successfully applied to test problems and to the Ricardo crankcase simulation rig.
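
    The constant-element discretization idea (centroid collocation with one value per planar triangle) can be sketched as follows; this is only a toy single-layer assembly with the free-space Helmholtz Green's function and a naive treatment of the singular self term, not the Kussmaul formulation itself, and the geometry is made up for illustration.

      import numpy as np

      def centroid_area(v0, v1, v2):
          # centroid and area of a planar triangle
          return (v0 + v1 + v2) / 3.0, 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))

      def green(r, k):
          # free-space Helmholtz Green's function exp(ikr) / (4*pi*r)
          return np.exp(1j * k * r) / (4.0 * np.pi * r)

      def assemble_single_layer(triangles, k):
          # L[i, j] ~ integral of G over element j, collocated at the centroid of element i
          data = [centroid_area(*t) for t in triangles]
          n = len(data)
          L = np.zeros((n, n), dtype=complex)
          for i, (ci, _) in enumerate(data):
              for j, (cj, aj) in enumerate(data):
                  if i == j:
                      continue  # the singular self term needs special (analytic) quadrature
                  L[i, j] = green(np.linalg.norm(ci - cj), k) * aj
          return L

      tris = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
              (np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0]))]
      print(assemble_single_layer(tris, k=2.0))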

  14. Experience with automatic, dynamic load balancing and adaptive finite element computation

    SciTech Connect

    Wheat, S.R.; Devine, K.D.; Maccabe, A.B.

    1993-10-01

    Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical features such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing workloads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
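
    A minimal sketch of the neighborhood idea, assuming a one-dimensional ring of processors and a diffusion-style exchange of load with immediate neighbors only; the actual system migrates individual elements together with their data, which this toy omits.

      import numpy as np

      def balance_step(loads, alpha=0.25):
          # each processor sheds a fraction of its imbalance to each neighbor (periodic ring)
          loads = loads.astype(float)
          left, right = np.roll(loads, 1), np.roll(loads, -1)
          return loads + alpha * (left - loads) + alpha * (right - loads)

      loads = np.array([120, 30, 45, 200, 60, 55, 90, 40])
      for _ in range(20):
          loads = balance_step(loads)
      print(np.round(loads, 1))  # local sweeps drive all processors toward the mean load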

  15. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods; the new methodology is validated by computing demonstration problems and comparing the resulting stress intensity factors with analytical results.

  16. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    ERIC Educational Resources Information Center

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  17. Analytical Aspects of EPMA for Trace Element Analysis in Complex Accessory Minerals

    NASA Astrophysics Data System (ADS)

    Jercinovic, M. J.; Williams, M. L.; Lane, E.

    2007-12-01

    High-resolution microanalysis of complex REE-bearing accessory phases is becoming increasingly necessary for insight into the chronology of phase growth and tectonic histories, and for understanding the mechanisms and manifestations of growth and dissolution reactions. The in-situ analysis of very small grains, inclusions, and sub-domains is revolutionizing our understanding of the evolution of complexly deformed, multiply metamorphosed rocks. Great progress has been made in refining analytical protocols, and improvements in instrumentation have yielded unprecedented analytical precision and spatial resolution. As signal/noise improves, complexity is revealed, illustrating the level of care that must go into obtaining meaningful results and into adopting an appropriate approach to minimize error. Background measurement is most critical for low-concentration elements. Errors on net intensity values resulting from improper background measurement alone can exceed 50% relative. Regression and modeling of the background spectrum is essential, and must be carried out independently for each spectrometer, regardless of instrument. In complex materials such as REE-bearing phosphates, high concentrations of REEs and actinides create difficult analytical challenges, as numerous emission lines and absorption edges cause great spectral complexity. In addition, trace concentrations of "unexpected" emission lines, such as those from sulfur, or fluoresced from nearby phases (Ti, K), cause interferences on both measured peaks and background regions which can result in very large errors on target elements (U, Pb, etc.), on the order of 10s to 100s of ppm. Characteristic X-ray emission involving electron transitions from the valence shell is subject to measurable peak shifts, in some cases significantly affecting the accuracy of results if not accounted for. Geochronology by EPMA involves careful measurement of all constituent elements, with the calculated date dependent on the

  18. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    None

    2016-07-12

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
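
    As one standard example of a summation-accuracy technique of the kind mentioned above (not necessarily the one covered in the talk), compensated (Kahan) summation keeps a running correction term for the low-order bits lost at each addition:

      def kahan_sum(values):
          total = 0.0
          c = 0.0                  # running compensation for lost low-order bits
          for x in values:
              y = x - c
              t = total + y        # low-order bits of y may be lost here...
              c = (t - total) - y  # ...and are recovered into c for the next step
              total = t
          return total

      data = [1.0] + [1e-16] * 1_000_000
      print(sum(data))        # 1.0 -- the small terms are all absorbed
      print(kahan_sum(data))  # ~1.0000000001 -- compensated result keeps them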

  19. Computer modeling of batteries from non-linear circuit elements

    NASA Technical Reports Server (NTRS)

    Waaben, S.; Federico, J.; Moskowitz, I.

    1983-01-01

    A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.

  20. A computer program for anisotropic shallow-shell finite elements using symbolic integration

    NASA Technical Reports Server (NTRS)

    Andersen, C. M.; Bowen, J. T.

    1976-01-01

    A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes). The triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, the geometric stiffness, the consistent mass, and the consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using the symbolic and algebraic manipulation language.

  1. Computation of Schenberg response function by using finite element modelling

    NASA Astrophysics Data System (ADS)

    Frajuca, C.; Bortoli, F. S.; Magalhaes, N. S.

    2016-05-01

    Schenberg is a resonant-mass gravitational wave detector with a central operating frequency of 3200 Hz. Transducers located on the surface of the resonant sphere, according to a half-dodecahedron distribution, are used to monitor the strain amplitude. The development of mechanical impedance matchers that increase the coupling of the transducers to the sphere is a major challenge because of the high frequency and the small sizes involved. The objective of this work is to study the Schenberg response function obtained by finite element modeling (FEM). Finally, the result is compared with that of a simplified mass-spring model to verify whether the simplified model is suitable for determining the detector sensitivity; the conclusion is that both models give the same results.
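
    A minimal sketch of the kind of simplified mass-spring comparison mentioned above, assuming a generic two-degree-of-freedom chain (an effective sphere mode plus a transducer) rather than the actual Schenberg parameters; the coupled-mode frequencies follow from the generalized eigenvalue problem K x = ω² M x.

      import numpy as np
      from scipy.linalg import eigh

      m1, m2 = 1150.0, 0.05                   # effective masses [kg], illustrative only
      k1 = m1 * (2 * np.pi * 3200.0) ** 2     # both springs tuned near 3200 Hz
      k2 = m2 * (2 * np.pi * 3200.0) ** 2

      M = np.diag([m1, m2])
      K = np.array([[k1 + k2, -k2],
                    [-k2,      k2]])

      w2, _ = eigh(K, M)                      # generalized eigenvalues omega^2
      print(np.sqrt(w2) / (2 * np.pi))        # the two coupled-mode frequencies [Hz]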

  2. Aspects of bioanalytical method validation for the quantitative determination of trace elements.

    PubMed

    Levine, Keith E; Tudan, Christopher; Grohse, Peter M; Weber, Frank X; Levine, Michael A; Kim, Yu-Seon J

    2011-08-01

    Bioanalytical methods are used to quantitatively determine the concentration of drugs, biotransformation products or other specified substances in biological matrices and are often used to provide critical data to pharmacokinetic or bioequivalence studies in support of regulatory submissions. In order to ensure that bioanalytical methods are capable of generating reliable, reproducible data that meet or exceed current regulatory guidance, they are subjected to a rigorous method validation process. At present, regulatory guidance does not necessarily account for nuances specific to trace element determinations. This paper is intended to provide the reader with guidance related to trace element bioanalytical method validation from the authors' perspective for two prevalent and powerful instrumental techniques: inductively coupled plasma-optical emission spectrometry and inductively coupled plasma-MS.

  3. Aspects of the history of 66095 based on trace elements in clasts and whole rock

    SciTech Connect

    Jovanovic, S.; Reed, G.W. Jr.

    1981-01-01

    Halogens, P, U and Na are reported in anorthositic and basaltic clasts and matrix from rusty rock 66095. Large fractions of the Cl and Br associated with the separated phases from 66095 are soluble in H₂O. Up to two orders of magnitude variation in the concentrations of these elements in the breccia components and varying H₂O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H₂O- to 0.1 M HNO₃-soluble Br in the various components suggests no appreciable alteration in the original distributions of this element in the breccia-forming processes. Up to 50% or more of the phosphorus and of the non-H₂O-soluble Cl was dissolved from most of the breccia components by 0.1 M HNO₃. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P₂O₅ ratio found in 66095 whole rock and in a number of other Apollo 16 samples. Evidence that phosphates are the major P-phases in the breccia is based on the 0.1 M acid solubility of Cl and P in the matrix sample and on elemental concentrations which are consistent with those of KREEP.

  4. Aspects of the history of 66095 based on trace elements in clasts and whole rock

    SciTech Connect

    Jovanovic, S.; Reed, G.W. Jr.

    1981-01-01

    Large fractions of the Cl and Br associated with separated anorthositic and basaltic clasts and matrix from rusty rock 66095 are soluble in H₂O. Up to two orders of magnitude variation in the concentrations of these elements in the breccia components and varying H₂O-soluble Cl/Br ratios indicate different sources of volatiles. An approximately constant ratio of the H₂O- to acid-soluble Br, i.e. surface deposits vs possibly phosphate-related Br, suggests no appreciable alteration in the original distributions of this element. Weak acid leaching dissolved approx. 50% or more of the phosphorus and of the remaining Cl from most of the breccia components. Clast and matrix residues from the leaching steps contain, in most cases, the Cl/P₂O₅ ratio found in 66095 whole rock and in a number of other Apollo 16 samples. No dependence on degree of brecciation is indicated. The clasts are typical of Apollo 16 rocks. Matrix leaching results and element concentrations suggest that apatite-whitlockite is a component of KREEP.

  5. Computational aspects in modelling the interaction of low-energy X-rays with liquid scintillators.

    PubMed

    Grau Carles, A; Grau Malonda, A

    2006-01-01

    The commercial liquid scintillators available nowadays are mostly complex cocktails that frequently include non-negligible amounts of elements heavier than the commonly expected carbon or hydrogen. In May 1993, nine laboratories agreed to participate, in the frame of the EUROMET project, in a comparison of the activity concentration measurement of 55Fe. One notable aspect of the results was a small systematic difference between the activity concentrations obtained with Ultima Gold and Insta Gel. The detection of the radiation emitted by EC nuclides involves, in addition to the atomic rearrangement generated by the capture of the electron by the nucleus, a frequently ignored secondary atomic rearrangement process due to photoionization. Such a process can be neglected for scintillators that only contain hydrogen and carbon, e.g., toluene, but must be taken into account when the EC nuclide solution is incorporated into cocktails with heavier elements, e.g., Ultima Gold. Over the past year, an improved version of the program EMI has been developed. This code adds the photoionization reduced-energy correction to the previous versions and successfully explains the systematic difference between the measured activity concentrations of 55Fe in Ultima Gold and Insta Gel.

  6. Computation of vibration mode elastic-rigid and effective weight coefficients from finite-element computer program output

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1991-01-01

    Post-processing algorithms are given to compute the vibratory elastic-rigid coupling matrices and the modal contributions to the rigid-body mass matrices and to the effective modal inertias and masses. Recomputation of the elastic-rigid coupling matrices for a change in origin is also described. A computational example is included. The algorithms can all be executed by using standard finite-element program eigenvalue analysis output with no changes to existing code or source programs.
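
    A minimal sketch of the effective-mass computation on a toy three-mass model (not the reference algorithms): with mass-normalized mode shapes Phi, a mass matrix M, and a rigid-body influence vector r, the modal participation factors are L = Phiᵀ M r and the effective modal masses are L², which sum to the total rigid-body mass rᵀ M r.

      import numpy as np
      from scipy.linalg import eigh

      M = np.diag([2.0, 1.0, 1.0])                   # lumped masses (illustrative)
      K = np.array([[ 400.0, -200.0,    0.0],
                    [-200.0,  400.0, -200.0],
                    [   0.0, -200.0,  200.0]])

      w2, Phi = eigh(K, M)          # Phi is mass-normalized: Phi.T @ M @ Phi = I
      r = np.ones(3)                # rigid-body (unit translation) influence vector
      L = Phi.T @ M @ r             # modal participation factors
      m_eff = L ** 2                # effective modal masses for mass-normalized modes
      print(m_eff, m_eff.sum(), r @ M @ r)   # effective masses sum to the total mass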

  7. Contours identification of elements in a cone beam computed tomography for investigating maxillary cysts

    NASA Astrophysics Data System (ADS)

    Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia

    2013-10-01

    Digital processing of two-dimensional cone beam computed tomography slices starts with identification of the contours of the elements within them. This paper presents the collective work of specialists in medicine and in applied mathematics and computer science on the elaboration and implementation of algorithms for dental 2D imagery.
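
    A minimal sketch of contour identification on a single slice, assuming the slice is available as a NumPy array and using scikit-image's marching-squares routine; the intensity threshold and the synthetic data are illustrative, not taken from the paper.

      import numpy as np
      from skimage import measure

      slice_2d = np.random.default_rng(0).random((256, 256))   # stand-in for a CBCT slice
      slice_2d[96:160, 96:160] += 2.0                           # synthetic high-intensity region

      contours = measure.find_contours(slice_2d, level=1.5)     # iso-intensity contours
      for c in contours:
          print(f"contour with {len(c)} points, starting at {c[0]}")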

  8. JCMmode: an adaptive finite element solver for the computation of leaky modes

    NASA Astrophysics Data System (ADS)

    Zschiedrich, Lin W.; Burger, Sven; Klose, Roland; Schaedle, Achim; Schmidt, Frank

    2005-03-01

    We present our simulation tool JCMmode for calculating the propagating modes of an optical waveguide. As ansatz functions we use higher-order vectorial elements (Nédélec elements, edge elements). Further, we construct transparent boundary conditions to deal with leaky modes even for problems with inhomogeneous exterior domains, such as integrated hollow-core ARROW waveguides. We have implemented an error estimator which steers the adaptive mesh refinement. This allows the precise computation of singularities near the metal corner of a plasmon-polariton waveguide, even for irregularly shaped metal films, on a standard personal computer.

  9. Navier-Stokes computations of vortical flows over low aspect ratio wings

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Taylor, S. L.; Anderson, W. K.

    1987-01-01

    An upwind-biased finite-volume algorithm is applied to the low-speed flow over a low aspect ratio delta wing from zero to forty degrees angle of attack. The differencing is second-order accurate spatially, and a multigrid algorithm is used to promote convergence to the steady state. The results compare well with the detailed experiments of Hummel (1983) and others for Re_L = 0.95 × 10⁶. The predicted maximum lift coefficient of 1.10 at thirty-five degrees angle of attack agrees closely with the measured maximum lift of 1.06 at thirty-three degrees. At forty degrees angle of attack, a bubble type of vortex breakdown is evident in the computations, extending from 0.6 of the root chord to just downstream of the trailing edge.

  10. Multibody system dynamics for bio-inspired locomotion: from geometric structures to computational aspects.

    PubMed

    Boyer, Frédéric; Porez, Mathieu

    2015-03-26

    This article presents a set of generic tools for multibody system dynamics devoted to the study of bio-inspired locomotion in robotics. First, archetypal examples from the field of bio-inspired robot locomotion are presented to prepare the ground for further discussion. The general problem of locomotion is then stated. In considering this problem, we progressively draw a unified geometric picture of locomotion dynamics. For that purpose, we start from the model of discrete mobile multibody systems (MMSs) that we progressively extend to the case of continuous and finally soft systems. Beyond these theoretical aspects, we address the practical problem of the efficient computation of these models by proposing a Newton-Euler-based approach to efficient locomotion dynamics with a few illustrations of creeping, swimming, and flying.

  11. Modeling of Rolling Element Bearing Mechanics: Computer Program Updates

    NASA Technical Reports Server (NTRS)

    Ryan, S. G.

    1997-01-01

    The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.

  12. Computational design of low aspect ratio wing-winglet configurations for transonic wind-tunnel tests

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Brown, Christopher K.

    1989-01-01

    Computational designs were performed for three different low aspect ratio wing planforms fitted with nonplanar winglets; one of the three configurations was selected to be constructed as a wind tunnel model for testing in the NASA LaRC 8-foot transonic pressure tunnel. A design point of M = 0.8 and C_L ≈ 0.3 was selected, for wings of aspect ratio equal to 2.2, and leading edge sweep angles of 45 deg and 50 deg. Winglet length is 15 percent of the wing semispan, with a cant angle of 15 deg, and a leading edge sweep of 50 deg. Winglet total area equals 2.25 percent of the wing reference area. The design process and the predicted transonic performance are summarized for each configuration. In addition, a companion low-speed design study was conducted, using one of the transonic design wing-winglet planforms but with different camber and thickness distributions. A low-speed wind tunnel model was constructed to match this low-speed design geometry, and force coefficient data were obtained for the model at speeds of 100 to 150 ft/sec. Measured drag coefficient reductions were of the same order of magnitude as those predicted by numerical subsonic performance predictions.

  13. Finite element solution techniques for large-scale problems in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1987-01-01

    Element-by-element approximate factorization, implicit-explicit and adaptive implicit-explicit approximation procedures are presented for the finite-element formulations of large-scale fluid dynamics problems. The element-by-element approximation scheme totally eliminates the need for formation, storage and inversion of large global matrices. Implicit-explicit schemes, which are approximations to implicit schemes, substantially reduce the computational burden associated with large global matrices. In the adaptive implicit-explicit scheme, the implicit elements are selected dynamically based on element level stability and accuracy considerations. This scheme provides implicit refinement where it is needed. The methods are applied to various problems governed by the convection-diffusion and incompressible Navier-Stokes equations. In all cases studied, the results obtained are indistinguishable from those obtained by the implicit formulations.
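
    A minimal sketch of the element-by-element idea on a 1D Poisson model problem: the global stiffness matrix is never assembled, and the matrix-vector product needed inside a conjugate-gradient loop is accumulated element by element (the problem and boundary handling are illustrative).

      import numpy as np

      n_el = 50
      h = 1.0 / n_el
      ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # 1D linear element stiffness

      def ebe_matvec(u):
          # y = K u computed without forming K; homogeneous Dirichlet ends
          y = np.zeros_like(u)
          for e in range(n_el):
              dofs = [e, e + 1]
              y[dofs] += ke @ u[dofs]
          y[0], y[-1] = u[0], u[-1]
          return y

      b = h * np.ones(n_el + 1)      # consistent load for f = 1
      b[0] = b[-1] = 0.0

      x = np.zeros_like(b)
      r = b - ebe_matvec(x)
      p = r.copy()
      for _ in range(200):           # plain conjugate gradients using only ebe_matvec
          Ap = ebe_matvec(p)
          alpha = (r @ r) / (p @ Ap)
          x += alpha * p
          r_new = r - alpha * Ap
          if np.linalg.norm(r_new) < 1e-10:
              break
          p = r_new + ((r_new @ r_new) / (r @ r)) * p
          r = r_new
      print(x[n_el // 2])            # ~0.125, the exact midpoint value for -u'' = 1, u(0)=u(1)=0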

  14. Broadband aspects of a triple-patch antenna as an array element

    NASA Astrophysics Data System (ADS)

    Revankar, U. K.; Kumar, A.

    The design of radiating elements having wider bandwidths is an area of major interest in printed antenna technology. This paper describes a novel circular microstrip antenna adopting a three-layer stacked structure presenting a wider bandwidth as high as 20 percent with a low cross-polarization level and a high directive gain. Detailed experimental investigations are carried out on the effects of interlayer spacings and the thickness of the parasitic layers on the impedance bandwidth, 3-dB beamwidth and pattern shape.

  15. A new hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

  16. Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment

    PubMed Central

    Imai, Kazuhiro

    2015-01-01

    Finite element analysis (FEA) is a computer technique for structural stress analysis that was developed in engineering mechanics. Over the past 40 years, FEA has been applied to investigate the structural behavior of human bones. As faster computers became available, improved FEA based on three-dimensional computed tomography (CT) was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA that evaluate its accuracy and reliability in human bones, and clinical application studies to assess fracture risk and the effects of osteoporosis medication are surveyed. PMID:26309819

  17. Computation of scattering matrix elements of large and complex shaped absorbing particles with multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang

    2015-05-01

    Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Because of the absorbing effect, the light scattering properties of particles with absorption differ from those without absorption. Simply shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large, complex-shaped particles has been reported. In this paper, the surface integral equation (SIE) with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches to model the whole surface of the particle, hence the computational resource needs increase much more slowly with the particle size parameter than for volume-discretized methods. To further improve its capability, MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed-memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and the discrete dipole approximation method (DDA) for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex-shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.

  18. Recurrent networks with recursive processing elements: paradigm for dynamical computing

    NASA Astrophysics Data System (ADS)

    Farhat, Nabil H.; del Moral Hernandez, Emilio

    1996-11-01

    It was shown earlier that models of cortical neurons can, under certain conditions of coherence in their input, behave as recursive processing elements (PEs) that are characterized by an iterative map on the phase interval and by bifurcation diagrams that demonstrate the complex encoding cortical neurons might be able to perform on their input. Here we present results of numerical experiments carried out on a recurrent network of such recursive PEs modeled by the logistic map. Network behavior is studied under a novel scheme for generating complex spatio-temporal input patterns that can range from coherent to partially coherent to completely incoherent. A nontraditional nonlinear coupling scheme between neurons is employed to incorporate recent findings in brain science, namely that neurons use more than one kind of neurotransmitter in their chemical signaling. It is shown that such networks have the capacity to 'self-anneal' or collapse into period-m attractors that are uniquely related to the stimulus pattern, following a transient 'chaotic' period during which the network searches its state space for the associated dynamic attractor. The network naturally accepts both dynamical and stationary input patterns. Moreover, we find that the use of quantized coupling strengths, introduced to reflect recent molecular biology and neurophysiological reports on synapse dynamics, endows the network with a clustering ability wherein, depending on the stimulus pattern, PEs in the network divide into phase-locked groups with the PEs in each group being synchronized in period-m orbits. The value of m is found to be the same for all clusters, and the number of clusters gives the dimension of the periodic attractor. The implications of these findings for higher-level processing such as feature binding and for the development of novel learning algorithms are briefly discussed.
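
    A minimal sketch in the same spirit, assuming a ring of logistic-map processing elements with weak diffusive coupling; the paper's nonlinear, quantized coupling scheme is considerably richer, so the parameters and coupling below are illustrative only.

      import numpy as np

      n, a, eps = 16, 3.83, 0.08           # a = 3.83 puts a single map in its period-3 window
      x = np.random.default_rng(0).random(n)

      def step(x):
          f = a * x * (1.0 - x)                                                 # local logistic update
          return (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))   # ring coupling

      for _ in range(2000):                # let the transient "search" die out
          x = step(x)

      orbit = []
      for _ in range(6):                   # record a few post-transient iterates
          x = step(x)
          orbit.append(x.copy())
      print(np.round(np.array(orbit), 3))  # typically settles into a low-period orbit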

  19. Mixing characteristics of injector elements in liquid rocket engines - A computational study

    NASA Astrophysics Data System (ADS)

    Lohr, Jonathan C.; Trinh, Huu P.

    1992-07-01

    A computational study has been performed to better understand the mixing characteristics of liquid rocket injector elements. Variations in injector geometry as well as differences in injector element inlet flow conditions are among the areas examined in the study. Most results involve the nonreactive mixing of gaseous fuel with gaseous oxidizer but preliminary results are included that involve the spray combustion of oxidizer droplets. The purpose of the study is to numerically predict flowfield behavior in individual injector elements to a high degree of accuracy and in doing so to determine how various injector element properties affect the flow.

  20. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
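
    A minimal sketch of damped Newton iteration on a generic coupled nonlinear system F(x) = 0; the damping parameter is chosen here by simple residual backtracking rather than the optimal selection used in the paper, and the test system is purely illustrative.

      import numpy as np

      def F(x):
          return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                           np.exp(x[0]) + x[1] - 1.0])

      def J(x):
          return np.array([[2.0 * x[0],   2.0 * x[1]],
                           [np.exp(x[0]), 1.0        ]])

      x = np.array([-2.0, 1.0])
      for it in range(30):
          dx = np.linalg.solve(J(x), -F(x))        # full Newton direction
          lam = 1.0
          while np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(F(x)) and lam > 1e-4:
              lam *= 0.5                           # damp the step until the residual drops
          x = x + lam * dx
          if np.linalg.norm(F(x)) < 1e-12:
              break
      print(it, x, F(x))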

  1. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
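
    A minimal sketch of the "overall finite difference" idea on a toy problem: the transient response of a single-degree-of-freedom oscillator is simply recomputed for perturbed stiffness values, and the peak-displacement sensitivity is taken as a central difference; all numbers are illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp

      m, f0 = 1.0, 1.0                             # mass and step-load amplitude (assumed)

      def peak_displacement(k):
          def rhs(t, y):                           # y = [x, xdot]
              return [y[1], (f0 - k * y[0]) / m]
          sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=0.01)
          return np.max(np.abs(sol.y[0]))

      k0, dk = 100.0, 1.0
      dpeak_dk = (peak_displacement(k0 + dk) - peak_displacement(k0 - dk)) / (2.0 * dk)
      print(peak_displacement(k0), dpeak_dk)       # peak ~ 2*f0/k0, sensitivity ~ -2*f0/k0**2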

  2. Computational local stiffness analysis of biological cell: High aspect ratio single wall carbon nanotube tip.

    PubMed

    TermehYousefi, Amin; Bagheri, Samira; Shahnazar, Sheida; Rahman, Md Habibur; Kadri, Nahrizul Adib

    2016-02-01

    Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to their robust mechanical properties, nanoscale diameter and also their ability to be functionalized by chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as an AFM tip in computational analysis of biological cells. The software used was ABAQUS 6.13 CAE/CEL provided by Dassault Systèmes, a powerful finite element (FE) tool used to perform the numerical analysis and visualize the interactions between the proposed tip and the cell membrane. Finite element analysis was employed for each section, and the displacement of the nodes located in the contact area was monitored using an output database (ODB). A Mooney-Rivlin hyperelastic model of the cell allows the simulation to provide a new method for estimating the stiffness and spring constant of the cell. The stress-strain curve indicates the yield point, characterized in terms of vertical and in-plane stress. The spring constant and local stiffness of the cell were determined, as well as the force applied by the CNT-AFM tip on the contact area of the cell. This reliable integration of the CNT-AFM tip process provides a new class of high-performance nanoprobes for single biological cell analysis.
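
    A minimal sketch, not the paper's procedure: an effective spring constant (local stiffness) can be estimated as the slope of a force-indentation curve via a least-squares linear fit; the data below are synthetic and the stiffness value is an assumption.

      import numpy as np

      indentation = np.linspace(0.0, 50e-9, 20)        # tip displacement [m]
      k_true = 0.02                                    # assumed cell stiffness [N/m]
      noise = 1e-11 * np.random.default_rng(1).standard_normal(indentation.size)
      force = k_true * indentation + noise             # synthetic force-indentation data [N]

      k_fit, _ = np.polyfit(indentation, force, 1)     # slope of the linear fit = spring constant
      print(f"estimated spring constant: {k_fit:.4f} N/m")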

  3. Numerical Aspects of Nonhydrostatic Implementations Applied to a Parallel Finite Element Tsunami Model

    NASA Astrophysics Data System (ADS)

    Fuchs, A.; Androsov, A.; Harig, S.; Hiller, W.; Rakowsky, N.

    2012-04-01

    Given the threat of devastating tsunamis and the unpredictability of such events, tsunami modelling as part of warning systems remains a topical issue. The tsunami group of the Alfred Wegener Institute developed the simulation tool TsunAWI as a contribution to the Early Warning System in Indonesia. Although the precomputed scenarios serve this purpose satisfactorily, the study of further improvements continues. While TsunAWI is governed by the Shallow Water Equations, an extension of the model is based on a nonhydrostatic approach. At the arrival of a tsunami wave in coastal regions with rough bathymetry, the term containing the nonhydrostatic part of the pressure, which is neglected in the original hydrostatic model, gains in importance. By taking this term into account, a better approximation of the wave is expected. Differences between hydrostatic and nonhydrostatic model results are contrasted in the standard benchmark problem of a solitary wave runup on a plane beach. The observation data provided by Titov and Synolakis (1995) serve as reference. The nonhydrostatic approach implies a set of equations that are similar to the Shallow Water Equations, so the modification can be implemented on top of the existing code. However, these additional routines introduce several issues that must be addressed. So far, the computations of the model had been purely explicit. In the nonhydrostatic version, the determination of an additional unknown and the solution of a large sparse system of linear equations are necessary. The latter constitutes the lion's share of computing time and memory requirements. Since the corresponding matrix is only symmetric in structure and not in values, an iterative Krylov subspace method is used, in particular the restarted Generalized Minimal Residual algorithm GMRES(m). With regard to optimization, we present a comparison of several combinations of sequential and parallel preconditioning techniques with respect to the number of iterations and setup
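
    A minimal sketch of the solver combination named above, assuming a generic sparse non-symmetric system rather than the TsunAWI matrices: SciPy's restarted GMRES with an incomplete-LU preconditioner.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import gmres, spilu, LinearOperator

      n = 500
      A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")  # non-symmetric
      b = np.ones(n)

      ilu = spilu(A, drop_tol=1e-4)                    # incomplete LU factorization
      M = LinearOperator((n, n), matvec=ilu.solve)     # use it as a preconditioner

      x, info = gmres(A, b, M=M, restart=30)           # GMRES(m) with m = 30
      print(info, np.linalg.norm(A @ x - b))           # info == 0 signals convergence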

  4. Permeability computation on a REV with an immersed finite element method

    SciTech Connect

    Laure, P.; Puaux, G.; Silva, L.; Vincent, M.

    2011-05-04

    An efficient method to compute permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale and the flow motion is computed with a stabilized mixed finite element method. Therefore the Stokes equation is solved on the whole domain (including solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the bibliography.

  5. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.

  6. A new finite element approach for prediction of aerothermal loads - Progress in inviscid flow computations

    NASA Technical Reports Server (NTRS)

    Bey, K. S.; Thornton, E. A.; Dechaumphai, P.; Ramakrishnan, R.

    1985-01-01

    Recent progress in the development of finite element methodology for the prediction of aerothermal loads is described. Two dimensional, inviscid computations are presented, but emphasis is placed on development of an approach extendable to three dimensional viscous flows. Research progress is described for: (1) utilization of a commercially available program to construct flow solution domains and display computational results, (2) development of an explicit Taylor-Galerkin solution algorithm, (3) closed form evaluation of finite element matrices, (4) vector computer programming strategies, and (5) validation of solutions. Two test problems of interest to NASA Langley aerothermal research are studied. Comparisons of finite element solutions for Mach 6 flow with other solution methods and experimental data validate fundamental capabilities of the approach for analyzing high speed inviscid compressible flows.

  8. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.

  9. On finite element implementation and computational techniques for constitutive modeling of high temperature composites

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.

    1989-01-01

    The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.

  10. Determination of an Initial Mesh Density for Finite Element Computations via Data Mining

    SciTech Connect

    Kanapady, R; Bathina, S K; Tamma, K K; Kamath, C; Kumar, V

    2001-07-23

    Numerical analysis software packages which employ a coarse first mesh or an inadequate initial mesh need to undergo cumbersome and time-consuming mesh refinement studies to obtain solutions with acceptable accuracy. Hence, it is critical for numerical methods such as finite element analysis to be able to determine a good initial mesh density for the subsequent finite element computations or as an input to a subsequent adaptive mesh generator. This paper explores the use of data mining techniques for obtaining an initial approximate finite element mesh density that avoids significant trial and error at the start of the finite element computations. As an illustration and proof of concept, a square plate which is simply supported at its edges and subjected to a concentrated load is employed as the test case. Although simplistic, the present study provides insight into addressing the above considerations.

  11. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  12. MARIAH: A finite-element computer program for incompressible porous flow problems. Theoretical background

    NASA Astrophysics Data System (ADS)

    Gartling, D. K.; Hickox, C. E.

    1982-10-01

    The theoretical background for the finite element computer program MARIAH is presented. The MARIAH code is designed for the analysis of incompressible fluid flow and heat transfer in saturated porous media. A description of the fluid/thermal boundary value problem treated by the program is presented and the finite element method and associated numerical methods used in MARIAH are discussed. Instructions for use of the program are documented in the Sandia National Laboratories report, SAND79-1623.

  13. Computational Modeling Approaches for Studying Transverse Combustion Instability in a Multi-element Injector

    DTIC Science & Technology

    2015-01-01

    Recoverable abstract fragments: an artificial forcing term is used, and the forcing amplitude can be adjusted so that the effect of the transverse instability on the center study element can be examined parametrically; the second approach models the entire … (Harvazinski, M. E.; Shipley, K. J.; Talley, D. G.; Sankaran, V.)

  14. Computational Modeling Approaches for Studying Transverse Combustion Instability in a Multi-Element Injector (Briefing Charts)

    DTIC Science & Technology

    2015-05-01

    Recoverable abstract fragments (briefing charts): an artificial forcing term is used, and the forcing amplitude can be adjusted so that the effect of the transverse instability on the center study element can be … (Matt Harvazinski, Kevin Shipley, Doug Talley, Venke Sankaran)

  15. THERM3D -- A boundary element computer program for transient heat conduction problems

    SciTech Connect

    Ingber, M.S.

    1994-02-01

    The computer code THERM3D implements the direct boundary element method (BEM) to solve transient heat conduction problems in arbitrary three-dimensional domains. This particular implementation of the BEM avoids performing time-consuming domain integrations by approximating a "generalized forcing function" in the interior of the domain with the use of radial basis functions. An approximate particular solution is then constructed, and the original problem is transformed into a sequence of Laplace problems. The code is capable of handling a large variety of boundary conditions including isothermal, specified flux, convection, radiation, and combined convection and radiation conditions. The computer code is benchmarked by comparisons with analytic and finite element results.

  16. A unified quadrature-based superconvergent finite element formulation for eigenvalue computation of wave equations

    NASA Astrophysics Data System (ADS)

    Wang, Dongdong; Li, Xiwei; Pan, Feixu

    2016-11-01

    A simple and unified finite element formulation is presented for superconvergent eigenvalue computation of wave equations ranging from 1D to 3D. In this framework, a general method based upon the so called α mass matrix formulation is first proposed to effectively construct 1D higher order mass matrices for arbitrary order elements. The finite elements discussed herein refer to the Lagrangian type of Lobatto elements that take the Lobatto points as nodes. Subsequently a set of quadrature rules that exactly integrate the 1D higher order mass matrices are rationally derived, which are termed as the superconvergent quadrature rules. More importantly, in 2D and 3D cases, it is found that the employment of these quadrature rules via tensor product simultaneously for the mass and stiffness matrix integrations of Lobatto elements produces a unified superconvergent formulation for the eigenvalue or frequency computation without wave propagation direction dependence, which usually is a critical issue for the multidimensional higher order mass matrix formulation. Consequently the proposed approach is capable of computing arbitrary frequencies in a superconvergent fashion. Meanwhile, numerical implementation of the proposed method for multidimensional problems is trivial. The effectiveness of the proposed methodology is systematically demonstrated by a series of numerical examples. Numerical results revealed that a superconvergence with 2(p+1)th order of frequency accuracy is achieved by the present unified formulation for the pth order Lobatto element.
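
    The classical one-dimensional special case of the blended ("α") mass-matrix idea, for linear rather than Lobatto elements and therefore not the paper's formulation, can be sketched as follows: frequencies of a fixed-fixed unit string are computed with consistent, lumped, and half-and-half blended element mass matrices, and the blend markedly reduces the frequency error.

      import numpy as np
      from scipy.linalg import eigh

      n_el = 40
      h = 1.0 / n_el
      nn = n_el + 1

      ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])        # element stiffness
      me_cons = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])     # consistent element mass
      me_lump = (h / 2.0) * np.eye(2)                              # lumped element mass

      def frequencies(alpha):
          me = alpha * me_cons + (1.0 - alpha) * me_lump           # blended mass matrix
          K = np.zeros((nn, nn)); M = np.zeros((nn, nn))
          for e in range(n_el):
              sl = slice(e, e + 2)
              K[sl, sl] += ke
              M[sl, sl] += me
          K, M = K[1:-1, 1:-1], M[1:-1, 1:-1]                      # fixed-fixed ends
          w2, _ = eigh(K, M)
          return np.sqrt(w2[:3])                                   # lowest three frequencies

      exact = np.pi * np.arange(1, 4)                              # unit string: w_k = k*pi
      for alpha in (1.0, 0.0, 0.5):                                # consistent, lumped, blended
          print(alpha, np.abs(frequencies(alpha) - exact) / exact)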

  18. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    NASA Technical Reports Server (NTRS)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is obtained with several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  19. Newly synthesized dihydroquinazoline derivative from the aspect of combined spectroscopic and computational study

    NASA Astrophysics Data System (ADS)

    El-Azab, Adel S.; Mary, Y. Sheena; Mary, Y. Shyma; Panicker, C. Yohannan; Abdel-Aziz, Alaa A.-M.; El-Sherbeny, Magda A.; Armaković, Stevan; Armaković, Sanja J.; Van Alsenoy, Christian

    2017-04-01

    In this work, spectroscopic characterization of 2-(2-(4-oxo-3-phenethyl-3,4-dihydroquinazolin-2-ylthio)ethyl)isoindoline-1,3-dione was obtained both experimentally and theoretically. Complete assignments of the fundamental vibrations were performed on the basis of the potential energy distribution of the vibrational modes, and good agreement between the experimental and scaled wavenumbers was achieved. Frontier molecular orbitals were used as indicators of stability and reactivity. Intramolecular interactions were investigated by NBO analysis. The dipole moment, linear polarizability, and first- and second-order hyperpolarizability values were also computed. In order to determine molecular sites prone to electrophilic attack, DFT calculations of the average local ionization energy (ALIE) and Fukui functions were performed as well. Intramolecular non-covalent interactions were determined and analyzed through the analysis of charge density. The stability of the title molecule was also investigated from the aspect of autoxidation, by calculation of bond dissociation energies (BDE), and of hydrolysis, by calculation of radial distribution functions after molecular dynamics (MD) simulations. In order to assess the biological potential of the title compound, a molecular docking study towards the breast cancer type 2 complex was performed.

  20. CAVASS: a computer assisted visualization and analysis software system - visualization aspects

    NASA Astrophysics Data System (ADS)

    Grevera, George; Udupa, Jayaram; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Iwanaga, Tad; Mishra, Shipra

    2007-03-01

    The Medical Image Processing Group (MIPG) at the University of Pennsylvania has been developing and distributing medical image analysis and visualization software systems for a long period of time. Our most recent system, 3DVIEWNIX, was first released in 1993. Since that time, a number of significant advancements have taken place with regard to computer platforms and operating systems, networking capability, the rise of parallel processing standards, and the development of open-source toolkits. CAVASS, developed by our group, is the next generation of 3DVIEWNIX. CAVASS will be freely available, open source, and is integrated with toolkits such as ITK and VTK. CAVASS runs on Windows, Unix, and Linux but shares a single code base. Rather than requiring expensive multiprocessor systems, it seamlessly provides for parallel processing via inexpensive COWs (Clusters of Workstations) for the more time-consuming algorithms. Most importantly, CAVASS is directed at the visualization, processing, and analysis of medical imagery, so support for 3D and higher-dimensional medical image data and the efficient implementation of algorithms is given paramount importance. This paper focuses on aspects of visualization. All of the most popular modes of visualization, including various 2D slice modes, reslicing, MIP, surface rendering, volume rendering, and animation, are incorporated into CAVASS.

  1. Learning the Lexical Aspects of a Second Language at Different Proficiencies: A Neural Computational Study

    ERIC Educational Resources Information Center

    Cuppini, Cristiano; Magosso, Elisa; Ursino, Mauro

    2013-01-01

    We present an original model designed to study how a second language (L2) is acquired in bilinguals at different proficiencies starting from an existing L1. The model assumes that the conceptual and lexical aspects of languages are stored separately: conceptual aspects in distinct topologically organized Feature Areas, and lexical aspects in a…

  2. A generalized hybrid transfinite element computational approach for nonlinear/linear unified thermal/structural analysis

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1987-01-01

    The present paper describes the development of a new hybrid computational approach applicable to nonlinear/linear thermal structural analysis. The proposed transfinite element approach is a hybrid scheme in that it combines the modeling versatility of contemporary finite elements with transform methods and the classical Bubnov-Galerkin schemes. Applicability of the proposed formulations for nonlinear analysis is also developed. Several test cases are presented, including nonlinear/linear unified thermal-stress and thermal-stress wave propagation problems. Comparative results validate the fundamental capabilities of the proposed hybrid transfinite element methodology.

  3. A Computational and Experimental Study of Nonlinear Aspects of Induced Drag

    NASA Technical Reports Server (NTRS)

    Smith, Stephen C.

    1996-01-01

    Despite the 80-year history of classical wing theory, considerable research has recently been directed toward planform and wake effects on induced drag. Nonlinear interactions between the trailing wake and the wing offer the possibility of reducing drag. The nonlinear effect of compressibility on induced drag characteristics may also influence wing design. This thesis deals with the prediction of these nonlinear aspects of induced drag and ways to exploit them. A potential benefit of only a few percent of the drag represents a large fuel savings for the world's commercial transport fleet. Computational methods must be applied carefully to obtain accurate induced drag predictions. Trefftz-plane drag integration is far more reliable than surface pressure integration, but is very sensitive to the accuracy of the force-free wake model. The practical use of Trefftz-plane drag integration was extended to transonic flow with the Tranair full-potential code. The induced drag characteristics of a typical transport wing were studied with Tranair, a full-potential method, and A502, a high-order linear panel method, to investigate changes in lift distribution and span efficiency due to compressibility. Modeling the force-free wake is a nonlinear problem, even when the flow governing equation is linear. A novel method was developed for computing the force-free wake shape. This hybrid wake-relaxation scheme couples the well-behaved nature of the discrete vortex wake with viscous-core modeling and the high-accuracy velocity prediction of the high-order panel method. The hybrid scheme produced converged wake shapes that allowed accurate Trefftz-plane integration. An unusual split-tip wing concept was studied for exploiting nonlinear wake interaction to reduce induced drag. This design exhibits significant nonlinear interactions between the wing and wake that produced a 12% reduction in induced drag compared to an equivalent elliptical wing at a lift coefficient of 0.7. The

  4. Computational analysis of enhanced magnetic bioseparation in microfluidic systems with flow-invasive magnetic elements.

    PubMed

    Khashan, S A; Alazzam, A; Furlani, E P

    2014-06-16

    A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer.
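
    A hedged, one-way-coupled sketch of the particle-transport ingredients named above: a micron-scale bead is advected by the channel flow while a magnetophoretic force pulls it toward the magnetized element; at this scale inertia is negligible, so the bead velocity is the fluid velocity plus the magnetic force divided by the Stokes drag coefficient. All parameter values, the parabolic velocity profile, and the constant-force field model are illustrative assumptions; the paper solves the fully coupled particle-fluid problem.

    ```python
    import numpy as np

    mu = 1e-3                         # fluid viscosity [Pa s]
    a = 1.0e-6                        # bead radius [m]
    drag = 6 * np.pi * mu * a         # Stokes drag coefficient
    h = 100e-6                        # assumed channel height [m]

    def fluid_velocity(x):
        # parabolic channel profile with an assumed 1 mm/s peak velocity
        return np.array([4 * 1e-3 * (x[1] / h) * (1 - x[1] / h), 0.0])

    def magnetic_force(x):
        # placeholder for mu0*V*chi*grad(|H|^2)/2 evaluated from the actual field solution
        return np.array([0.0, -5e-12])

    x = np.array([0.0, 80e-6])        # bead released near the top wall
    dt = 1e-4
    for step in range(20000):
        v = fluid_velocity(x) + magnetic_force(x) / drag   # overdamped force balance
        x = x + dt * v
        if x[1] <= 0.0:                                    # bead reaches the element surface: captured
            break
    print(f"captured after {step * dt:.3f} s at x = {x[0] * 1e6:.1f} um downstream")
    ```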

  5. [The current aspects of the computed tomographic and clinical diagnosis of degenerative changes in the spine of flight personnel].

    PubMed

    Vasil'ev, A Iu; Martynenko, M V; Martynenko, A V; Aleksakhina, T Iu

    1995-01-01

    The paper deals with the modern aspects of the computed tomographic and clinical diagnosis of degenerative changes in the vertebral column of pilots. A classification of degenerative changes was elaborated with consideration of the present-day requirements of clinical diagnostics and medical examination. The lesions of intervertebral disks most frequently seen in pilots, in the form of protrusions and prolapses, are indicated, and their computed tomographic and clinical characteristics are compared.

  6. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision.

    PubMed

    Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G

    2017-03-09

    Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors using this computer vision approach is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors as well as suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors most contribute to HAL and readily identify those work elements in the task that contribute more to increased risk of injury. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.

  7. Interactive computer graphic surface modeling of three-dimensional solid domains for boundary element analysis

    NASA Technical Reports Server (NTRS)

    Perucchio, R.; Ingraffea, A. R.

    1984-01-01

    The establishment of the boundary element method (BEM) as a valid tool for solving problems in structural mechanics and in other fields of applied physics is discussed. The development of an integrated interactive computer graphic system for the application of the BEM to three dimensional problems in elastostatics is described. The integration of interactive computer graphic techniques and the BEM takes place at the preprocessing and postprocessing stages of the analysis process, when, respectively, the data base is generated and the results are interpreted. The interactive computer graphic modeling techniques used for generating and discretizing the boundary surfaces of a solid domain are outlined.

  8. Hypermatrix scheme for finite element systems on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Voigt, S. J.

    1975-01-01

    A study is made of the adaptation of the hypermatrix (block matrix) scheme for solving large systems of finite element equations to the CDC STAR-100 computer. Discussion is focused on the organization of the hypermatrix computation using Cholesky decomposition and the mode of storage of the different submatrices to take advantage of the STAR pipeline (streaming) capability. Consideration is also given to the associated data handling problems and the means of balancing the I/O and CPU times in the solution process. Numerical examples are presented showing the anticipated gain in CPU speed over the CDC 6600 to be obtained by using the proposed algorithms on the STAR computer.

  9. [Computer simulation of the isolated lesion of tibiofibular an syndesmosis using the finite element method].

    PubMed

    Kozień, Marek S; Lorkowski, Jacek; Szczurek, Sławomir; Hładki, Waldemar; Trybus, Marek

    2008-01-01

    The aim of this study was to construct a computer simulation of an isolated lesion of the tibiofibular syndesmosis over a typical clinical range of values. The analysis was made using the finite element method with a simplified plane model of the bone, assuming the material of the bone and ankle joint to be isotropic and homogeneous. The distraction processes were modelled by external generalized forces. The ANSYS computer program was used. The evaluation yielded a computed image of the anatomical changes in relation to the applied forces.

  10. COYOTE: a finite-element computer program for nonlinear heat-conduction problems

    SciTech Connect

    Gartling, D.K.

    1982-10-01

    COYOTE is a finite element computer program designed for the solution of two-dimensional, nonlinear heat conduction problems. The theoretical and mathematical basis used to develop the code is described. Program capabilities and complete user instructions are presented. Several example problems are described in detail to demonstrate the use of the program.

  11. Finite element simulation of the mechanical impact of computer work on the carpal tunnel syndrome.

    PubMed

    Mouzakis, Dionysios E; Rachiotis, George; Zaoutsos, Stefanos; Eleftheriou, Andreas; Malizos, Konstantinos N

    2014-09-22

    Carpal tunnel syndrome (CTS) is a clinical disorder resulting from the compression of the median nerve. The available evidence regarding the association between computer use and CTS is controversial. There is some evidence that computer mouse or keyboard work, or both are associated with the development of CTS. Despite the availability of pressure measurements in the carpal tunnel during computer work (exposure to keyboard or mouse) there are no available data to support a direct effect of the increased intracarpal canal pressure on the median nerve. This study presents an attempt to simulate the direct effects of computer work on the whole carpal area section using finite element analysis. A finite element mesh was produced from computerized tomography scans of the carpal area, involving all tissues present in the carpal tunnel. Two loading scenarios were applied on these models based on biomechanical data measured during computer work. It was found that mouse work can produce large deformation fields on the median nerve region. Also, the high stressing effect of the carpal ligament was verified. Keyboard work produced considerable and heterogeneous elongations along the longitudinal axis of the median nerve. Our study provides evidence that increased intracarpal canal pressures caused by awkward wrist postures imposed during computer work were associated directly with deformation of the median nerve. Despite the limitations of the present study the findings could be considered as a contribution to the understanding of the development of CTS due to exposure to computer work.

  12. Computational fluid flow in two dimensions using simple T4/C3 element

    NASA Astrophysics Data System (ADS)

    Jan, Y. J.; Huang, S. J.; Lee, T. Y.

    2000-10-01

    The application of the four-node velocity, three-node pressure (T4/C3) element discretization technique for simulating two-dimensional steady and transitional flows is presented. The newly developed code has been validated by application to three benchmark test cases: driven cavity flow, flow over a backward-facing step, and confined surface rib flow. In addition, a transitional flow with vortex shedding has been studied. The numerical results have shown excellent agreement with experimental results, as well as with those of other simulations. It should be pointed out that the advantages of the T4/C3 finite element over other higher-order elements lie in its computational simplicity, efficiency, and lower computer memory requirements.

  13. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    NASA Astrophysics Data System (ADS)

    Aristovich, K. Y.; Khan, S. H.

    2010-07-01

    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data, and the material properties from Diffusion Tensor MRI (DTMRI) data. The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used in a wide range of methods of analysis, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte-Carlo simulations, etc. The generic model-building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  14. STARS: An integrated general-purpose finite element structural, aeroelastic, and aeroservoelastic analysis computer program

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.

    1991-01-01

    The details of an integrated general-purpose finite element structural analysis computer program, which is also capable of solving complex multidisciplinary problems, are presented. The SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis, yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines pertaining to a number of terminals are also available.

  15. Experimental and Computational Investigation of Lift-Enhancing Tabs on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1996-01-01

    An experimental and computational investigation of the effect of lift-enhancing tabs on a two-element airfoil has been conducted. The objective of the study was to develop an understanding of the flow physics associated with lift-enhancing tabs on a multi-element airfoil. An NACA 63(2)-215 ModB airfoil with a 30% chord Fowler flap was tested in the NASA Ames 7- by 10-Foot Wind Tunnel. Lift-enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. A combination of tabs located at the main element and flap trailing edges increased the airfoil lift coefficient by 11% relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of 0 deg, and the maximum lift coefficient was increased by 3%. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predicted all of the trends observed in the experimental data quite well. In addition, a simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift-enhancing tabs work. The tabs were modeled by a point vortex at the airfoil or flap trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift-enhancing tabs on a multi-element airfoil. Results of the modeling effort indicate that the dominant effects of the tabs on the pressure distribution of each element of the airfoil can be captured with a potential flow model for cases with no flow separation.
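
    A hedged sketch of the potential-flow building block mentioned in the abstract: the velocity induced at a field point by a point vortex placed at the trailing edge, which in the tab model is superimposed on the baseline flow. The circulation value, coordinates, and function name below are illustrative assumptions, not quantities taken from the paper.

    ```python
    import numpy as np

    # 2-D velocity induced at (x, y) by a point vortex of circulation gamma
    # (positive counterclockwise) located at the trailing edge (x0, y0).
    def vortex_velocity(x, y, x0, y0, gamma):
        dx, dy = x - x0, y - y0
        r2 = dx**2 + dy**2
        u = -gamma * dy / (2.0 * np.pi * r2)
        v = gamma * dx / (2.0 * np.pi * r2)
        return u, v

    # induced velocity a quarter-chord upstream of a tab vortex at the trailing edge
    u, v = vortex_velocity(0.75, 0.0, 1.0, 0.0, gamma=0.05)
    print(u, v)
    ```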

  16. Dental application of novel finite element analysis software for three-dimensional finite element modeling of a dentulous mandible from its computed tomography images.

    PubMed

    Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich

    2013-12-01

    This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible, resulting from occlusal force applied to the teeth during biting. Commercially available patient-specific general computed tomography-based finite-element analysis software was solely applied to the finite-element analysis for the extraction of computed tomography data. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from the computed tomography data without the need for any other software.
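
    The density-to-modulus step mentioned above is commonly implemented as a power law applied element by element. The sketch below is a hedged illustration: the linear HU-to-density calibration and the Carter-Hayes-type coefficients are placeholder assumptions, not the values used in the paper.

    ```python
    import numpy as np

    # Map a mean Hounsfield value per element to an apparent density and then to
    # an element-wise Young's modulus via E = a * rho^b. All constants below are
    # placeholders of a commonly cited form, not the authors' calibration.
    def modulus_from_hu(hu, rho_slope=0.001, rho_offset=1.0, a=3790.0, b=3.0):
        """hu: mean Hounsfield value of an element; returns E in MPa."""
        rho = rho_slope * hu + rho_offset      # apparent density [g/cm^3], assumed linear calibration
        return a * rho**b                      # power-law density-modulus relationship

    element_hu = np.array([200.0, 600.0, 1000.0])  # cancellous to cortical bone (illustrative)
    print(modulus_from_hu(element_hu))
    ```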

  17. Applications of Parallel Computation in Micro-Mechanics and Finite Element Method

    NASA Technical Reports Server (NTRS)

    Tan, Hui-Qian

    1996-01-01

    This project discusses the application of parallel computation to material analyses. Briefly speaking, we analyze a given material by element-level computations. We call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure. The detailed structure will be given later in this paper. It is obvious that the problem is "well-structured", so a SIMD machine would be a good choice. In this paper we try to look into the potential of SIMD machines in dealing with finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar will be discussed. A brief review of the parallel programming language MPL is also given in that section. In section 3, some general parallel algorithms which might be useful to the project will be proposed and, combined with the algorithms, some features of MPL will be discussed in more detail. In section 4, the computational structure of the cell/subcell model will be given and the idea of designing the parallel algorithm for the model will be demonstrated. Finally, in section 5, a summary will be given.

  18. MAPVAR - A Computer Program to Transfer Solution Data Between Finite Element Meshes

    SciTech Connect

    Wellman, G.W.

    1999-03-01

    MAPVAR, as was the case with its precursor programs, MERLIN and MERLIN II, is designed to transfer solution results from one finite element mesh to another. MAPVAR draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options.
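
    The basic operation such a transfer code performs can be sketched in one dimension with linear interpolation (a toy illustration; MAPVAR itself works on EXODUS II meshes in two and three dimensions with element searches and the algorithms documented in the report):

    ```python
    import numpy as np

    # Evaluate a nodal solution field from a "donor" mesh at the node locations
    # of a differently refined "recipient" mesh.
    donor_nodes = np.linspace(0.0, 1.0, 11)
    donor_values = np.sin(np.pi * donor_nodes)       # solution on the donor mesh

    recipient_nodes = np.linspace(0.0, 1.0, 17)
    recipient_values = np.interp(recipient_nodes, donor_nodes, donor_values)
    print(recipient_values)
    ```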

  19. Special purpose hybrid transfinite elements and unified computational methodology for accurately predicting thermoelastic stress waves

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Railkar, Sudhir B.

    1988-01-01

    This paper represents an attempt to apply extensions of a hybrid transfinite element computational approach for accurately predicting thermoelastic stress waves. The applicability of the present formulations for capturing the thermal stress waves induced by boundary heating for the well known Danilovskaya problems is demonstrated. A unique feature of the proposed formulations for applicability to the Danilovskaya problem of thermal stress waves in elastic solids lies in the hybrid nature of the unified formulations and the development of special purpose transfinite elements in conjunction with the classical Galerkin techniques and transformation concepts. Numerical test cases validate the applicability and superior capability to capture the thermal stress waves induced due to boundary heating.

  20. Level set discrete element method for three-dimensional computations with triaxial case study

    NASA Astrophysics Data System (ADS)

    Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.

    2016-06-01

    In this paper, we outline the level set discrete element method (LS-DEM) which is a discrete element method variant able to simulate systems of particles with arbitrary shape using level set functions as a geometric basis. This unique formulation allows seamless interfacing with level set-based characterization methods as well as computational ease in contact calculations. We then apply LS-DEM to simulate two virtual triaxial specimens generated from XRCT images of experiments and demonstrate LS-DEM's ability to quantitatively capture and predict stress-strain and volume-strain behavior observed in the experiments.
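
    A minimal sketch of the contact query at the heart of a level-set DEM may clarify the formulation (an illustration under assumed geometry, not the authors' code): one particle is stored as a signed-distance grid, the other as discrete surface nodes, and penetration is flagged wherever the interpolated level set is negative.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Particle A: level set phi_A (signed distance, negative inside) on a regular grid.
    grid = np.linspace(-2.0, 2.0, 81)
    X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
    phi_A = np.sqrt(X**2 + Y**2 + Z**2) - 1.0        # unit sphere centred at the origin

    interp = RegularGridInterpolator((grid, grid, grid), phi_A)

    # Particle B: discrete surface nodes (here a circle of radius 0.5 centred at (1.2, 0, 0)).
    theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    nodes_B = np.column_stack([1.2 + 0.5 * np.cos(theta),
                               0.5 * np.sin(theta),
                               np.zeros_like(theta)])

    phi_at_nodes = interp(nodes_B)                   # interpolated signed distance at B's nodes
    penetrating = phi_at_nodes < 0.0                 # negative values signal contact/penetration
    print("contact nodes:", penetrating.sum(),
          "max penetration depth:", -phi_at_nodes.min() if penetrating.any() else 0.0)
    ```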

  1. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.

  2. Report of a Workshop on the Pedagogical Aspects of Computational Thinking

    ERIC Educational Resources Information Center

    National Academies Press, 2011

    2011-01-01

    In 2008, the Computer and Information Science and Engineering Directorate of the National Science Foundation asked the National Research Council (NRC) to conduct two workshops to explore the nature of computational thinking and its cognitive and educational implications. The first workshop focused on the scope and nature of computational thinking…

  3. Program design by a multidisciplinary team. [for structural finite element analysis on STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Voigt, S.

    1975-01-01

    The use of software engineering aids in the design of a structural finite-element analysis computer program for the STAR-100 computer is described. Nested functional diagrams to aid in communication among design team members were used, and a standardized specification format to describe modules designed by various members was adopted. This is a report of current work in which use of the functional diagrams provided continuity and helped resolve some of the problems arising in this long-running part-time project.

  4. Finite element analysis and computer graphics visualization of flow around pitching and plunging airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.

    1973-01-01

    A general computational method for analyzing unsteady flow around pitching and plunging airfoils was developed. The finite element method was applied in developing an efficient numerical procedure for the solution of equations describing the flow around airfoils. The numerical results were employed in conjunction with computer graphics techniques to produce visualization of the flow. The investigation involved mathematical model studies of flow in two phases: (1) analysis of a potential flow formulation and (2) analysis of an incompressible, unsteady, viscous flow from Navier-Stokes equations.

  5. Computing element evolution towards Exascale and its impact on legacy simulation codes

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impacts of those evolutions on legacy codes or programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolution, either by evolving themselves or through the creation of new ones. From these elements, we elaborate on why vectorization, multithreading, data-locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes.

  6. Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods

    NASA Astrophysics Data System (ADS)

    Bause, M.; Knabner, P.

    2004-06-01

    We present adaptive mixed hybrid finite element discretizations of the Richards equation, a nonlinear parabolic partial differential equation modeling the flow of water into a variably saturated porous medium. The approach simultaneously constructs approximations of the flux and the pressure head in Raviart-Thomas spaces. The resulting nonlinear systems of equations are solved by a Newton method. For the linear problems of the Newton iteration a multigrid algorithm is used. We consider two different kinds of error indicators for space adaptive grid refinement: superconvergence and residual based indicators. They can be calculated easily by means of the available finite element approximations. This seems attractive for computations since no additional (sub-)problems have to be solved. Computational experiments conducted for realistic water table recharge problems illustrate the effectiveness and robustness of the approach.

  7. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computer.
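
    The symmetric/antisymmetric splitting that underlies the partitioning strategy can be sketched in a few lines (a toy illustration; the reflection operator R is assumed given as a signed node permutation):

    ```python
    import numpy as np

    # Any response u of a structure with a plane of symmetry splits uniquely into
    # a symmetric part (R u_sym = +u_sym) and an antisymmetric part (R u_anti = -u_anti),
    # each of which can then be generated on roughly half of the original dofs.
    def split_response(u, R):
        u_sym = 0.5 * (u + R @ u)
        u_anti = 0.5 * (u - R @ u)
        return u_sym, u_anti

    # toy example: 4-dof vector, R swaps mirrored dofs
    R = np.array([[0, 0, 0, 1],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=float)
    u = np.array([1.0, 2.0, 3.0, 4.0])
    u_s, u_a = split_response(u, R)
    print(u_s, u_a, np.allclose(u_s + u_a, u))
    ```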

  8. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  9. A Computational Framework to Model Degradation of Biocorrodible Metal Stents Using an Implicit Finite Element Solver.

    PubMed

    Debusschere, Nic; Segers, Patrick; Dubruel, Peter; Verhegghe, Benedict; De Beule, Matthieu

    2016-02-01

    Bioresorbable stents represent an emerging technological development within the field of cardiovascular angioplasty. Their temporary presence avoids long-term side effects of non-degradable stents such as in-stent restenosis, late stent thrombosis and fatigue induced strut fracture. Several numerical modelling strategies have been proposed to evaluate the transitional mechanical characteristics of biodegradable stents using a continuum damage framework. However, these methods rely on an explicit finite-element integration scheme which, in combination with the quasi-static nature of many simulations involving stents and the small element size needed to model corrosion mechanisms, results in a high computational cost. To reduce the simulation times and to expand the general applicability of these degradation models, this paper investigates an implicit finite element solution method to model degradation of biodegradable stents.

  10. Breast cancer detection using neutron stimulated emission computed tomography: Prominent elements and dose requirements

    SciTech Connect

    Bender, Janelle E.; Kapadia, Anuj J.; Sharma, Amy C.; Tourassi, Georgia D.; Harrawood, Brian P.; Floyd, Carey E. Jr.

    2007-10-15

    Neutron stimulated emission computed tomography (NSECT) is being developed to noninvasively determine concentrations of trace elements in biological tissue. Studies have shown prominent differences in the trace element concentration of normal and malignant breast tissue. NSECT has the potential to detect these differences and diagnose malignancy with high accuracy with dose comparable to that of a single mammogram. In this study, NSECT imaging was simulated for normal and malignant human breast tissue samples to determine the significance of individual elements in determining malignancy. The normal and malignant models were designed with different elemental compositions, and each was scanned spectroscopically using a simulated 2.5 MeV neutron beam. The number of incident neutrons was varied from 0.5 million to 10 million neutrons. The resulting gamma spectra were evaluated through receiver operating characteristic (ROC) analysis to determine which trace elements were prominent enough to be considered markers for breast cancer detection. Four elemental isotopes (Cs-133, Br-81, Br-79, and Rb-87) at five energy levels were shown to be promising features for breast cancer detection, with an area under the ROC curve (A_Z) above 0.85. One of these elements, Rb-87 at 1338 keV, achieved perfect classification at 10 million incident neutrons and could be detected with as few as 3 million incident neutrons. Patient dose was calculated for each gamma spectrum obtained and was found to range between 0.05 and 0.112 mSv depending on the number of neutrons. This simulation demonstrates that NSECT has the potential to noninvasively detect breast cancer through five prominent trace element energy levels, at dose levels comparable to other breast cancer screening techniques.
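
    A hedged sketch of the ROC evaluation step: score each simulated spectrum by the counts in a single gamma line and compute the area under the ROC curve (A_Z). The Poisson count levels below are invented for illustration and are not the paper's simulated data.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    counts_normal = rng.poisson(lam=100, size=500)      # assumed mean counts, normal tissue
    counts_malignant = rng.poisson(lam=130, size=500)   # assumed elevated counts, malignant tissue

    scores = np.concatenate([counts_normal, counts_malignant])
    labels = np.concatenate([np.zeros(500), np.ones(500)])
    print("A_Z =", roc_auc_score(labels, scores))
    ```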

  11. [Study of the influence of cellular phones and personal computers on schoolchildren's health: hygienic aspects].

    PubMed

    Chernenkov, Iu V; Gumeniuk, O I

    2009-01-01

    The paper presents the results of studying the impact of using cellular phones and personal computers on the health status of 277 Saratov schoolchildren (mean age 13.2 +/- 2.3 years). About 80% of the adolescents have been ascertained to use cellular phones and computers mainly for game purposes. The active users of cellular phones and computers show a high aggressiveness, anxiety, hostility, and social stress, low stress resistance, and susceptibility to arterial hypotension. The negative influence of cellular phones and computers on the schoolchildren's health increases with the increased duration and frequency of their use.

  12. Patient specific finite element model of the face soft tissues for computer-assisted maxillofacial surgery.

    PubMed

    Chabanas, Matthieu; Luboz, Vincent; Payan, Yohan

    2003-06-01

    This paper addresses the prediction of face soft tissue deformations resulting from bone repositioning in maxillofacial surgery. A generic 3D Finite Element model of the face soft tissues was developed. Face muscles are defined in the mesh as embedded structures with different mechanical properties (transverse isotropy, stiffness depending on muscle contraction). Simulations of face deformations under muscle actions can thus be performed. In the context of maxillofacial surgery, this generic soft-tissue model is automatically conformed to the patient morphology by elastic registration, using skin and skull surfaces segmented from a CT scan. Some elements of the patient mesh could be geometrically distorted during the registration, which prevents Finite Element analysis. Irregular elements are thus detected and automatically regularized. This semi-automatic patient model generation is robust, fast and easy to use, and therefore seems compatible with clinical use. Six patient models were successfully built, and simulations of soft tissue deformations resulting from bone displacements were performed on two patient models. Both the adequacy of the models to the patient morphologies and the simulations of post-operative aspects were qualitatively validated by five surgeons. Their conclusions are that the models fit the morphologies of the patients and that the predicted soft tissue modifications are coherent with what they would expect.

  13. The computation of dispersion relations for axisymmetric waveguides using the Scaled Boundary Finite Element Method.

    PubMed

    Gravenkamp, Hauke; Birk, Carolin; Song, Chongmin

    2014-07-01

    This paper addresses the computation of dispersion curves and mode shapes of elastic guided waves in axisymmetric waveguides. The approach is based on a Scaled Boundary Finite Element formulation, that has previously been presented for plate structures and general three-dimensional waveguides with complex cross-section. The formulation leads to a Hamiltonian eigenvalue problem for the computation of wavenumbers and displacement amplitudes, that can be solved very efficiently. In the axisymmetric representation, only the radial direction in a cylindrical coordinate system has to be discretized, while the circumferential direction as well as the direction of propagation are described analytically. It is demonstrated, how the computational costs can drastically be reduced by employing spectral elements of extremely high order. Additionally, an alternative formulation is presented, that leads to real coefficient matrices. It is discussed, how these two approaches affect the computational efficiency, depending on the elasticity matrix. In the case of solid cylinders, the singularity of the governing equations that occurs in the center of the cross-section is avoided by changing the quadrature scheme. Numerical examples show the applicability of the approach to homogeneous as well as layered structures with isotropic or anisotropic material behavior.

  14. Computational micromechanical analysis of the representative volume element of bituminous composite materials

    NASA Astrophysics Data System (ADS)

    Ozer, Hasan; Ghauch, Ziad G.; Dhasmana, Heena; Al-Qadi, Imad L.

    2016-08-01

    Micromechanical computational modeling is used in this study to determine the smallest domain, or Representative Volume Element (RVE), that can be used to characterize the effective properties of composite materials such as Asphalt Concrete (AC). Computational Finite Element (FE) micromechanical modeling was coupled with digital image analysis of surface scans of AC specimens. Three mixtures with varying Nominal Maximum Aggregate Size (NMAS) of 4.75 mm, 12.5 mm, and 25 mm, were prepared for digital image analysis and computational micromechanical modeling. The effects of window size and phase modulus mismatch on the apparent viscoelastic response of the composite were numerically examined. A good agreement was observed in the RVE size predictions based on micromechanical computational modeling and image analysis. Micromechanical results indicated that a degradation in the matrix stiffness increases the corresponding RVE size. Statistical homogeneity was observed for window sizes equal to two to three times the NMAS. A model was presented for relating the degree of statistical homogeneity associated with each window size for materials with varying inclusion dimensions.
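
    The window-size study can be sketched generically (a hedged illustration on a synthetic binary image, tracking only the inclusion volume fraction rather than the apparent viscoelastic response computed in the paper): the apparent property is averaged over windows of increasing size, and the window-to-window scatter is monitored until it falls below an acceptable level.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    image = (rng.random((1024, 1024)) < 0.3).astype(float)   # synthetic 30% "aggregate" phase

    # average the apparent property over non-overlapping windows of increasing size
    for win in (32, 64, 128, 256):
        n = 1024 // win
        fractions = image[:n * win, :n * win].reshape(n, win, n, win).mean(axis=(1, 3))
        print(f"window {win:4d} px: mean={fractions.mean():.3f}, std={fractions.std():.4f}")
    ```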

  15. Elements of computational fluid dynamics on block structured grids using implicit solvers

    NASA Astrophysics Data System (ADS)

    Badcock, K. J.; Richards, B. E.; Woodgate, M. A.

    2000-08-01

    This paper reviews computational fluid dynamics (CFD) for aerodynamic applications. The key elements of a rigorous CFD analysis are discussed. Modelling issues are summarised and the state of modern discretisation schemes considered. Implicit solution schemes are discussed in some detail, as is multiblock grid generation. The cost and availability of computing power is described in the context of cluster computing and its importance for CFD. Several complex applications are then considered in light of these simulation components. Verification and validation is presented for each application and the important flow mechanisms are shown through the use of the simulation results. The applications considered are: cavity flow, spiked body supersonic flow, underexpanded jet shock wave hysteresis, slender body aerodynamics and wing flutter. As a whole the paper aims to show the current strengths and limitations of CFD and the conclusions suggest a way of enhancing the usefulness of flow simulation for industrial class problems.

  16. Fiber pushout test: A three-dimensional finite element computational simulation

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Chamis, Christos C.

    1990-01-01

    A fiber pushthrough process was computationally simulated using a three-dimensional finite element method. The interface material is replaced by an anisotropic material with greatly reduced shear modulus in order to simulate the fiber pushthrough process using a linear analysis. Such a procedure is easily implemented and is computationally very effective. It can be used to predict the fiber pushthrough load for a composite system at any temperature. The average interface shear strength obtained from the pushthrough load can easily be separated into its two components: one that comes from frictional stresses and the other that comes from chemical adhesion between the fiber and the matrix and from mechanical interlocking that develops due to shrinkage of the composite because of phase change during processing. Step-by-step procedures are described to perform the computational simulation, to establish bounds on the interfacial bond strength and to interpret interfacial bond quality.
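
    The post-processing step named above, recovering an average interfacial shear strength from the pushthrough load, amounts to dividing the peak load by the embedded interfacial area. A hedged sketch with placeholder numbers:

    ```python
    import math

    # Average interfacial shear strength tau = P / (pi * d * L); the values below
    # are illustrative placeholders, not results from the paper.
    def average_interface_shear_strength(push_load_N, fiber_diameter_m, embedded_length_m):
        return push_load_N / (math.pi * fiber_diameter_m * embedded_length_m)

    tau = average_interface_shear_strength(push_load_N=2.5,
                                           fiber_diameter_m=140e-6,
                                           embedded_length_m=0.5e-3)
    print(f"average interfacial shear strength = {tau/1e6:.1f} MPa")
    ```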

  17. Analytical calculation of the lower bound on timing resolution for PET scintillation detectors comprising high-aspect-ratio crystal elements

    NASA Astrophysics Data System (ADS)

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-07-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work, is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3 × 3 × 20 mm³ LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162 ± 1 ps FWHM, approaching the analytically calculated lower bound within 6.5%.
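
    For orientation, a compact textbook form of the bound reads as follows, under the simplifying assumption of N independent, identically distributed registered photon timestamps with density f (the paper's formalism additionally folds the optical-transit-time PDF of the long crystal into this density):

    $$ I(\theta) = N \int \frac{\left[f'(t)\right]^{2}}{f(t)}\,dt, \qquad \operatorname{var}(\hat{\theta}) \ge \frac{1}{I(\theta)}, \qquad \mathrm{CTR}_{\mathrm{FWHM}} \gtrsim 2.355\sqrt{\frac{2}{I(\theta)}}, $$

    where the last expression assumes two identical detectors in coincidence (twice the single-detector variance) and an approximately Gaussian estimator distribution.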

  18. Analytical Calculation of the Lower Bound on Timing Resolution for PET Scintillation Detectors Comprising High-Aspect-Ratio Crystal Elements

    PubMed Central

    Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.

    2015-01-01

    Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector’s timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work, is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm3 LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559

  19. Experimental and computational investigation of lift-enhancing tabs on a multi-element airfoil

    NASA Technical Reports Server (NTRS)

    Ashby, Dale

    1996-01-01

    An experimental and computational investigation of the effect of lift enhancing tabs on a two-element airfoil was conducted. The objective of the study was to develop an understanding of the flow physics associated with lift enhancing tabs on a multi-element airfoil. A NACA 63(2)-215 ModB airfoil with a 30 percent chord Fowler flap was tested in the NASA Ames 7 by 10 foot wind tunnel. Lift enhancing tabs of various heights were tested on both the main element and the flap for a variety of flap riggings. Computations of the flow over the two-element airfoil were performed using the two-dimensional incompressible Navier-Stokes code INS2D-UP. The computed results predict all of the trends in the experimental data quite well. When the flow over the flap upper surface is attached, tabs mounted at the main element trailing edge (cove tabs) produce very little change in lift. At high flap deflections, however, the flow over the flap is separated and cove tabs produce large increases in lift and corresponding reductions in drag by eliminating the separated flow. Cove tabs permit high flap deflection angles to be achieved and reduce the sensitivity of the airfoil lift to the size of the flap gap. Tabs attached to the flap trailing edge (flap tabs) are effective at increasing lift without significantly increasing drag. A combination of a cove tab and a flap tab increased the airfoil lift coefficient by 11 percent relative to the highest lift coefficient achieved by any baseline configuration at an angle of attack of zero degrees, and the maximum lift coefficient was increased by more than 3 percent. A simple analytic model based on potential flow was developed to provide a more detailed understanding of how lift enhancing tabs work. The tabs were modeled by a point vortex at the trailing edge. Sensitivity relationships were derived which provide a mathematical basis for explaining the effects of lift enhancing tabs on a multi-element airfoil. Results of the modeling

  20. Computational Analysis of Enhanced Magnetic Bioseparation in Microfluidic Systems with Flow-Invasive Magnetic Elements

    PubMed Central

    Khashan, S. A.; Alazzam, A.; Furlani, E. P.

    2014-01-01

    A microfluidic design is proposed for realizing greatly enhanced separation of magnetically-labeled bioparticles using integrated soft-magnetic elements. The elements are fixed and intersect the carrier fluid (flow-invasive) with their length transverse to the flow. They are magnetized using a bias field to produce a particle capture force. Multiple stair-step elements are used to provide efficient capture throughout the entire flow channel. This is in contrast to conventional systems wherein the elements are integrated into the walls of the channel, which restricts efficient capture to limited regions of the channel due to the short range nature of the magnetic force. This severely limits the channel size and hence throughput. Flow-invasive elements overcome this limitation and enable microfluidic bioseparation systems with superior scalability. This enhanced functionality is quantified for the first time using a computational model that accounts for the dominant mechanisms of particle transport including fully-coupled particle-fluid momentum transfer. PMID:24931437

  1. Computations of Disturbance Amplification Behind Isolated Roughness Elements and Comparison with Measurements

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Li, Fei; Bynum, Michael; Kegerise, Michael; King, Rudolph

    2015-01-01

    Computations are performed to study laminar-turbulent transition due to isolated roughness elements in boundary layers at Mach 3.5 and 5.95, with an emphasis on flow configurations for which experimental measurements from low disturbance wind tunnels are available. The Mach 3.5 case corresponds to a roughness element with right-triangle planform with hypotenuse that is inclined at 45 degrees with respect to the oncoming stream, presenting an obstacle with spanwise asymmetry. The Mach 5.95 case corresponds to a circular roughness element along the nozzle wall of the Purdue BAMQT wind tunnel facility. In both cases, the mean flow distortion due to the roughness element is characterized by long-lived streamwise streaks in the roughness wake, which can support instability modes that did not exist in the absence of the roughness element. The linear amplification characteristics of the wake flow are examined towards the eventual goal of developing linear growth correlations for the onset of transition.

  2. The Space-Time CESE Method Applied to Viscous Flow Computations with High-Aspect Ratio Triangular or Tetrahedral Meshes

    NASA Astrophysics Data System (ADS)

    Chang, Chau-Lyan; Venkatachari, Balaji

    2016-11-01

    Flow physics near the viscous wall is intrinsically anisotropic in nature, namely, the gradient along the wall normal direction is much larger than that along the other two orthogonal directions parallel to the surface. Accordingly, high aspect ratio meshes are employed near the viscous wall to capture the physics and maintain low grid count. While such arrangement works fine for structured-grid based methods with dimensional splitting that handles derivatives in each direction separately, similar treatments often lead to numerical instability for unstructured-mesh based methods when triangular or tetrahedral meshes are used. The non-splitting treatment of near-wall gradients for high-aspect ratio triangular or tetrahedral elements results in an ill-conditioned linear system of equations that is closely related to the numerical instability. Altering the side lengths of the near wall tetrahedrons in the gradient calculations would make the system less unstable but more dissipative. This research presents recent progress in applying numerical dissipation control in the space-time conservation element solution element (CESE) method to reduce or alleviate the above-mentioned instability while maintaining reasonable solution accuracy.
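
    The ill-conditioning mechanism referred to above can be illustrated outside the CESE framework with a generic least-squares gradient reconstruction on a stretched triangle (an assumption-laden toy, not the CESE system itself): the condition number of the 2x2 normal matrix grows roughly with the square of the aspect ratio.

    ```python
    import numpy as np

    def gradient_normal_matrix_condition(aspect_ratio):
        # right triangle with legs 1 (streamwise) and 1/aspect_ratio (wall-normal)
        verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0 / aspect_ratio]])
        centroid = verts.mean(axis=0)
        d = verts - centroid                  # centroid-to-vertex difference vectors
        A = d.T @ d                           # 2x2 least-squares normal matrix
        return np.linalg.cond(A)

    for ar in (1, 10, 100, 1000):
        print(ar, gradient_normal_matrix_condition(ar))
    ```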

  3. Computational Analysis of Some Aspects of a Synthetic Route to Ammonium Dinitramide

    DTIC Science & Technology

    1993-12-27

    Only report-documentation fragments are available for this record: "Computational Analysis of Some Aspects of a Synthetic Route to Ammonium Dinitramide," by Tore Brinck and Peter Politzer, Department of Chemistry, sponsored by Dr. Richard S. Miller under contract N00014-91-J-4057 (R&T Code 4131D02).

  4. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 2

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    Topics discussed include the Control/Structures Integration Program, a survey of available software for control of flexible structures, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software.

  5. MP Salsa: a finite element computer program for reacting flow problems. Part 1--theoretical development

    SciTech Connect

    Shadid, J.N.; Moffat, H.K.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Salinger, A.G.

    1996-05-01

    The theoretical background for the finite element computer program, MPSalsa, is presented in detail. MPSalsa is designed to solve laminar, low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow, heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method, and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  6. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  7. Repetitive element signature-based visualization, distance computation, and classification of 1766 microbial genomes.

    PubMed

    Lee, Kang-Hoon; Shin, Kyung-Seop; Lim, Debora; Kim, Woo-Chan; Chung, Byung Chang; Han, Gyu-Bum; Roh, Jeongkyu; Cho, Dong-Ho; Cho, Kiho

    2015-07-01

    The genomes of living organisms are populated with pleomorphic repetitive elements (REs) of varying densities. Our hypothesis that genomic RE landscapes are species/strain/individual-specific was implemented into the Genome Signature Imaging system to visualize and compute the RE-based signatures of any genome. Following the occurrence profiling of 5-nucleotide REs/words, the information from top-50 frequency words was transformed into a genome-specific signature and visualized as Genome Signature Images (GSIs), using a CMYK scheme. An algorithm for computing distances among GSIs was formulated using the GSIs' variables (word identity, frequency, and frequency order). The utility of the GSI-distance computation system was demonstrated with control genomes. GSI-based computation of genome-relatedness among 1766 microbes (117 archaea and 1649 bacteria) identified their clustering patterns; although the majority paralleled the established classification, some did not. The Genome Signature Imaging system, with its visualization and distance computation functions, enables genome-scale evolutionary studies involving numerous genomes with varying sizes.
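    As a rough illustration of the word-signature idea (an editorial sketch, not the authors' Genome Signature Imaging code), the following profiles 5-nucleotide word frequencies, keeps the top-50 words with their ranks, and computes a toy distance from word identity and frequency order; the penalty scheme and the synthetic example sequences are assumptions.

```python
from collections import Counter

def top_word_signature(sequence, k=5, top=50):
    """Count k-nucleotide words and keep the top-ranked ones as {word: (rank, count)}."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    return {word: (rank, count)
            for rank, (word, count) in enumerate(counts.most_common(top))}

def signature_distance(sig_a, sig_b):
    """Toy distance built from word identity and frequency-order (rank) disagreement."""
    words = set(sig_a) | set(sig_b)
    total = 0.0
    for w in words:
        if w in sig_a and w in sig_b:
            total += abs(sig_a[w][0] - sig_b[w][0])   # rank disagreement
        else:
            total += len(words)                       # penalty for an unshared word
    return total / len(words)

genome_a = "ACGT" * 2500        # synthetic stand-ins for real genome sequences
genome_b = "ACGTT" * 2000
print(signature_distance(top_word_signature(genome_a), top_word_signature(genome_b)))
```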

  8. Computing the Average Square: An Agent-Based Introduction to Aspects of Current Psychometric Practice

    ERIC Educational Resources Information Center

    Stroup, Walter M.; Hills, Thomas; Carmona, Guadalupe

    2011-01-01

    This paper summarizes an approach to helping future educators to engage with key issues related to the application of measurement-related statistics to learning and teaching, especially in the contexts of science, mathematics, technology and engineering (STEM) education. The approach we outline has two major elements. First, students are asked to…

  9. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The remaining challenge is to reduce and smooth the error while keeping the computational effort low enough to calculate time histories quickly for on-board applications.

  10. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale/three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Simple substitution or Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven-cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. The Taylor-Goertler-like vortices are observed for Re = 1,000.
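    The matrix-free solution strategy mentioned above can be sketched as follows (an editorial illustration, not the authors' LSFEM code): a Jacobi-preconditioned conjugate gradient loop that only needs a routine applying the symmetric positive-definite operator, exercised here on a small stand-in tridiagonal system.

```python
import numpy as np

def jacobi_pcg(apply_A, diag_A, b, tol=1e-8, max_iter=500):
    """Jacobi-preconditioned conjugate gradients without forming a global matrix.
    apply_A returns A @ x (e.g., assembled element by element); diag_A is the
    diagonal of A used as the preconditioner."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_A
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = r / diag_A
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem standing in for the least-squares FEM system
n = 200
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
x = jacobi_pcg(lambda v: A @ v, np.diag(A).copy(), b)
print("residual norm:", np.linalg.norm(A @ x - b))
```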

  11. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation

    PubMed Central

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering. PMID:27872840

  12. Finite element analysis of the hip and spine based on quantitative computed tomography.

    PubMed

    Carpenter, R Dana

    2013-06-01

    Quantitative computed tomography (QCT) provides three-dimensional information about bone geometry and the spatial distribution of bone mineral. Images obtained with QCT can be used to create finite element models, which offer the ability to analyze bone strength and the distribution of mechanical stress and physical deformation. This approach can be used to investigate different mechanical loading scenarios (stance and fall configurations at the hip, for example) and to estimate whole bone strength and the relative mechanical contributions of the cortical and trabecular bone compartments. Finite element analyses based on QCT images of the hip and spine have been used to provide important insights into the biomechanical effects of factors such as age, sex, bone loss, pharmaceuticals, and mechanical loading at sites of high clinical importance. Thus, this analysis approach has become an important tool in the study of the etiology and treatment of osteoporosis at the hip and spine.

  13. Analysis of Uncertainty and Variability in Finite Element Computational Models for Biomedical Engineering: Characterization and Propagation.

    PubMed

    Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González

    2016-01-01

    Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.

  14. Computer modeling of single-cell and multicell thermionic fuel elements

    SciTech Connect

    Dickinson, J.W.; Klein, A.C.

    1996-05-01

    Modeling efforts are undertaken to perform coupled thermal-hydraulic and thermionic analysis for both single-cell and multicell thermionic fuel elements (TFE). The analysis--and the resulting MCTFE computer code (multicell thermionic fuel element)--is a steady-state finite volume model specifically designed to analyze cylindrical TFEs. It employs an iterative successive overrelaxation solution technique to solve for the temperatures throughout the TFE and a coupled thermionic routine to determine the total TFE performance. The calculated results include temperature distributions in all regions of the TFE, axial interelectrode voltages and current densities, and total TFE electrical output parameters including power, current, and voltage. MCTFE-generated results are compared with experimental data from the single-cell Topaz-II-type TFE and with multicell data from the General Atomics 3H5 TFE to benchmark the accuracy of the code methods.
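    The successive overrelaxation idea can be illustrated with a minimal sketch (editorial, not the MCTFE code): a steady 2-D Laplace temperature solve on a unit square, with arbitrary hot and cold boundary values standing in for the emitter and collector sides; the geometry, boundary temperatures, and relaxation factor are assumptions.

```python
import numpy as np

def sor_steady_temperature(n=50, omega=1.8, tol=1e-6, max_sweeps=10000):
    """Successive over-relaxation for a steady 2-D Laplace temperature field
    on a unit square (generic stand-in, not a cylindrical TFE geometry)."""
    T = np.zeros((n, n))
    T[0, :] = 1000.0   # hot boundary (illustrative value)
    T[-1, :] = 600.0   # cooler boundary (illustrative value)
    for sweep in range(max_sweeps):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new = 0.25 * (T[i + 1, j] + T[i - 1, j] + T[i, j + 1] + T[i, j - 1])
                change = omega * (new - T[i, j])
                T[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            return T, sweep
    return T, max_sweeps

T, sweeps = sor_steady_temperature()
print(f"converged after {sweeps} sweeps, mid-plane temperature {T[25, 25]:.1f}")
```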

  15. Estimation of the physico-chemical parameters of materials based on rare earth elements with the application of computational model

    NASA Astrophysics Data System (ADS)

    Mamaev, K.; Obkhodsky, A.; Popov, A.

    2016-01-01

    The computational model, technique, and basic operating principles of a program complex for quantum-chemical calculation of the physico-chemical parameters of materials containing rare earth elements are discussed. The computing system is scalable and includes both CPU and GPU resources. Job control and management, together with the Globus Toolkit 5 software, make it possible to join users' computers into a unified peer-to-peer data processing system. CUDA software is used to integrate graphics processors into the computing system.

  16. Computing ferrite core losses at high frequency by finite elements method including temperature influence

    SciTech Connect

    Ahmed, B.; Ahmad, J.; Guy, G.

    1994-09-01

    A finite element method coupled with the Preisach model of hysteresis is used to compute the ferrite losses in medium-power transformers (10--60 kVA) working at relatively high frequencies (20--60 kHz) and with an excitation level of about 0.3 Tesla. The dynamic evolution of the permeability is taken into account. Simple and double cubic spline functions are used to account for temperature effects on the electric and magnetic parameters of the ferrite cores, respectively. The results are compared with test data obtained with 3C8 and B50 ferrites at different frequencies.

  17. Symbolic algorithms for the computation of Moshinsky brackets and nuclear matrix elements

    NASA Astrophysics Data System (ADS)

    Ursescu, D.; Tomaselli, M.; Kuehl, T.; Fritzsche, S.

    2005-12-01

    To facilitate the use of the extended nuclear shell model (NSM), a FERMI module for calculating some of its basic quantities in the framework of MAPLE is provided. The Moshinsky brackets, the matrix elements for several central and non-central interactions between nuclear two-particle states, as well as their expansion in terms of Talmi integrals, are easily given within a symbolic formulation. All of these quantities are available for interactive work. Program summary. Title of program: Fermi. Catalogue identifier: ADVO. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVO. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: None. Computer for which the program is designed and others on which it has been tested: all computers with a licence for the computer algebra package MAPLE [Maple is a registered trademark of Waterloo Maple Inc., produced by the MapleSoft division of Waterloo Maple Inc.]. Installations: GSI-Darmstadt; University of Kassel (Germany). Operating systems or monitors under which the program has been tested: Windows XP, Linux 2.4. Programming language used: MAPLE 8 and 9.5 from the MapleSoft division of Waterloo Maple Inc. Memory required to execute with typical data: 30 MB. No. of lines in distributed program including test data etc.: 5742. No. of bytes in distributed program including test data etc.: 288 939. Distribution format: tar.gz. Nature of the physical problem: in order to perform calculations within the nuclear shell model (NSM), quick and reliable access to the nuclear matrix elements is required. These matrix elements, which arise from various types of forces among the nucleons, can be calculated using Moshinsky's transformation brackets between relative and center-of-mass coordinates [T.A. Brody, M. Moshinsky, Tables of Transformation Brackets, Monografias del Instituto de Fisica, Universidad Nacional Autonoma de Mexico, 1960] and by the proper use of the nuclear states in different coupling notations

  18. Computing interaural differences through finite element modeling of idealized human heads.

    PubMed

    Cai, Tingli; Rakerd, Brad; Hartmann, William M

    2015-09-01

    Acoustical interaural differences were computed for a succession of idealized shapes approximating the human head-related anatomy: sphere, ellipsoid, and ellipsoid with neck and torso. Calculations were done as a function of frequency (100-2500 Hz) and for source azimuths from 10 to 90 degrees using finite element models. The computations were compared to free-field measurements made with a manikin. Compared to a spherical head, the ellipsoid produced greater large-scale variation with frequency in both interaural time differences and interaural level differences, resulting in better agreement with the measurements. Adding a torso, represented either as a large plate or as a rectangular box below the neck, further improved the agreement by adding smaller-scale frequency variation. The comparisons permitted conjectures about the relationship between details of interaural differences and gross features of the human anatomy, such as the height of the head, and length of the neck.
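    For a feel for the magnitudes involved, the sketch below evaluates the classical spherical-head (Woodworth) approximation of the interaural time difference, a simple analytic baseline one might set against finite element results of the kind described above; the head radius and speed of sound are generic assumed values, and the formula is not taken from the paper.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time difference (s)."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

for az in (10, 30, 60, 90):
    print(f"azimuth {az:2d} deg: ITD ~ {1e6 * woodworth_itd(az):.0f} microseconds")
```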

  19. Aspects of implementing constant traction boundary conditions in computational homogenization via semi-Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Javili, A.; Saeb, S.; Steinmann, P.

    2017-01-01

    In the past decades computational homogenization has proven to be a powerful strategy to compute the overall response of continua. Central to computational homogenization is the Hill-Mandel condition. The Hill-Mandel condition is fulfilled via imposing displacement boundary conditions (DBC), periodic boundary conditions (PBC) or traction boundary conditions (TBC) collectively referred to as canonical boundary conditions. While DBC and PBC are widely implemented, TBC remains poorly understood, with a few exceptions. The main issue with TBC is the singularity of the stiffness matrix due to rigid body motions. The objective of this manuscript is to propose a generic strategy to implement TBC in the context of computational homogenization at finite strains. To eliminate rigid body motions, we introduce the concept of semi-Dirichlet boundary conditions. Semi-Dirichlet boundary conditions are non-homogeneous Dirichlet-type constraints that simultaneously satisfy the Neumann-type conditions. A key feature of the proposed methodology is its applicability for both strain-driven as well as stress-driven homogenization. The performance of the proposed scheme is demonstrated via a series of numerical examples.
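    Once the RVE boundary value problem has been solved under any of the canonical boundary conditions, the macroscopic stress follows from a volume average of the microscopic stress. The helper below sketches only that averaging step (an editorial illustration, not the semi-Dirichlet implementation of the paper); the toy element stresses and volumes are assumptions.

```python
import numpy as np

def macroscopic_stress(element_stresses, element_volumes):
    """Volume average of the microscopic stress over an RVE:
    <sigma> = (1/V) * sum_e sigma_e * v_e."""
    volumes = np.asarray(element_volumes, dtype=float)
    stresses = np.asarray(element_stresses, dtype=float)   # shape (n_elem, 3, 3)
    return np.einsum("eij,e->ij", stresses, volumes) / volumes.sum()

# Two-element toy RVE with uniaxial micro stress states
sig = np.array([np.diag([1.0, 0.0, 0.0]), np.diag([3.0, 0.0, 0.0])])
print(macroscopic_stress(sig, [0.25, 0.75]))
```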

  20. Tying Theory To Practice: Cognitive Aspects of Computer Interaction in the Design Process.

    ERIC Educational Resources Information Center

    Mikovec, Amy E.; Dake, Dennis M.

    The new medium of computer-aided design requires changes to the creative problem-solving methodologies typically employed in the development of new visual designs. Most theoretical models of creative problem-solving suggest a linear progression from preparation and incubation to some type of evaluative study of the "inspiration." These…

  1. Single Element 2-DIMENSIONAL Acousto-Optic Deflectors Design, Fabrication and Implementation for Digital Optical Computing

    NASA Astrophysics Data System (ADS)

    Rosemeier, Jolanta Iwona

    1992-09-01

    With the need for computers much faster than conventional digital chip-based systems, the future is very bright for optics-based signal processing. Attention has turned to a different application of optics utilizing mathematical operations, in which case operations are numerical, sometimes discrete, and often algebraic in nature. Interest has been so vigorous that many view it as a small revolution in optics whereby optical signal processing is beginning to encompass what many frequently describe as optical computing. The term is fully intended to imply close comparison with the operations performed by scientific digital computers. Most present computation-intensive problem-solving processors rely on a common set of linear equations found in numerical matrix algebra. Recently, considerable research has focused on the use of systolic arrays, which can operate at high speeds with great efficiency. This approach addresses acousto-optic digital and analog arrays utilizing three-dimensional optical interconnect technology. In part I of this dissertation the first single-element 2-dimensional (2-D) acousto-optic deflector was designed, fabricated and incorporated into an optical 3 x 3 vector-vector or matrix-matrix multiplier system. This single-element deflector is used as an outer-product device. The input vectors are addressed by electronic means and the outer-product matrix is displayed as a 2-D array of optical (laser) pixels. In part II of this work a multichannel single-element 2-D deflector was designed, fabricated and implemented in a Programmable Logic Array (PLA) optical computing system. This system can be used for word equality detection, free-space optical interconnections, and half-adder optical system implementation. The PLA system described in this dissertation has the capability of word equality detection. The 2-D multichannel deflector that was designed and fabricated is capable of comparing 16 x 16 words every 316 nanoseconds. Each word is 8

  2. 2nd International Symposium on Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering (REES-2015)

    NASA Astrophysics Data System (ADS)

    Tavadyan, Levon, Prof; Sachkov, Viktor, Prof; Godymchuk, Anna, Dr.; Bogdan, Anna

    2016-01-01

    The 2nd International Symposium «Fundamental Aspects of Rare-earth Elements Mining and Separation and Modern Materials Engineering» (REES2015) was jointly organized by Tomsk State University (Russia), the National Academy of Science (Armenia), Shenyang Polytechnic University (China), the Moscow Institute of Physics and Engineering (Russia), the Siberian Physical-technical Institute (Russia), and Tomsk Polytechnic University (Russia) on September 7-15, 2015, in Belokuriha, Russia. The Symposium provided high-quality presentations and gathered engineers, scientists, academicians, and young researchers working in the field of rare and rare earth elements mining, modification, separation, elaboration and application, in order to facilitate the sharing of interests and results and to improve collaboration and the visibility of activities. The goal of REES2015 was to bring researchers and practitioners together to share the latest knowledge on rare and rare earth element technologies. The Symposium was aimed at presenting new trends in rare and rare earth elements mining, research and separation and recent achievements in advanced materials elaboration and development for different purposes, as well as strengthening the already existing contacts between manufacturers, highly qualified specialists and young scientists. The topics of REES2015 were: (1) Problems of extraction and separation of rare and rare earth elements; (2) Methods and approaches to the separation and isolation of rare and rare earth elements with ultra-high purity; (3) Industrial technologies of production and separation of rare and rare earth elements; (4) Economic aspects in technology of rare and rare earth elements; and (5) Rare and rare earth based materials (application in metallurgy, catalysis, medicine, optoelectronics, etc.). We want to thank the Organizing Committee, the Universities and Sponsors supporting the Symposium, and everyone who contributed to the organization of the event and to

  3. A comparison of turbulence models in computing multi-element airfoil flows

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Menter, Florian; Durbin, Paul A.; Mansour, Nagi N.

    1994-01-01

    Four different turbulence models are used to compute the flow over a three-element airfoil configuration. These models are the one-equation Baldwin-Barth model, the one-equation Spalart-Allmaras model, a two-equation k-omega model, and a new one-equation Durbin-Mansour model. The flow is computed using the INS2D two-dimensional incompressible Navier-Stokes solver. An overset Chimera grid approach is utilized. Grid resolution tests are presented, and manual solution-adaptation of the grid was performed. The performance of each of the models is evaluated for test cases involving different angles-of-attack, Reynolds numbers, and flap riggings. The resulting surface pressure coefficients, skin friction, velocity profiles, and lift, drag, and moment coefficients are compared with experimental data. The models produce very similar results in most cases. Excellent agreement between computational and experimental surface pressures was observed, but only moderately good agreement was seen in the velocity profile data. In general, the difference between the predictions of the different models was less than the difference between the computational and experimental data.

  4. A comparative computational analysis of nonautonomous Helitron elements between maize and rice

    PubMed Central

    Sweredoski, Michael; DeRose-Wilson, Leah; Gaut, Brandon S

    2008-01-01

    Background: Helitrons are DNA transposable elements that are proposed to replicate via a rolling circle mechanism. Non-autonomous helitron elements have captured gene fragments from many genes in maize (Zea mays ssp. mays) but only a handful of genes in Arabidopsis (Arabidopsis thaliana). This observation suggests very different histories for helitrons in these two species, but it is unclear which species contains helitrons that are more typical of plants. Results: We performed computational searches to identify helitrons in maize and rice genomic sequence data. Using 12 previously identified helitrons as a seed set, we identified 23 helitrons in maize, five of which were polymorphic among a sample of inbred lines. Our total sample of maize helitrons contained fragments of 44 captured genes. Twenty-one of 35 of these helitrons did not cluster with other elements into closely related groups, suggesting substantial diversity in the maize element complement. We identified over 552 helitrons in the japonica rice genome. More than 70% of these were found in a collinear location in the indica rice genome, and 508 clustered as a single large subfamily. The japonica rice elements contained fragments of only 11 genes, a number similar to that in Arabidopsis. Given differences in gene capture between maize and rice, we examined sequence properties that could contribute to differences in capture rates, focusing on 3' palindromes that are hypothesized to play a role in transposition termination. The free energies of folding for maize helitrons were significantly lower than those for rice, but the direction of the difference differed from our prediction. Conclusion: Maize helitrons are clearly unique relative to those of rice and Arabidopsis in the prevalence of gene capture, but the reasons for this difference remain elusive. Maize helitrons do not seem to be more polymorphic among individuals than those of Arabidopsis; they do not appear to be substantially older or younger than

  5. A computational approach to the dynamic aspects of primitive auditory scene analysis.

    PubMed

    Kashino, Makio; Adachi, Eisuke; Hirose, Haruto

    2013-01-01

    Recent psychophysical and physiological studies demonstrated that auditory scene analysis (ASA) is inherently a dynamic process, suggesting that the system conducting ASA constantly changes itself, incorporating the dynamics of sound sources in the acoustic scene, to realize efficient and robust information processing. Here, we propose computational models of ASA based on two computational principles of ASA, namely, separation in a feature space and temporal regularity. We explicitly introduced learning processes, so that the system could autonomously develop its selectivity to features or bases for analyses according to the observed acoustic data. Simulation results demonstrated that the models were able to predict some essential features of behavioral properties of ASA, such as the buildup of streaming, multistable perception, and the segregation of repeated patterns embedded in distracting sounds.

  6. Computational aspects of hot-wire identification of thermal conductivity and diffusivity under high temperature

    NASA Astrophysics Data System (ADS)

    Vala, Jiří; Jarošová, Petra

    2016-07-01

    The development of advanced materials resistant to high temperature, needed in particular for the design of heat storage for low-energy and passive buildings, requires simple, inexpensive and reliable methods for identifying their temperature-sensitive thermal conductivity and diffusivity, covering both a well-designed experimental setup and the implementation of robust and effective computational algorithms. Special geometrical configurations offer the possibility of quasi-analytical evaluation of the temperature development for direct problems, whereas inverse problems of simultaneous evaluation of thermal conductivity and diffusivity must be handled carefully, using least-squares (minimum variance) arguments. This paper demonstrates a proper mathematical and computational approach to such a model problem, exploiting the radial symmetry of hot-wire measurements, including its numerical implementation.
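    A minimal sketch of the least-squares identification idea, assuming the standard ideal transient hot-wire model and synthetic noisy data (none of the numbers come from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA = 0.5772156649  # Euler's constant

def hot_wire_rise(t, conductivity, diffusivity, q=20.0, r=1e-3):
    """Long-time ideal-model temperature rise at radius r (m) from a line source
    of strength q (W/m): dT = q/(4*pi*lambda) * (ln(4*a*t/r^2) - gamma)."""
    return q / (4.0 * np.pi * conductivity) * (np.log(4.0 * diffusivity * t / r**2) - GAMMA)

# Synthetic "measurement" generated from assumed true values plus noise
t = np.linspace(5.0, 120.0, 200)
rng = np.random.default_rng(0)
data = hot_wire_rise(t, 1.5, 8.0e-7) + rng.normal(0.0, 0.02, t.size)

(lam, alpha), _ = curve_fit(hot_wire_rise, t, data, p0=(1.0, 1.0e-6),
                            bounds=([0.1, 1e-8], [10.0, 1e-4]))
print(f"identified conductivity {lam:.3f} W/(m K), diffusivity {alpha:.2e} m^2/s")
```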

  7. Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning.

    PubMed

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2016-01-01

    Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits.
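    The kind of computational modelling referred to here is often implemented as a reinforcement-learning model with separate learning rates for better- and worse-than-expected outcomes. The sketch below is an editorial illustration of that general idea, not the authors' exact model; the task probabilities, softmax temperature, and learning rates are assumptions.

```python
import numpy as np

def simulate_learner(alpha_gain, alpha_loss, n_trials=200, p_reward=(0.8, 0.2), seed=0):
    """Two-armed probabilistic selection task with separate learning rates for
    positive and negative prediction errors (approach vs. avoidance learning)."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    beta = 3.0                      # softmax inverse temperature
    correct = 0
    for _ in range(n_trials):
        p_choose = np.exp(beta * q) / np.exp(beta * q).sum()
        choice = rng.choice(2, p=p_choose)
        reward = float(rng.random() < p_reward[choice])
        delta = reward - q[choice]  # prediction error
        rate = alpha_gain if delta > 0 else alpha_loss
        q[choice] += rate * delta
        correct += (choice == 0)    # arm 0 is the objectively better option
    return correct / n_trials

print("approach-biased learner:", simulate_learner(alpha_gain=0.3, alpha_loss=0.05))
print("avoidance-biased learner:", simulate_learner(alpha_gain=0.05, alpha_loss=0.3))
```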

  8. Linking Individual Learning Styles to Approach-Avoidance Motivational Traits and Computational Aspects of Reinforcement Learning

    PubMed Central

    Carl Aberg, Kristoffer; Doell, Kimberly C.; Schwartz, Sophie

    2016-01-01

    Learning how to gain rewards (approach learning) and avoid punishments (avoidance learning) is fundamental for everyday life. While individual differences in approach and avoidance learning styles have been related to genetics and aging, the contribution of personality factors, such as traits, remains undetermined. Moreover, little is known about the computational mechanisms mediating differences in learning styles. Here, we used a probabilistic selection task with positive and negative feedbacks, in combination with computational modelling, to show that individuals displaying better approach (vs. avoidance) learning scored higher on measures of approach (vs. avoidance) trait motivation, but, paradoxically, also displayed reduced learning speed following positive (vs. negative) outcomes. These data suggest that learning different types of information depend on associated reward values and internal motivational drives, possibly determined by personality traits. PMID:27851807

  9. CAVASS: a computer-assisted visualization and analysis software system - image processing aspects

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad

    2007-03-01

    The development of the concepts within 3DVIEWNIX and of the software system 3DVIEWNIX itself dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images came out from our group, 3DVIEWNIX released in 1993, being the most recent, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and their highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interface to other systems such as CAD/CAM software, ITK, and statistical packages; (6) easy to use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions based on assessing performance by utilizing a regular (6 MB), large (241 MB), and a super (873 MB) 3D image data set are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is easy and ready to use in applications since it provides an easy to use GUI. The users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less

  10. Computational aspects of crack growth in sandwich plates from reinforced concrete and foam

    NASA Astrophysics Data System (ADS)

    Papakaliatakis, G.; Panoskaltsis, V. P.; Liontas, A.

    2012-12-01

    In this work we study the initiation and propagation of cracks in sandwich plates made from reinforced concrete in the boundaries and from a foam polymeric material in the core. A nonlinear finite element approach is followed. Concrete is modeled as an elastoplastic material with its tensile behavior and damage taken into account. Foam is modeled as a crushable, isotropic compressible material. We analyze slabs with a pre-existing macro crack at the position of the maximum bending moment and we study the macrocrack propagation, as well as the condition under which we have crack arrest.

  11. A computer lab exploring evolutionary aspects of chromatin structure and dynamics for an undergraduate chromatin course*.

    PubMed

    Eirín-López, José M

    2013-01-01

    The study of chromatin constitutes one of the most active research fields in life sciences, being subject to constant revisions that continuously redefine the state of the art in its knowledge. As every other rapidly changing field, chromatin biology requires clear and straightforward educational strategies able to efficiently translate such a vast body of knowledge to the classroom. With this aim, the present work describes a multidisciplinary computer lab designed to introduce undergraduate students to the dynamic nature of chromatin, within the context of the one semester course "Chromatin: Structure, Function and Evolution." This exercise is organized in three parts including (a) molecular evolutionary biology of histone families (using the H1 family as example), (b) histone structure and variation across different animal groups, and (c) effect of histone diversity on nucleosome structure and chromatin dynamics. By using freely available bioinformatic tools that can be run on common computers, the concept of chromatin dynamics is interactively illustrated from a comparative/evolutionary perspective. At the end of this computer lab, students are able to translate the bioinformatic information into a biochemical context in which the relevance of histone primary structure on chromatin dynamics is exposed. During the last 8 years this exercise has proven to be a powerful approach for teaching chromatin structure and dynamics, allowing students a higher degree of independence during the processes of learning and self-assessment.

  12. Computer-aided manufacturing for freeform optical elements by ultraprecision micromilling

    NASA Astrophysics Data System (ADS)

    Stoebenau, Sebastian; Kleindienst, Roman; Hofmann, Meike; Sinzinger, Stefan

    2011-09-01

    The successful fabrication of several freeform optical elements by ultraprecision micromilling is presented in this article. We discuss in detail the generation of the tool paths using different variations of a computer-aided manufacturing (CAM) process. Following a classical CAM approach, a reflective beam shaper was fabricated. The approach is based on a solid model calculated by optical design software. As no analytical description of the surface is needed, this procedure is the most general solution for the programming of the tool paths. A second approach is based on the same design data, but instead of a solid model, a higher-order polynomial was fitted to the data using computational methods. Taking advantage of the direct programming capabilities of state-of-the-art computerized numerical control units, the mathematics to calculate the polynomial-based tool paths on-the-fly during the machining process is implemented in a highly flexible CNC code. As another example of this programming method, the fabrication of a biconic lens from a closed analytical description directly derived from the optical design is shown. We provide details about the different programming methods and the fabrication processes as well as the results of characterizations concerning surface quality and shape accuracy of the freeform optical elements.
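    The second programming approach, fitting a higher-order polynomial to the design data, can be sketched as a plain least-squares problem (an editorial illustration; the synthetic surface, polynomial order, and helper names are assumptions, and real tool-path generation involves much more):

```python
import numpy as np

def fit_polynomial_surface(x, y, z, order=4):
    """Least-squares fit of a bivariate polynomial z(x, y) up to a given total order."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coeffs

def evaluate_surface(terms, coeffs, x, y):
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coeffs))

# Synthetic freeform-like test surface sampled at scattered points
rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
z = 0.2 * x**2 - 0.1 * x * y + 0.05 * y**3
terms, coeffs = fit_polynomial_surface(x, y, z)
print("max fit error:", np.max(np.abs(evaluate_surface(terms, coeffs, x, y) - z)))
```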

  13. The multi-element mercuric iodide detector array with computer controlled miniaturized electronics for EXAFS

    SciTech Connect

    Patt, B.E.; Iwanczyk, J.S.; Szczebiot, R.; Maculewicz, G.; Wang, M.; Wang, Y.J.; Hedman, B.; Hodgson, K.O.; Cox, A.D. |

    1995-08-01

    Construction of a 100-element HgI2 detector array, with miniaturized electronics, and software developed for synchrotron applications in the 5 keV to 35 keV region has been completed. Recently, extended x-ray absorption fine structure (EXAFS) data on dilute (approximately 1 mM) metallo-protein samples were obtained with up to seventy-five elements of the system installed. The data quality obtained is excellent and shows that the detector is quite competitive as compared to commercially available systems. The system represents the largest detector array ever developed for high-resolution, high-count-rate x-ray synchrotron applications. It also represents the first development and demonstration of high-density miniaturized spectroscopy electronics with this high level of performance. Lastly, the integration of the whole system into an automated computer-controlled environment represents a major advancement in the user interface for XAS measurements. These experiments clearly demonstrate that the HgI2 system, with the miniaturized electronics and associated computer control, functions well. In addition, they show that the new system provides superior ease of use and functionality, and that data quality is as good as or better than with state-of-the-art cryogenically cooled Ge systems.

  14. Efficient Computation of Info-Gap Robustness for Finite Element Models

    SciTech Connect

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
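    As a toy illustration of an info-gap robustness function for an Ax = b model (editorial; not the adjoint methodology of the report), the sketch below assumes interval uncertainty in the load vector, measures performance by a linear output of the solution, and returns the largest uncertainty horizon for which the worst case still satisfies a critical performance level; the closed-form worst case relies on the simple uncertainty model assumed here.

```python
import numpy as np

def info_gap_robustness(A, b_nominal, c, perf_critical):
    """Largest horizon h such that c^T x stays below perf_critical for every
    load b with ||b - b_nominal||_inf <= h, where A x = b."""
    x_nominal = np.linalg.solve(A, b_nominal)
    nominal_perf = c @ x_nominal
    # Worst-case growth per unit horizon: sup over ||db||_inf <= 1 of c^T A^{-1} db
    sensitivity = np.linalg.norm(np.linalg.solve(A.T, c), ord=1)
    if nominal_perf > perf_critical:
        return 0.0   # the nominal model already violates the requirement
    return (perf_critical - nominal_perf) / sensitivity

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])
print("robustness horizon:", info_gap_robustness(A, b, c, perf_critical=0.5))
```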

  15. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  16. Adaptive finite element simulation of flow and transport applications on parallel computers

    NASA Astrophysics Data System (ADS)

    Kirk, Benjamin Shelton

    The subject of this work is the adaptive finite element simulation of problems arising in flow and transport applications on parallel computers. Of particular interest are new contributions to adaptive mesh refinement (AMR) in this parallel high-performance context, including novel work on data structures, treatment of constraints in a parallel setting, generality and extensibility via object-oriented programming, and the design/implementation of a flexible software framework. This technology and software capability then enables more robust, reliable treatment of multiscale--multiphysics problems and specific studies of fine scale interaction such as those in biological chemotaxis (Chapter 4) and high-speed shock physics for compressible flows (Chapter 5). The work begins by presenting an overview of key concepts and data structures employed in AMR simulations. Of particular interest is how these concepts are applied in the physics-independent software framework which is developed here and is the basis for all the numerical simulations performed in this work. This open-source software framework has been adopted by a number of researchers in the U.S. and abroad for use in a wide range of applications. The dynamic nature of adaptive simulations pose particular issues for efficient implementation on distributed-memory parallel architectures. Communication cost, computational load balance, and memory requirements must all be considered when developing adaptive software for this class of machines. Specific extensions to the adaptive data structures to enable implementation on parallel computers is therefore considered in detail. The libMesh framework for performing adaptive finite element simulations on parallel computers is developed to provide a concrete implementation of the above ideas. This physics-independent framework is applied to two distinct flow and transport applications classes in the subsequent application studies to illustrate the flexibility of the

  17. Computer-Aided Drug Design (CADD): Methodological Aspects and Practical Applications in Cancer Research

    NASA Astrophysics Data System (ADS)

    Gianti, Eleonora

    Computer-Aided Drug Design (CADD) has deservedly gained increasing popularity in modern drug discovery (Schneider, G.; Fechner, U. 2005), whether applied to academic basic research or the pharmaceutical industry pipeline. In this work, after reviewing theoretical advancements in CADD, we integrated novel and state-of-the-art methods to assist in the design of small-molecule inhibitors of current cancer drug targets, specifically: Androgen Receptor (AR), a nuclear hormone receptor required for carcinogenesis of Prostate Cancer (PCa); Signal Transducer and Activator of Transcription 5 (STAT5), implicated in PCa progression; and Epstein-Barr Nuclear Antigen-1 (EBNA1), essential to the Epstein Barr Virus (EBV) during latent infections. Androgen Receptor. With the aim of generating binding mode hypotheses for a class (Handratta, V.D. et al. 2005) of dual AR/CYP17 inhibitors (CYP17 is a key enzyme for androgen biosynthesis and therefore implicated in PCa development), we successfully implemented a receptor-based computational strategy based on flexible receptor docking (Gianti, E.; Zauhar, R.J. 2012). Then, with the ultimate goal of identifying novel AR binders, we performed Virtual Screening (VS) by Fragment-Based Shape Signatures, an improved version of the original method developed in our Laboratory (Zauhar, R.J. et al. 2003), and we used the results to fully assess the high-level performance of this innovative tool in computational chemistry. STAT5. The SRC Homology 2 (SH2) domain of STAT5 is responsible for phospho-peptide recognition and activation. As a keystone of Structure-Based Drug Design (SBDD), we characterized key residues responsible for binding. We also generated a model of STAT5 receptor bound to a phospho-peptide ligand, which was validated by docking publicly known STAT5 inhibitors. Then, we performed Shape Signatures- and docking-based VS of the ZINC database (zinc.docking.org), followed by Molecular Mechanics Generalized Born Surface Area (MMGBSA

  18. Some computational aspects of the hals (harmonic analysis of x-ray line shape) method

    SciTech Connect

    Moshkina, T.I.; Nakhmanson, M.S.

    1986-02-01

    This paper discusses the problem of separating the analytical line from the background and of approximating the background component. One constituent part of the procedural-mathematical software package for x-ray investigations of polycrystalline substances on the DRON-3, DRON-2 and ADP-1 diffractometers is the SSF system of programs, which is designed for determining the parameters of the substructure of materials. The SSF system is tailored not only to Unified Series (ES) computers, but also to the M-6000 and SM-1 minicomputers.

  19. Physics and engineering aspects of cell and tissue imaging systems: microscopic devices and computer assisted diagnosis.

    PubMed

    Chen, Xiaodong; Ren, Liqiang; Zheng, Bin; Liu, Hong

    2013-01-01

    The conventional optical microscopes have been used widely in scientific research and in clinical practice. The modern digital microscopic devices combine the power of optical imaging and computerized analysis, archiving and communication techniques. It has a great potential in pathological examinations for improving the efficiency and accuracy of clinical diagnosis. This chapter reviews the basic optical principles of conventional microscopes, fluorescence microscopes and electron microscopes. The recent developments and future clinical applications of advanced digital microscopic imaging methods and computer assisted diagnosis schemes are also discussed.

  20. Precise Boundary Element Computation of Protein Transport Properties: Diffusion Tensors, Specific Volume, and Hydration

    PubMed Central

    Aragon, Sergio; Hahn, David K.

    2006-01-01

    A precise boundary element method for the computation of hydrodynamic properties has been applied to the study of a large suite of 41 soluble proteins ranging from 6.5 to 377 kDa in molecular mass. A hydrodynamic model consisting of a rigid protein excluded volume, obtained from crystallographic coordinates, surrounded by a uniform hydration thickness has been found to yield properties in excellent agreement with experiment. The hydration thickness was determined to be δ = 1.1 ± 0.1 Å. Using this value, standard deviations from experimental measurements are: 2% for the specific volume; 2% for the translational diffusion coefficient, and 6% for the rotational diffusion coefficient. These deviations are comparable to experimental errors in these properties. The precision of the boundary element method allows the unified description of all of these properties with a single hydration parameter, thus far not achieved with other methods. An approximate method for computing transport properties with a statistical precision of 1% or better (compared to 0.1–0.2% for the full computation) is also presented. We have also estimated the total amount of hydration water with a typical −9% deviation from experiment in the case of monomeric proteins. Both the water of hydration and the more precise translational diffusion data hint that some multimeric proteins may not have the same solution structure as that in the crystal because the deviations are systematic and larger than in the monomeric case. On the other hand, the data for monomeric proteins conclusively show that there is no difference in the protein structure going from the crystal into solution. PMID:16714342
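    For orientation only, the crude spherical estimate below applies the Stokes-Einstein relation to a sphere enlarged by the 1.1 Angstrom hydration thickness reported in the abstract; the protein radius, temperature, and viscosity are generic assumptions, and this is in no way a substitute for the boundary element computation.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def translational_diffusion(anhydrous_radius_nm, hydration_nm=0.11,
                            temperature=293.15, viscosity=1.0e-3):
    """Stokes-Einstein D_t = k_B T / (6 pi eta R) for a hydrated sphere (m^2/s)."""
    radius = (anhydrous_radius_nm + hydration_nm) * 1e-9
    return K_B * temperature / (6.0 * np.pi * viscosity * radius)

# Rough size of a small ~14 kDa protein such as lysozyme (illustrative)
print(f"D_t ~ {translational_diffusion(1.9) * 1e4:.2e} cm^2/s")
```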

  1. MPSalsa: a finite element computer program for reacting flow problems. Part 2 - user's guide

    SciTech Connect

    Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.

    1996-09-01

    This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.

  2. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid-site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in porous media with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  3. Biophysical and biochemical aspects of antifreeze proteins: Using computational tools to extract atomistic information.

    PubMed

    Kar, Rajiv K; Bhunia, Anirban

    2015-11-01

    Antifreeze proteins (AFPs) are the key biomolecules that protect species from extreme climatic conditions. Studies of AFPs, which are based on recognition of ice plane and structural motifs, have provided vital information that point towards the mechanism responsible for executing antifreeze activity. Importantly, the use of experimental techniques has revealed key information for AFPs, but the exact microscopic details are still not well understood, which limits the application and design of novel antifreeze agents. The present review focuses on the importance of computational tools for investigating (i) molecular properties, (ii) structure-function relationships, and (iii) AFP-ice interactions at atomistic levels. In this context, important details pertaining to the methodological approaches used in molecular dynamics studies of AFPs are also discussed. It is hoped that the information presented herein is helpful for enriching our knowledge of antifreeze properties, which can potentially pave the way for the successful design of novel antifreeze biomolecular agents.

  4. Geometric Aspects of Discretized Classical Field Theories: Extensions to Finite Element Exterior Calculus, Noether Theorems, and the Geodesic Finite Element Method

    NASA Astrophysics Data System (ADS)

    Salamon, Joe

    In this dissertation, I will discuss and explore the various theoretical pillars required to investigate the world of discretized gauge theories in a purely classical setting, with the long-term aim of achieving a fully-fledged discretization of General Relativity (GR). I will start with a brief review of differential forms, then present some results on the geometric framework of finite element exterior calculus (FEEC); in particular, I will elaborate on integrating metric structures within the framework and categorize the dual spaces of the various spaces of polynomial differential forms P_r Λ^k(R^n). After a brief pedagogical detour on Noether's two theorems, I will apply all of the above to discretizations of electromagnetism and linearized GR. I will conclude with an excursion into the geodesic finite element method (GFEM) as a way to generalize some of the above notions to curved manifolds.

  5. Computational aspects of the nonlinear normal mode initialization of the GLAS 4th order GCM

    NASA Technical Reports Server (NTRS)

    Navon, I. M.; Bloom, S. C.; Takacs, L.

    1984-01-01

    Using the normal modes of the GLAS 4th Order Model, a Machenhauer nonlinear normal mode initialization (NLNMI) was carried out for the external vertical mode using the GLAS 4th Order shallow water equations model for an equivalent depth corresponding to that associated with the external vertical mode. A simple procedure was devised which was directed at identifying computational modes by following the rate of increase of BAL_M, the partial (with respect to the zonal wavenumber m) sum of squares of the time change of the normal mode coefficients (for fixed vertical mode index) varying over the latitude index L of symmetric or antisymmetric gravity waves. A working algorithm is presented which speeds up the convergence of the iterative Machenhauer NLNMI. A 24 h integration using the NLNMI state was carried out using both Matsuno and leap-frog time-integration schemes; these runs were then compared to a 24 h integration starting from a non-initialized state. The maximal impact of the nonlinear normal mode initialization was found to occur 6-10 hours after the initial time.
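
    The balance condition behind a Machenhauer initialization can be written schematically as follows; this is the standard textbook form, with symbols of my own choosing, not an equation quoted from the report. Each gravity-mode coefficient y with frequency omega obeys a normal-mode equation whose tendency is set to zero, yielding a fixed-point iteration:

        % Schematic Machenhauer balance condition (standard form, not from the paper):
        % the gravity-mode tendency dy/dt is set to zero and the resulting relation
        % is iterated until the tendencies are negligible.
        \begin{align}
          \frac{dy}{dt} + i\omega\, y &= N(y), &
          y^{(k+1)} &= \frac{N\!\left(y^{(k)}\right)}{i\omega}.
        \end{align}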

  6. Preprocessor and postprocessor computer programs for a radial-flow finite-element model

    USGS Publications Warehouse

    Pucci, A.A.; Pope, D.A.

    1987-01-01

    Preprocessing and postprocessing computer programs that enhance the utility of the U.S. Geological Survey radial-flow model have been developed. The preprocessor program: (1) generates a triangular finite element mesh from minimal data input, (2) produces graphical displays and tabulations of data for the mesh, and (3) prepares an input data file to use with the radial-flow model. The postprocessor program is a version of the radial-flow model, which was modified to (1) produce graphical output for simulation and field results, (2) generate a statistic for comparing the simulation results with observed data, and (3) allow hydrologic properties to vary in the simulated region. Examples of the use of the processor programs for a hypothetical aquifer test are presented. Instructions for the data files, format instructions, and a listing of the preprocessor and postprocessor source codes are given in the appendixes. (Author's abstract)

  7. RELATIONSHIP BETWEEN RIGIDITY OF EXTERNAL FIXATOR AND NUMBER OF PINS: COMPUTER ANALYSIS USING FINITE ELEMENTS

    PubMed Central

    Sternick, Marcelo Back; Dallacosta, Darlan; Bento, Daniela Águida; do Reis, Marcelo Lemos

    2015-01-01

    Objective: To analyze the rigidity of a platform-type external fixator assembly, according to different numbers of pins on each clamp. Methods: Computer simulation on a large-sized Cromus dynamic external fixator (Baumer SA) was performed using a finite element method, in accordance with the standard ASTM F1541. The models were generated with approximately 450,000 quadratic tetrahedral elements. Assemblies with two, three and four Schanz pins of 5.5 mm in diameter in each clamp were compared. Every model was subjected to a maximum force of 200 N, divided into 10 sub-steps. For the components, the behavior of the material was assumed to be linear, elastic, isotropic and homogeneous. For each model, the rigidity of the assembly and the Von Mises stress distribution were evaluated. Results: The rigidity of the system was 307.6 N/mm for two pins, 369.0 N/mm for three and 437.9 N/mm for four. Conclusion: The results showed that four Schanz pins in each clamp promoted rigidity that was 19% greater than in the configuration with three pins and 42% greater than with two pins. Higher tension occurred in configurations with fewer pins. In the models analyzed, the maximum tension occurred on the surface of the pin, close to the fixation area. PMID:27047879

  8. Theory and computation of electromagnetic transition matrix elements in the continuous spectrum of atoms

    NASA Astrophysics Data System (ADS)

    Komninos, Yannis; Mercouris, Theodoros; Nicolaides, Cleanthes A.

    2017-01-01

    The present study examines the mathematical properties of the free-free (f-f) matrix elements of the full electric field operator, O_E(kappa, r), of the multipolar Hamiltonian. kappa is the photon wavenumber. Special methods are developed and applied for their computation, for the general case where the scattering wavefunctions are calculated numerically in the potential of the term-dependent (N - 1)-electron core, and are energy-normalized. It is found that, on the energy axis, the f-f matrix elements of O_E(kappa, r) have singularities of first order, i.e., as epsilon' -> epsilon, they behave as (epsilon - epsilon')^(-1). The numerical applications are for f-f transitions in hydrogen and neon, obeying electric dipole and quadrupole selection rules. In the limit kappa = 0, O_E(kappa, r) reduces to the length form of the electric dipole approximation (EDA). It is found that the results for the EDA agree with those of O_E(kappa, r), with the exception of a wave-number region k' = k +/- kappa about the point k' = k.

  9. Development of a numerical computer code and circuit element models for simulation of firing systems

    SciTech Connect

    Carpenter, K.H. . Dept. of Electrical and Computer Engineering)

    1990-07-02

    Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MS-DOS-based personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U. C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite-cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a non-ideal switch, and which can be added to the existing SPICE circuits without changing the SPICE code itself. The effect of fast rise time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.

  10. Predicting mouse vertebra strength with micro-computed tomography-derived finite element analysis.

    PubMed

    Nyman, Jeffry S; Uppuganti, Sasidhar; Makowski, Alexander J; Rowland, Barbara J; Merkel, Alyssa R; Sterling, Julie A; Bredbenner, Todd L; Perrien, Daniel S

    2015-01-01

    As in clinical studies, finite element analyses (FEA) developed from computed tomography (CT) images of bones are useful in pre-clinical rodent studies assessing treatment effects on vertebral body (VB) strength. Since strength predictions from microCT-derived FEAs (muFEA) have not been validated against experimental measurements of mouse VB strength, a parametric analysis exploring material and failure definitions was performed to determine whether elastic muFEAs with linear failure criteria could reasonably assess VB strength in two studies, treatment and genetic, with differences in bone volume fraction between the control and the experimental groups. VBs were scanned with a 12-micrometer voxel size, and voxels were directly converted to 8-node, hexahedral elements. The coefficient of determination (R^2) between predicted VB strength and experimental VB strength, as determined from compression tests, was 62.3% for the treatment study and 85.3% for the genetic study when using a homogeneous tissue modulus (E_t) of 18 GPa for all elements, a failure volume of 2%, and an equivalent failure strain of 0.007. The difference between prediction and measurement (that is, error) increased when lowering the failure volume to 0.1% or increasing it to 4%. Using inhomogeneous, tissue density-specific moduli improved the R^2 between predicted and experimental strength when compared with the uniform E_t = 18 GPa. Also, the optimum failure volume is higher for the inhomogeneous than for the homogeneous material definition. Regardless of model assumptions, muFEA can assess differences in murine VB strength between experimental groups when the expected difference in strength is at least 20%.
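
    The direct voxel-to-element conversion described above can be pictured with a minimal sketch; the function name, array layout, units, and the homogeneous 18 GPa modulus assignment are my own illustrative assumptions rather than code from the study:

        import numpy as np

        def voxels_to_hex(mask, voxel_size=12e-3, modulus=18.0):
            """Convert a binary voxel mask (nx, ny, nz) into an 8-node hexahedral mesh.

            Returns node coordinates (n_nodes, 3), element connectivity (n_elem, 8),
            and a per-element modulus array (homogeneous here, e.g. 18 GPa).
            Minimal sketch; layout and units are illustrative assumptions.
            """
            nx, ny, nz = mask.shape
            node_id = -np.ones((nx + 1, ny + 1, nz + 1), dtype=np.int64)

            # Corner offsets of one hexahedron (bottom face then top face).
            corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                                [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]])

            elems = []
            for i, j, k in zip(*np.nonzero(mask)):          # loop over solid voxels only
                ids = []
                for di, dj, dk in corners:
                    ci, cj, ck = i + di, j + dj, k + dk
                    if node_id[ci, cj, ck] < 0:             # create node on first use
                        node_id[ci, cj, ck] = np.count_nonzero(node_id >= 0)
                    ids.append(node_id[ci, cj, ck])
                elems.append(ids)

            # Gather coordinates of the nodes actually used, ordered by node id.
            used = np.argwhere(node_id >= 0)
            order = np.argsort(node_id[node_id >= 0])
            nodes = used[order].astype(float) * voxel_size
            elements = np.array(elems, dtype=np.int64)
            E = np.full(len(elements), modulus)             # homogeneous tissue modulus
            return nodes, elements, E

        # Tiny usage example: a 2x2x2 solid block of voxels.
        mask = np.ones((2, 2, 2), dtype=bool)
        nodes, elements, E = voxels_to_hex(mask)
        print(nodes.shape, elements.shape, E[0])            # (27, 3) (8, 8) 18.0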

  11. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  12. [A case of shared psychotic disorder (folie à deux) with original aspects associated with cross-cultural elements].

    PubMed

    Cuoco, Valentina; Colletti, Chiara; Anastasia, Annalisa; Weisz, Filippo; Bersani, Giuseppe

    2015-01-01

    Shared psychotic disorder (folie à deux) is a rare condition characterized by the transmission of delusional aspects from a patient (the "dominant partner") to another (the "submissive partner") linked to the first by a close relationship. We report the case of two Moroccan sisters who have experienced a combined delusional episode diagnosed as shared psychotic disorder. In these circumstances, assessment of symptoms from a cross-cultural perspective is a key factor for proper diagnostic evaluation.

  13. Computational modeling of chemo-electro-mechanical coupling: A novel implicit monolithic finite element approach

    PubMed Central

    Wong, J.; Göktepe, S.; Kuhl, E.

    2014-01-01

    Summary: Computational modeling of the human heart allows us to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of an individual heart cell translates into global cardiac function. Here we propose a novel, unified strategy to simulate excitable biological systems across three biological scales. To discretize the governing chemical, electrical, and mechanical equations in space, we propose a monolithic finite element scheme. We apply a highly efficient and inherently modular global-local split, in which the deformation and the transmembrane potential are introduced globally as nodal degrees of freedom, while the chemical state variables are treated locally as internal variables. To ensure unconditional algorithmic stability, we apply an implicit backward Euler finite difference scheme to discretize the resulting system in time. To increase algorithmic robustness and guarantee optimal quadratic convergence, we suggest an incremental iterative Newton-Raphson scheme. The proposed algorithm allows us to simulate the interaction of chemical, electrical, and mechanical fields during a representative cardiac cycle on a patient-specific geometry, robustly and stably, with calculation times on the order of four days on a standard desktop computer. PMID:23798328
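
    The implicit backward Euler / Newton-Raphson pairing described above can be illustrated on a much simpler excitable-cell system. The following sketch uses a FitzHugh-Nagumo-like two-variable model with parameters and names of my own choosing; it is not the authors' cardiac model, only a miniature of the time-stepping idea:

        import numpy as np

        def excitable_rhs(y, a=0.1, b=0.5, eps=0.01, I=0.05):
            """Right-hand side of a simple two-variable excitable-cell model."""
            v, w = y
            return np.array([v * (v - a) * (1.0 - v) - w + I,
                             eps * (v - b * w)])

        def backward_euler_step(y_old, dt, rhs, tol=1e-10, max_iter=20):
            """One implicit backward Euler step solved with Newton-Raphson.

            Solves R(y) = y - y_old - dt * rhs(y) = 0 using a finite-difference
            Jacobian; mirrors, in miniature, the implicit monolithic scheme above.
            """
            y = y_old.copy()
            n = len(y)
            for _ in range(max_iter):
                R = y - y_old - dt * rhs(y)
                if np.linalg.norm(R) < tol:
                    break
                J = np.empty((n, n))
                h = 1e-7
                for j in range(n):                 # finite-difference Jacobian columns
                    yp = y.copy()
                    yp[j] += h
                    J[:, j] = (yp - y_old - dt * rhs(yp) - R) / h
                y = y - np.linalg.solve(J, R)      # Newton update
            return y

        # March a few implicit steps from a superthreshold initial state.
        y = np.array([0.3, 0.0])
        for _ in range(100):
            y = backward_euler_step(y, dt=0.5, rhs=excitable_rhs)
        print(y)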

  14. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.

    2016-05-01

    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element A_Qg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter epsilon. Given these representations, the desired Laurent series expansions in epsilon can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
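
    For orientation, the nested harmonic sums mentioned above are conventionally defined by the following recursion (a standard definition, not quoted from the paper); the generalized and cyclotomic sums extend this pattern with additional parameters:

        % Standard recursive definition of nested harmonic sums (not from the paper).
        \begin{equation}
          S_{a_1,\dots,a_k}(N) \;=\; \sum_{n=1}^{N}
            \frac{\bigl(\mathrm{sign}(a_1)\bigr)^{n}}{n^{|a_1|}}\,
            S_{a_2,\dots,a_k}(n),
          \qquad S_{\varnothing}(N) = 1 .
        \end{equation}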

  15. High numerical aperture diffractive optical elements for neutral atom quantum computing

    NASA Astrophysics Data System (ADS)

    Young, A. L.; Kemme, S. A.; Wendt, J. R.; Carter, T. R.; Samora, S.

    2013-03-01

    The viability of neutral atom based quantum computers is dependent upon scalability to large numbers of qubits. Diffractive optical elements (DOEs) offer the possibility to scale up to many qubit systems by enabling the manipulation of light to collect signal or deliver a tailored spatial trapping pattern. DOEs have an advantage over refractive microoptics since they do not have measurable surface sag, making significantly larger numerical apertures (NA) accessible with a smaller optical component. The smaller physical size of a DOE allows the micro-lenses to be placed in vacuum with the atoms, reducing aberration effects that would otherwise be introduced by the cell walls of the vacuum chamber. The larger collection angle accessible with DOEs enable faster quantum computation speeds. We have designed a set of DOEs for collecting the 852 nm fluorescence from the D2 transition in trapped cesium atoms, and compare these DOEs to several commercially available refractive micro-lenses. The largest DOE is able to collect over 20% of the atom's radiating sphere whereas the refractive micro-optic is able to collect just 8% of the atom's radiating sphere.
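
    The quoted collection fractions can be related to numerical aperture through the solid-angle fraction of the radiating sphere. The following back-of-envelope check is my own estimate under an in-vacuum, aberration-free assumption, not a figure from the paper:

        % Fraction of the full radiating sphere collected by a lens of half-angle theta
        % (back-of-envelope estimate, not from the paper).
        \begin{equation}
          f = \frac{\Omega}{4\pi} = \frac{1-\cos\theta}{2},
          \qquad
          f = 0.20 \;\Rightarrow\; \cos\theta = 0.6,\;
          \theta \approx 53^\circ,\;
          \mathrm{NA} = \sin\theta \approx 0.8 .
        \end{equation}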

  16. COYOTE : a finite element computer program for nonlinear heat conduction problems. Part I, theoretical background.

    SciTech Connect

    Glass, Micheal W.; Hogan, Roy E., Jr.; Gartling, David K.

    2010-03-01

    The need for the engineering analysis of systems in which the transport of thermal energy occurs primarily through a conduction process is a common situation. For all but the simplest geometries and boundary conditions, analytic solutions to heat conduction problems are unavailable, thus forcing the analyst to call upon some type of approximate numerical procedure. A wide variety of numerical packages currently exist for such applications, ranging in sophistication from the large, general purpose, commercial codes, such as COMSOL, COSMOSWorks, ABAQUS and TSS to codes written by individuals for specific problem applications. The original purpose for developing the finite element code described here, COYOTE, was to bridge the gap between the complex commercial codes and the more simplistic, individual application programs. COYOTE was designed to treat most of the standard conduction problems of interest with a user-oriented input structure and format that was easily learned and remembered. Because of its architecture, the code has also proved useful for research in numerical algorithms and development of thermal analysis capabilities. This general philosophy has been retained in the current version of the program, COYOTE, Version 5.0, though the capabilities of the code have been significantly expanded. A major change in the code is its availability on parallel computer architectures and the increase in problem complexity and size that this implies. The present document describes the theoretical and numerical background for the COYOTE program. This volume is intended as a background document for the user's manual. Potential users of COYOTE are encouraged to become familiar with the present report and the simple example analyses reported in before using the program. The theoretical and numerical background for the finite element computer program, COYOTE, is presented in detail. COYOTE is designed for the multi-dimensional analysis of nonlinear heat conduction problems

  1. A new algorithm for computing primitive elements in GF(q^2)

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1978-01-01

    A new method is developed to find primitive elements in the Galois field of q^2 elements, GF(q^2), where q is a Mersenne prime. Such primitive elements are needed to implement transforms over GF(q^2).
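
    A minimal sketch of the kind of primitivity test involved is given below. It uses the fact that for a Mersenne prime q (so q = 3 mod 4, making -1 a quadratic non-residue) the field GF(q^2) can be represented as GF(q)[i] with i^2 = -1; the helper names and the small example prime are my own choices and do not reproduce the paper's algorithm:

        def prime_factors(n):
            """Trial-division prime factorization (adequate for the small example below)."""
            factors, p = set(), 2
            while p * p <= n:
                while n % p == 0:
                    factors.add(p)
                    n //= p
                p += 1
            if n > 1:
                factors.add(n)
            return factors

        def gf2_mul(x, y, q):
            """Multiply (a, b) and (c, d), representing a+bi and c+di in GF(q^2) = GF(q)[i], i^2 = -1."""
            a, b = x
            c, d = y
            return ((a * c - b * d) % q, (a * d + b * c) % q)

        def gf2_pow(x, n, q):
            """Square-and-multiply exponentiation in GF(q^2)."""
            result = (1, 0)
            while n:
                if n & 1:
                    result = gf2_mul(result, x, q)
                x = gf2_mul(x, x, q)
                n >>= 1
            return result

        def is_primitive(g, q):
            """g generates GF(q^2)* iff g^((q^2-1)/p) != 1 for every prime p dividing q^2 - 1."""
            order = q * q - 1
            return all(gf2_pow(g, order // p, q) != (1, 0) for p in prime_factors(order))

        # Example with the Mersenne prime q = 2^3 - 1 = 7: search for a primitive element.
        q = 7
        gen = next((a, b) for a in range(q) for b in range(q)
                   if (a, b) != (0, 0) and is_primitive((a, b), q))
        print(gen, "generates the multiplicative group of GF(%d^2)" % q)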

  18. Verification of a non-hydrostatic dynamical core using horizontally spectral element vertically finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-06-01

    The non-hydrostatic (NH) compressible Euler equations of dry atmosphere are solved in a simplified two dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative terms and quadrature. The Euler equations used here are in a flux form based on the hydrostatic pressure vertical coordinate, which are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate is implemented in this model. We verified the model by conducting widely used standard benchmark tests: the inertia-gravity wave, rising thermal bubble, density current wave, and linear hydrostatic mountain wave. The results from those tests demonstrate that the horizontally spectral element vertically finite difference model is accurate and robust. By using the 2-D slice model, we effectively show that the combined spatial discretization method of the spectral element and finite difference method in the horizontal and vertical directions, respectively, offers a viable method for the development of a NH dynamical core.
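
    The GLL quadrature points mentioned above are the endpoints of the reference interval together with the roots of the derivative of the degree-N Legendre polynomial. A small numpy sketch of how they can be computed follows; this is my own illustration, not code from the dynamical core:

        import numpy as np
        from numpy.polynomial.legendre import Legendre

        def gll_points(N):
            """Gauss-Lobatto-Legendre points for polynomial order N on [-1, 1].

            The N+1 GLL points are the endpoints +-1 together with the roots of
            P_N'(x), the derivative of the degree-N Legendre polynomial.
            """
            interior = Legendre.basis(N).deriv().roots()
            return np.concatenate(([-1.0], np.sort(np.real(interior)), [1.0]))

        print(gll_points(4))   # [-1, -sqrt(3/7), 0, +sqrt(3/7), +1] for N = 4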

  19. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.

  20. Computational issues and applications of line-elements to model subsurface flow governed by the modified Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Bakker, Mark; Kuhlman, Kristopher L.

    2011-09-01

    Two new approaches are presented for the accurate computation of the potential due to line elements that satisfy the modified Helmholtz equation with complex parameters. The first approach is based on fundamental solutions in elliptical coordinates and results in products of Mathieu functions. The second approach is based on the integration of modified Bessel functions. Both approaches allow evaluation of the potential at any distance from the element. The computational approaches are applied to model transient flow with the Laplace transform analytic element method. The Laplace domain solution is computed using a combination of point elements and the presented line elements. The time domain solution is obtained through a numerical inversion. Two applications are presented to transient flow fields, which could not be modeled with the Laplace transform analytic element method prior to this work. The first application concerns transient single-aquifer flow to wells near impermeable walls modeled with line-doublets. The second application concerns transient two-aquifer flow to a well near a stream modeled with line-sinks.
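
    To make the connection to transient groundwater flow concrete, a standard derivation (not reproduced from the paper, and with symbols of my own choosing) shows why the modified Helmholtz equation with a complex parameter arises: Laplace-transforming the confined-flow diffusion equation with zero initial drawdown gives

        % Standard derivation: the Laplace-domain head satisfies a modified
        % Helmholtz equation (symbols are illustrative, not the paper's notation).
        \begin{equation}
          S\,\frac{\partial h}{\partial t} = T\,\nabla^2 h
          \;\;\xrightarrow{\;\mathcal{L},\;\; h(\cdot,0)=0\;}\;\;
          \nabla^2 \bar h - \frac{pS}{T}\,\bar h = 0 ,
          \qquad
          \bar h(x,y;p) = \int_0^\infty h(x,y,t)\, e^{-pt}\, dt .
        \end{equation}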

  1. Computational optical palpation: micro-scale force mapping using finite-element methods (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wijesinghe, Philip; Sampson, David D.; Kennedy, Brendan F.

    2016-03-01

    Accurate quantification of forces, applied to, or generated by, tissue, is key to understanding many biomechanical processes, fabricating engineered tissues, and diagnosing diseases. Many techniques have been employed to measure forces; in particular, tactile imaging - developed to spatially map palpation-mimicking forces - has shown potential in improving the diagnosis of cancer on the macro-scale. However, tactile imaging often involves the use of discrete force sensors, such as capacitive or piezoelectric sensors, whose spatial resolution is often limited to 1-2 mm. Our group has previously presented a type of tactile imaging, termed optical palpation, in which the change in thickness of a compliant layer in contact with tissue is measured using optical coherence tomography, and surface forces are extracted, with a micro-scale spatial resolution, using a one-dimensional spring model. We have also recently combined optical palpation with compression optical coherence elastography (OCE) to quantify stiffness. A main limitation of this work, however, is that a one-dimensional spring model is insufficient for describing the deformation of mechanically heterogeneous tissue with uneven boundaries, generating significant inaccuracies in measured forces. Here, we present a computational, finite-element method, which we term computational optical palpation. In this technique, by knowing the non-linear mechanical properties of the layer, and from only the axial component of displacement measured by phase-sensitive OCE, we can estimate not only the axial forces, but also the three-dimensional traction forces at the layer-tissue interface. We use a non-linear, three-dimensional model of deformation, which greatly increases the ability to accurately measure force and stiffness in complex tissues.
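
    For context, a one-dimensional spring model of the kind referred to above typically takes the following schematic form; the symbols are mine, and the actual compliant layer in the paper is treated as non-linear rather than linearly elastic:

        % Schematic one-dimensional spring model for optical palpation: the local
        % axial stress follows from the measured change in layer thickness.
        \begin{equation}
          \sigma_z(x,y) \;\approx\; E_{\text{layer}}\, \frac{l_0 - l(x,y)}{l_0},
        \end{equation}
        % where l_0 is the unloaded layer thickness, l(x,y) the thickness measured
        % by OCT under load, and E_layer the (assumed linear) layer modulus.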

  2. Using Finite Volume Element Definitions to Compute the Gravitation of Irregular Small Bodies

    NASA Astrophysics Data System (ADS)

    Zhao, Y. H.; Hu, S. C.; Wang, S.; Ji, J. H.

    2015-03-01

    In the orbit design procedure of small-body exploration missions, it is important to take the effect of the gravitation of the small bodies into account. However, a majority of the small bodies in the solar system are irregularly shaped with non-uniform density distributions, which makes it difficult to precisely calculate the gravitation of these bodies. This paper proposes a method to model the gravitational field of an irregularly shaped small body and calculate the corresponding spherical harmonic coefficients. The method is based on the shape of the small body derived from light-curve observations, and uses finite volume elements to approximate the body shape. The spherical harmonic parameters can be derived numerically by computing the integrals according to their definition. A comparison with the polyhedral method is shown in our work. We take the asteroid (433) Eros as an example. Spherical harmonic coefficients resulting from this method are compared with the results derived from the tracking data obtained by the NEAR (Near-Earth Asteroid Rendezvous) spacecraft. The comparison shows that the error of C_{20} is less than 2%. The spherical harmonic coefficients of (1996) FG3, a selected target in our future exploration mission, are also computed. Taking (4179) Toutatis, the target body of Chang'e 2's flyby mission, as an example, the gravitational field is calculated in combination with the shape model from radar data, which provides a theoretical basis for analyzing the soil distribution and flow from the optical images obtained in the mission. This method can be applied to objects with non-uniform density distributions, and could be used to provide reliable gravity field data of small bodies for orbit design and landing in future exploration missions.
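
    The "integrals according to their definition" can be written, in one common unnormalized convention, as volume integrals over the body; this is the textbook form, not necessarily the exact normalization used by the authors:

        % One common (unnormalized) convention for the gravity coefficients of a
        % body of mass M and reference radius a (not necessarily the authors' form),
        % evaluated by summing each finite volume element with its own density.
        \begin{equation}
          \begin{Bmatrix} C_{nm} \\ S_{nm} \end{Bmatrix}
          = \frac{(2-\delta_{0m})}{M a^{n}}\,\frac{(n-m)!}{(n+m)!}
            \int_V \rho(\mathbf r)\, r^{n} P_{nm}(\sin\varphi)
            \begin{Bmatrix} \cos m\lambda \\ \sin m\lambda \end{Bmatrix} dV .
        \end{equation}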

  3. Computational identification of new structured cis-regulatory elements in the 3'-untranslated region of human protein coding genes.

    PubMed

    Chen, Xiaowei Sylvia; Brown, Chris M

    2012-10-01

    Messenger ribonucleic acids (RNAs) contain a large number of cis-regulatory RNA elements that function in many types of post-transcriptional regulation. These cis-regulatory elements are often characterized by conserved structures and/or sequences. Although some classes are well known, given the wide range of RNA-interacting proteins in eukaryotes, it is likely that many new classes of cis-regulatory elements are yet to be discovered. One approach to this is to use computational methods, which have the advantage of analysing genomic data, particularly comparative data, on a large scale. In this study, a set of structural discovery algorithms was applied, followed by support vector machine (SVM) classification. We trained a new classification model (CisRNA-SVM) on a set of known structured cis-regulatory elements from 3'-untranslated regions (UTRs) and successfully distinguished these, and groups of cis-regulatory elements it had not been trained on, from control genomic and shuffled sequences. The new method outperformed previous methods in the classification of cis-regulatory RNA elements. This model was then used to predict new elements from cross-species conserved regions of human 3'-UTRs. Clustering of these elements identified new classes of potential cis-regulatory elements. The model, training and testing sets, and novel human predictions are available at: http://mRNA.otago.ac.nz/CisRNA-SVM.
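
    The SVM classification step can be pictured with a minimal scikit-learn sketch. The feature matrix below is random placeholder data standing in for structural/sequence descriptors, so nothing here reproduces the CisRNA-SVM model itself:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Placeholder features: rows are candidate 3'-UTR regions, columns are
        # structural/sequence descriptors (purely synthetic stand-ins here).
        X_pos = rng.normal(loc=1.0, size=(100, 20))   # known cis-regulatory elements
        X_neg = rng.normal(loc=0.0, size=(100, 20))   # shuffled / control sequences
        X = np.vstack([X_pos, X_neg])
        y = np.array([1] * 100 + [0] * 100)

        # RBF-kernel SVM, a common choice for this kind of two-class problem.
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_val_score(clf, X, y, cv=5)
        print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))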

  4. Applications of the Space-Time Conservation Element and Solution Element (CE/SE) Method to Computational Aeroacoustic Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Himansu, Ananda; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The Internal Propagation problems, Fan Noise problem, and Turbomachinery Noise problems are solved using the space-time conservation element and solution element (CE/SE) method. The internal propagation problems address the propagation of sound waves through a nozzle. Both the nonlinear and linear quasi-1D Euler equations are solved. Numerical solutions are presented and compared with the analytical solution. The fan noise problem concerns the effect of the sweep angle on the acoustic field generated by the interaction of a convected gust with a cascade of 3D flat plates. A parallel version of the 3D CE/SE Euler solver is developed and employed to obtain numerical solutions for a family of swept flat plates. Numerical solutions for sweep angles of 0, 5, 10, and 15 deg are presented. The turbomachinery problems describe the interaction of a 2D vortical gust with a cascade of flat-plate airfoils with/without a downstream moving grid. The 2D nonlinear Euler equations are solved and the converged numerical solutions are presented and compared with the corresponding analytical solution. All the comparisons demonstrate that the CE/SE method is capable of solving aeroacoustic problems with/without shock waves in a simple and efficient manner. Furthermore, the simple non-reflecting boundary condition used in the CE/SE method, which is not based on characteristic theory, works very well in 1D, 2D and 3D problems.

  5. A hybrid computational approach for the interactions between river flow and porous sediment bed covered with large roughness elements

    NASA Astrophysics Data System (ADS)

    Liu, X.

    2013-12-01

    In many natural and human-impacted rivers, the porous sediment beds are either fully or partially covered by large roughness elements, such as gravels and boulders. The existence of these large roughness elements, which are in direct contact with the turbulent river flow, changes the dynamics of mass and momentum transfer across the river bed. It also impacts the overall hydraulics in the river channel and over time, indirectly influences the geomorphological evolution of the system. Ideally, one should resolve each of these large roughness elements in a computational fluid model. This approach is apparently not feasible due to the prohibitive computational cost. Considering a typical river bed with armoring, the distribution of sediment sizes usually shows significant vertical variations. Computationally, it poses great challenge to resolve all the size scales. Similar multiscale problem exists in the much broader porous media flow field. To cope with this, we propose a hybrid computational approach where the large surface roughness elements are resolved using immersed boundary method and sediment layers below (usually finer) are modeled by adding extra drag terms in momentum equations. Large roughness elements are digitized using a 3D laser scanner. They are put into the computational domain using the collision detection and rigid body dynamics algorithms which guarantees realistic and physically-correct spatial arrangement of the surface elements. Simulation examples have shown the effectiveness of the hybrid approach which captures the effect of the surface roughness on the turbulent flow as well as the hyporheic flow pattern in and out of the bed.

  6. [Numerical finite element modeling of custom car seat using computer aided design].

    PubMed

    Huang, Xuqi; Singare, Sekou

    2014-02-01

    A good cushion can not only provide the sitter with high comfort, but also control the distribution of hip pressure to reduce the incidence of diseases. The purpose of this study is to introduce a computer-aided design (CAD) modeling method for the buttocks-cushion system using numerical finite element (FE) simulation to predict the pressure distribution on the buttocks-cushion interface. The buttock and cushion model geometries were acquired from a laser scanner, and the CAD software was used to create the solid model. The FE model of a true seated individual was developed using ANSYS software (ANSYS Inc, Canonsburg, PA). The model is divided into two parts, i.e. the cushion model made of foam and the buttock model represented by the pelvis covered with a soft tissue layer. Loading simulations consisted of imposing a vertical force of 520 N on the pelvis, corresponding to the weight of the user's upper extremity, and then solving the system iteratively.

  7. A Hybrid FPGA/Tilera Compute Element for Autonomous Hazard Detection and Navigation

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Werner, Robert A.; Carson, John M., III; Khanoyan, Garen; Stern, Ryan A.; Trawny, Nikolas

    2013-01-01

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  8. CAST2D: A finite element computer code for casting process modeling

    SciTech Connect

    Shapiro, A.B.; Hallquist, J.O.

    1991-10-01

    CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).

  9. Inversion of potential field data using the finite element method on parallel computers

    NASA Astrophysics Data System (ADS)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension to the traditional discrete formulation the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
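
    Schematically, the cost function being minimized couples the data misfits, the regularization, and a cross-gradient term; the form below is my paraphrase of the standard cross-gradient formulation, not the paper's exact notation:

        % Schematic joint-inversion cost function with cross-gradient coupling
        % (standard form; symbols are illustrative, not the paper's notation).
        \begin{equation}
          J(m_1,m_2) = \sum_{k=1}^{2} \tfrac12\,\bigl\| F_k(m_k) - d_k \bigr\|^2
          + \sum_{k=1}^{2} \alpha_k \int_\Omega \lvert \nabla m_k \rvert^2\, dV
          + \beta \int_\Omega \bigl\lvert \nabla m_1 \times \nabla m_2 \bigr\rvert^2\, dV ,
        \end{equation}
        % where the F_k are the PDE-constrained forward operators for the potential
        % fields and the last term enforces structural similarity between the models.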

  10. Computational hydrodynamics of animal swimming: boundary element method and three-dimensional vortex wake structure.

    PubMed

    Cheng, J Y; Chahine, G L

    2001-12-01

    The slender body theory, lifting surface theories, and more recently panel methods and Navier-Stokes solvers have been used to study the hydrodynamics of fish swimming. This paper presents progress on swimming hydrodynamics using a boundary integral equation method (or boundary element method) based on potential flow model. The unsteady three-dimensional BEM code 3DynaFS that we developed and used is able to model realistic body geometries, arbitrary movements, and resulting wake evolution. Pressure distribution over the body surface, vorticity in the wake, and the velocity field around the body can be computed. The structure and dynamic behavior of the vortex wakes generated by the swimming body are responsible for the underlying fluid dynamic mechanisms to realize the high-efficiency propulsion and high-agility maneuvering. Three-dimensional vortex wake structures are not well known, although two-dimensional structures termed 'reverse Karman Vortex Street' have been observed and studied. In this paper, simulations about a swimming saithe (Pollachius virens) using our BEM code have demonstrated that undulatory swimming reduces three-dimensional effects due to substantially weakened tail tip vortex, resulting in a reverse Karman Vortex Street as the major flow pattern in the three-dimensional wake of an undulating swimming fish.

  11. Dust emission modelling around a stockpile by using computational fluid dynamics and discrete element method

    NASA Astrophysics Data System (ADS)

    Derakhshani, S. M.; Schott, D. L.; Lodewijks, G.

    2013-06-01

    Dust emissions can have significant effects on human health, the environment, and industrial equipment. Understanding the dust generation process helps to select a suitable dust prevention approach and is also useful for evaluating the environmental impact of dust emission. To describe these processes, numerical methods such as Computational Fluid Dynamics (CFD) are widely used; nowadays, however, particle-based methods like the Discrete Element Method (DEM) allow researchers to model the interaction between particles and fluid flow. In this study, air flow over a stockpile, dust emission, erosion, and surface deformation of granular material in the form of a stockpile are studied by using DEM and CFD as a coupled method. Two- and three-dimensional simulations are developed for the CFD and DEM methods, respectively, to minimize CPU time. The standard k-epsilon turbulence model is used in a fully developed turbulent flow. The continuous gas phase and the discrete particle phase are linked to each other through gas-particle void fractions and momentum transfer. In addition to stockpile deformation, dust dispersion is studied, and finally the accuracy of the stockpile deformation results obtained by CFD-DEM modelling is validated by the agreement with existing experimental data.

  12. A hybrid FPGA/Tilera compute element for autonomous hazard detection and navigation

    NASA Astrophysics Data System (ADS)

    Villalpando, C. Y.; Werner, R. A.; Carson, J. M.; Khanoyan, G.; Stern, R. A.; Trawny, N.

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  13. A computer simulation for evaluating the array performance of the 10-m(phi) 5-element super-synthesis telescope

    NASA Astrophysics Data System (ADS)

    Morita, K.-I.; Ishiguro, M.

    1980-03-01

    The array performance in several successive configurations was examined for the 10-m(phi) 5-element super-synthesis telescope. The number of (u, v) samples was used as a criterion of optimum (u, v) coverages. The optimum solution for a given declination was obtained by a random trial method. The performance was evaluated by computer simulation using model brightness distributions.

  14. Computational discovery of soybean promoter cis-regulatory elements for the construction of soybean cyst nematode inducible synthetic promoters

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computational methods offer great hope but limited accuracy in the prediction of functional cis-regulatory elements; improvements are needed to enable synthetic promoter design. We applied an ensemble strategy for de novo soybean cyst nematode (SCN)-inducible motif discovery among promoters of 18 co...

  15. The computation of ionization potentials for second-row elements by ab initio and density functional theory methods

    SciTech Connect

    Jursic, B.S.

    1996-12-31

    Up to four ionization potentials of elements from the second row of the periodic table were computed using ab initio (HF, MP2, MP3, MP4, QCISD, G1, G2, and G2MP2) and DFT (B3LYP, B3P86, B3PW91, XALPHA, HFS, HFB, BLYP, BP86, BPW91, BVWN, XALYP, XAP86, XAPW91, XAVWN, SLYP, SP86, SPW91, and SVWN) methods. In all of the calculations, the large 6-311++G(3df,3pd) Gaussian-type basis set was used. The computed values were compared with the experimental results, and the suitability of the ab initio and DFT methods for reproducing the experimental data is discussed. From the computed ionization potentials of the second-row elements, it can be concluded that HF ab initio computation is not capable of reproducing the experimental results; the computed ionization potentials are too low. However, by using ab initio methods that include electron correlation, the computed IPs become much closer to the experimental values. In all cases, with the exception of the first ionization potential for oxygen, the G2 computation produces ionization potentials that are indistinguishable from the experimental results.
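
    The ionization potentials compared here are the usual delta-energy differences between successive charge states; this is the standard definition rather than a detail taken from the abstract:

        % Standard delta-energy definition of the successive ionization potentials
        % (not a detail taken from the abstract); each total energy is computed at
        % the same level of theory with the same 6-311++G(3df,3pd) basis set.
        \begin{equation}
          \mathrm{IP}_n \;=\; E\!\left(X^{\,n+}\right) - E\!\left(X^{\,(n-1)+}\right),
          \qquad n = 1,\dots,4 .
        \end{equation}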

  16. Numerical computation of transonic flows by finite-element and finite-difference methods

    NASA Technical Reports Server (NTRS)

    Hafez, M. M.; Wellford, L. C.; Merkle, C. L.; Murman, E. M.

    1978-01-01

    Studies on applications of the finite element approach to transonic flow calculations are reported. Different discretization techniques of the differential equations and boundary conditions are compared. Finite element analogs of Murman's mixed type finite difference operators for small disturbance formulations were constructed and the time dependent approach (using finite differences in time and finite elements in space) was examined.

  17. Wing-Body Aeroelasticity Using Finite-Difference Fluid/Finite-Element Structural Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1993-01-01

    This paper presents a procedure for computing the aeroelasticity of wing-body configurations on multiple-instruction, multiple-data (MIMD) parallel computers. In this procedure, fluids are modeled using Euler equations discretized by a finite difference method, and structures are modeled using finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. A parallel integration scheme is used to compute aeroelastic responses by solving the coupled fluid and structural equations concurrently while keeping modularity of each discipline. The present procedure is validated by computing the aeroelastic response of a wing and comparing with experiment. Aeroelastic computations are illustrated for a High Speed Civil Transport type wing-body configuration.

  18. Finite element computation of a viscous compressible free shear flow governed by the time dependent Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.; Blanchard, D. K.

    1975-01-01

    A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.

  19. Positive and Negative Aspects of the IWB and Tablet Computers in the First Grade of Primary School: A Multiple-Perspective Approach

    ERIC Educational Resources Information Center

    Fekonja-Peklaj, Urška; Marjanovic-Umek, Ljubica

    2015-01-01

    The aim of this qualitative study was to evaluate the positive and negative aspects of the interactive whiteboard (IWB) and tablet computers use in the first grade of primary school from the perspectives of three groups of evaluators, namely the teachers, the pupils and an independent observer. The sample included three first grade classes with…

  20. Some aspects of adapting computational mesh to complex flow domains and structures with application to blown shock layer and base flow

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.; Lombard, M. P.; Menees, G. P.; Yang, J. Y.

    1980-01-01

    Several aspects connected with the notion of computation with flow oriented mesh systems are presented. Simple, effective approaches to the ideas discussed are demonstrated in current applications to blown forebody shock layer flow and full bluff body shock layer flow including the massively separated wake region.

  1. The computational structural mechanics testbed generic structural-element processor manual

    NASA Technical Reports Server (NTRS)

    Stanley, Gary M.; Nour-Omid, Shahram

    1990-01-01

    The usage and development of structural finite element processors based on the CSM Testbed's Generic Element Processor (GEP) template is documented. By convention, such processors have names of the form ESi, where i is an integer. This manual is therefore intended for both Testbed users who wish to invoke ES processors during the course of a structural analysis, and Testbed developers who wish to construct new element processors (or modify existing ones).

  2. A new submodelling technique for multi-scale finite element computation of electromagnetic fields: Application in bioelectromagnetism

    NASA Astrophysics Data System (ADS)

    Aristovich, K. Y.; Khan, S. H.

    2010-07-01

    Complex multi-scale Finite Element (FE) analyses always involve a high number of elements and therefore require very long computation times. This is caused by the fact that effects considered on smaller scales have greater influence on the whole model and the larger scales. Thus, the mesh density should be as high as required by the smallest scale factor. A new submodelling routine has been developed to substantially decrease the time of computation without loss of accuracy for the whole solution. The presented approach allows manipulation of different mesh sizes on different scales and, therefore, total optimization of mesh density on each scale, and transfers results automatically between the meshes corresponding to the respective scales of the whole model. Unlike the classical submodelling routine, the new technique operates not only with the transfer of boundary conditions but also with volume results and the transfer of forces (current density load in the case of electromagnetism), which allows the solution of the full Maxwell's equations in FE space. The approach was successfully implemented for the electromagnetic solution in the forward problem of Magnetic Field Tomography (MFT) based on Magnetoencephalography (MEG), where the scale of one neuron was considered as the smallest and the scale of the whole-brain model as the largest. The time of computation was reduced about 100 times, compared with the initial requirement of 10 million elements for direct computation without the submodelling routine.

  3. Design of a massively parallel computer using bit serial processing elements

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing

    1995-01-01

    A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.

  4. Evaluating micas in petrologic and metallogenic aspect: I-definitions and structure of the computer program MICA +

    NASA Astrophysics Data System (ADS)

    Yavuz, Fuat

    2003-12-01

    Micas are significant ferromagnesian minerals in felsic to mafic igneous, metamorphic, and hydrothermal rocks. Because of their considerable potential to reveal the physicochemical conditions of magmas in petrologic and metallogenic terms, mica chemistry is used extensively in the earth sciences. For example, the composition of phlogopite and biotite can be used to evaluate the intensive thermodynamic parameters of temperature (T, degrees C), oxygen fugacity (fO2), and water fugacity (fH2O) of magmatic rocks. The halogen contents of micas permit the estimation of the fluorine and chlorine fugacities that may be used in understanding metal transportation and deposition processes in hydrothermal ore deposits. The Mica+ computer program has been written to edit and store electron-microprobe or wet-chemical mica analyses. This software calculates structural formulae and shares out the calculated anions into the I, M, T, and A sites. Mica+ classifies micas in terms of composition and octahedral site occupancy. It also calculates the intensive parameters fO2, T, and fH2O from the composition of biotite in equilibrium with K-feldspar and magnetite. Using the calculated F-OH and Cl-OH exchange systematics and various log ratios (fH2O/fHF, fH2O/fHCl, fHCl/fHF, XCl/XOH, XF/XOH, XF/XCl) of mica analyses, Mica+ gives valuable information about the characteristics of the hydrothermal fluids associated with alteration and mineralization processes. The program output is generally in the form of screen outputs; however, by using the "Grf" files that come with this program, results can be visualized under the Grapher software as both binary and ternary diagrams. Mica analyses submitted to the Mica+ program are calculated on the basis of 22 + z positive charges, taking into account the procedure of the Commission on New Mineral Names Mica Subcommittee (1998).
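
    The structural-formula step described above, normalizing a microprobe analysis to a fixed positive-charge basis, can be sketched as follows; the oxide list, molar masses, charge basis of 22 (anhydrous mica), and the example analysis are illustrative assumptions rather than values taken from the MICA+ program:

        # Normalize an electron-microprobe mica analysis (oxide wt%) to a fixed
        # positive-charge basis. Illustrative sketch only; not MICA+ itself.
        OXIDES = {
            #  oxide : (molar mass g/mol, cations per oxide, cation charge)
            "SiO2":  (60.08, 1, 4),
            "TiO2":  (79.87, 1, 4),
            "Al2O3": (101.96, 2, 3),
            "FeO":   (71.85, 1, 2),
            "MgO":   (40.30, 1, 2),
            "MnO":   (70.94, 1, 2),
            "Na2O":  (61.98, 2, 1),
            "K2O":   (94.20, 2, 1),
        }

        def structural_formula(wt_percent, charge_basis=22.0):
            """Return cations per formula unit on a fixed positive-charge basis."""
            moles_cat = {ox: wt / OXIDES[ox][0] * OXIDES[ox][1]
                         for ox, wt in wt_percent.items()}
            total_charge = sum(moles_cat[ox] * OXIDES[ox][2] for ox in moles_cat)
            scale = charge_basis / total_charge
            return {ox: n * scale for ox, n in moles_cat.items()}

        # Example biotite-like analysis (illustrative numbers only).
        analysis = {"SiO2": 36.5, "TiO2": 2.5, "Al2O3": 16.0, "FeO": 18.0,
                    "MgO": 12.0, "MnO": 0.3, "Na2O": 0.2, "K2O": 9.5}
        apfu = structural_formula(analysis)
        print({ox: round(n, 3) for ox, n in apfu.items()})
        print("Fe/(Fe+Mg) =", round(apfu["FeO"] / (apfu["FeO"] + apfu["MgO"]), 3))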

  5. TORO II: A finite element computer program for nonlinear quasi-static problems in electromagnetics: Part 1, Theoretical background

    SciTech Connect

    Gartling, D.K.

    1996-05-01

    The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.

  6. Inductively coupled plasma-atomic emission spectroscopy: a computer controlled, scanning monochromator system for the rapid determination of the elements

    SciTech Connect

    Floyd, M.A.

    1980-03-01

    A computer controlled, scanning monochromator system specifically designed for the rapid, sequential determination of the elements is described. The monochromator is combined with an inductively coupled plasma excitation source so that elements at major, minor, trace, and ultratrace levels may be determined, in sequence, without changing experimental parameters other than the spectral line observed. A number of distinctive features not found in previously described versions are incorporated into the system here described. Performance characteristics of the entire system and several analytical applications are discussed.

  7. Research on Quantum Authentication Methods for the Secure Access Control Among Three Elements of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Dong, Yumin; Xiao, Shufen; Ma, Hongyang; Chen, Libo

    2016-12-01

    Cloud computing and big data have become the driving engine of current information technology (IT) as a result of its rapid development. However, security protection has become increasingly important for cloud computing and big data, and has become a problem that must be solved in order to develop cloud computing further. The theft of identity authentication information remains a serious threat to the security of cloud computing. In this process, attackers intrude into cloud computing services through identity authentication information, thereby threatening the security of data from multiple perspectives. Therefore, this study proposes a model for cloud computing protection and management based on quantum authentication, introduces the principle of quantum authentication, and deduces the quantum authentication process. In theory, quantum authentication technology can be applied in cloud computing for security protection. This technology cannot be cloned; thus, it is more secure and reliable than classical methods.

  8. Technical Profile of Seven Data Element Dictionary/Directory Systems. Computer Science & Technology Series.

    ERIC Educational Resources Information Center

    Leong-Hong, Belkis; Marron, Beatrice

    A Data Element Dictionary/Directory (DED/D) is a software tool that is used to control and manage data elements in a uniform manner. It can serve data base administrators, systems analysts, software designers, and programmers by providing a central repository for information about data resources across organization and application lines. This…

  9. Virtual garden computer program for use in exploring the elements of biodiversity people want in cities.

    PubMed

    Shwartz, Assaf; Cheval, Helene; Simon, Laurent; Julliard, Romain

    2013-08-01

    Urban ecology is emerging as an integrative science that explores the interactions of people and biodiversity in cities. Interdisciplinary research requires the creation of new tools that allow the investigation of relations between people and biodiversity. It has been established that access to green spaces or nature benefits city dwellers, but the role of species diversity in providing psychological benefits remains poorly studied. We developed a user-friendly 3-dimensional computer program (Virtual Garden [www.tinyurl.com/3DVirtualGarden]) that allows people to design their own public or private green spaces with 95 biotic and abiotic features. Virtual Garden allows researchers to explore what elements of biodiversity people would like to have in their nearby green spaces while accounting for other functions that people value in urban green spaces. In 2011, 732 participants used our Virtual Garden program to design their ideal small public garden. On average gardens contained 5 different animals, 8 flowers, and 5 woody plant species. Although the mathematical distribution of flower and woody plant richness (i.e., number of species per garden) appeared to be similar to what would be expected by random selection of features, 30% of participants did not place any animal species in their gardens. Among those who placed animals in their gardens, 94% selected colorful species (e.g., ladybug [Coccinella septempunctata], Great Tit [Parus major], and goldfish), 53% selected herptiles or large mammals, and 67% selected non-native species. Older participants with a higher level of education and participants with a greater concern for nature designed gardens with relatively higher species richness and more native species. If cities are to be planned for the mutual benefit of people and biodiversity and to provide people meaningful experiences with urban nature, it is important to investigate people's relations with biodiversity further. Virtual Garden offers a standardized

  10. A new material mapping procedure for quantitative computed tomography-based, continuum finite element analyses of the vertebra.

    PubMed

    Unnikrishnan, Ginu U; Morgan, Elise F

    2011-07-01

    Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of vertebroplasty and

  11. COMGEN: A computer program for generating finite element models of composite materials at the micro level

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.

    1990-01-01

    COMGEN (Composite Model Generator) is an interactive FORTRAN program which can be used to create a wide variety of finite element models of continuous fiber composite materials at the micro level. It quickly generates batch or session files to be submitted to the finite element pre- and postprocessor PATRAN based on a few simple user inputs such as fiber diameter and percent fiber volume fraction of the composite to be analyzed. In addition, various mesh densities, boundary conditions, and loads can be assigned easily to the models within COMGEN. PATRAN uses a session file to generate finite element models and their associated loads which can then be translated to virtually any finite element analysis code such as NASTRAN or MARC.
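    The two geometric inputs mentioned above (fiber diameter and fiber volume fraction) fix the unit-cell size of such a micromechanics model. The sketch below shows that relationship for an assumed square fiber array; the square-packing assumption and the function name are ours, not COMGEN's.

```python
import math

def square_cell_size(fiber_diameter, fiber_volume_fraction):
    """Side length of a square unit cell containing one fiber,
    assuming a square packing array: Vf = pi * d**2 / (4 * s**2)."""
    if not 0.0 < fiber_volume_fraction < math.pi / 4.0:
        raise ValueError("volume fraction must lie in (0, pi/4) for square packing")
    return fiber_diameter * math.sqrt(math.pi / (4.0 * fiber_volume_fraction))

# Example: 8-micron fiber at 60% fiber volume fraction (illustrative values)
d, vf = 8.0e-6, 0.60
s = square_cell_size(d, vf)
print(f"unit cell size: {s * 1e6:.2f} um, inter-fiber gap: {(s - d) * 1e6:.2f} um")
```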

  12. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.

  13. CUERVO: A finite element computer program for nonlinear scalar transport problems

    SciTech Connect

    Sirman, M.B.; Gartling, D.K.

    1995-11-01

    CUERVO is a finite element code that is designed for the solution of multi-dimensional field problems described by a general nonlinear, advection-diffusion equation. The code is also applicable to field problems described by diffusion, Poisson or Laplace equations. The finite element formulation and the associated numerical methods used in CUERVO are outlined here; detailed instructions for use of the code are also presented. Example problems are provided to illustrate the use of the code.

  14. On Computing the Pressure by the p Version of the Finite Element Method for Stokes Problem

    DTIC Science & Technology

    1990-02-15

    approximation of saddle-point problems arising from Lagrangian multipliers. RAIRO, 8:129-151, 1974. [9] M. Dauge. Stationary Stokes and Navier-Stokes systems... Jensen and M. Vogelius. Divergence stability in connection with the p version of the finite element method. RAIRO, Modelisation Math. Anal. Numer., 1990... element method for elliptic problems of order 2l. RAIRO, Modelisation Math. Anal. Numer., 24:107-146, 1990. [26] M. Suri. On the stability and convergence

  15. PREFACE: First International Congress of the International Association of Inverse Problems (IPIA): Applied Inverse Problems 2007: Theoretical and Computational Aspects

    NASA Astrophysics Data System (ADS)

    Uhlmann, Gunther

    2008-07-01

    This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA) which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology

  16. Implementation of a blade element UH-60 helicopter simulation on a parallel computer architecture in real-time

    NASA Technical Reports Server (NTRS)

    Moxon, Bruce C.; Green, John A.

    1990-01-01

    A high-performance platform for development of real-time helicopter flight simulations based on a simulation development and analysis platform combining a parallel simulation development and analysis environment with a scalable multiprocessor computer system is described. Simulation functional decomposition is covered, including the sequencing and data dependency of simulation modules and simulation functional mapping to multiple processors. The multiprocessor-based implementation of a blade-element simulation of the UH-60 helicopter is presented, and a prototype developed for a TC2000 computer is generalized in order to arrive at a portable multiprocessor software architecture. It is pointed out that the proposed approach coupled with a pilot's station creates a setting in which simulation engineers, computer scientists, and pilots can work together in the design and evaluation of advanced real-time helicopter simulations.

  17. Parallel Object-Oriented Computation Applied to a Finite Element Problem

    NASA Technical Reports Server (NTRS)

    Weissman, Jon B.; Grimshaw, Andrew S.; Ferraro, Robert

    1993-01-01

    The conventional wisdom in the scientific computing community is that the best way to solve large-scale numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style using low-level message-passing primitives. This approach inevitably leads to nonportable codes, extensive development time, and restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the tools used. We will show that comparable performance can be achieved with little effort if better tools that present higher level abstractions are used.

  18. Isoparametric 3-D Finite Element Mesh Generation Using Interactive Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kayrak, C.; Ozsoy, T.

    1985-01-01

    An isoparametric 3-D finite element mesh generator was developed with a direct interface to an interactive geometric modeler program called POLYGON. POLYGON defines the model geometry in terms of boundaries and mesh regions for the mesh generator. The mesh generator controls the mesh flow through the 2-dimensional spans of regions by using the topological data and defines the connectivity between regions. The program is menu driven; the user has control of element density and biasing through the spans and can also apply boundary conditions and loads interactively.

  19. Analytical model and finite element computation of braking torque in electromagnetic retarder

    NASA Astrophysics Data System (ADS)

    Ye, Lezhi; Yang, Guangzhao; Li, Desheng

    2014-12-01

    An analytical model has been developed for analyzing the braking torque in electromagnetic retarder by flux tube and armature reaction method. The magnetic field distribution in air gap, the eddy current induced in the rotor and the braking torque are calculated by the developed model. Two-dimensional and three-dimensional finite element models for retarder have also been developed. Results from the analytical model are compared with those from finite element models. The validity of these three models is checked by the comparison of the theoretical predictions and the measurements from an experimental prototype. The influencing factors of braking torque have been studied.

  20. Microscopy and elemental analysis in tissue samples using computed microtomography with synchrotron x-rays

    SciTech Connect

    Spanne, P.; Rivers, M.L.

    1988-01-01

    The initial development shows that CMT using synchrotron x-rays can be developed to μm spatial resolution and perhaps even better. This creates a new microscopy technique which is of special interest in morphological studies of tissues, since no chemical preparation or slicing of the sample is necessary. The combination of CMT with spatial resolution in the μm range and elemental mapping with sensitivity in the ppm range results in a new tool for elemental mapping at the cellular level. 7 refs., 1 fig.

  1. Elements of Mathematics, Book O: Intuitive Background. Chapter 16, Introduction to Computer Programming.

    ERIC Educational Resources Information Center

    Exner, Robert; And Others

    The sixteen chapters of this book provide the core material for the Elements of Mathematics Program, a secondary sequence developed for highly motivated students with strong verbal abilities. The sequence is based on a functional-relational approach to mathematics teaching, and emphasizes teaching by analysis of real-life situations. This text is…

  2. TEnest 2.0: Computational annotation and visualization of nested transposable elements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Grass genomes are highly repetitive; for example, Oryza sativa (rice) contains 35% repeat sequences, Zea mays (maize) comprises 75%, and Triticum aestivum (wheat) includes approximately 80%. Most of these repeats occur as abundant transposable elements (TE), which present unique challenges to sequen...

  3. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    EPA Science Inventory

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  4. MPSalsa Version 1.5: A Finite Element Computer Program for Reacting Flow Problems: Part 1 - Theoretical Development

    SciTech Connect

    Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.; Moffat, H.K.; Salinger, A.G.; Schmidt, R.C.; Shadid, J.N.; Smith, T.M.

    1999-01-01

    The theoretical background for the finite element computer program, MPSalsa Version 1.5, is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.

  5. Dynamic Load Balancing for Finite Element Calculations on Parallel Computers. Chapter 1

    NASA Technical Reports Server (NTRS)

    Pramono, Eddy; Simon, Horst D.; Sohn, Andrew; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance among processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework for dynamic load balancing for CFD applications, called Jove, is presented. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove after a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide whether the new distribution should be adopted, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine SP2.

  6. Computational efficiency of numerical approximations of tangent moduli for finite element implementation of a fiber-reinforced hyperelastic material model.

    PubMed

    Liu, Haofei; Sun, Wei

    2016-01-01

    In this study, we evaluated computational efficiency of finite element (FE) simulations when a numerical approximation method was used to obtain the tangent moduli. A fiber-reinforced hyperelastic material model for nearly incompressible soft tissues was implemented for 3D solid elements using both the approximation method and the closed-form analytical method, and validated by comparing the components of the tangent modulus tensor (also referred to as the material Jacobian) between the two methods. The computational efficiency of the approximation method was evaluated with different perturbation parameters and approximation schemes, and quantified by the number of iteration steps and CPU time required to complete these simulations. From the simulation results, it can be seen that the overall accuracy of the approximation method is improved by adopting the central difference approximation scheme compared to the forward Euler approximation scheme. For small-scale simulations with about 10,000 DOFs, the approximation schemes could reduce the CPU time substantially compared to the closed-form solution, due to the fact that fewer calculation steps are needed at each integration point. However, for a large-scale simulation with about 300,000 DOFs, the advantages of the approximation schemes diminish because the factorization of the stiffness matrix will dominate the solution time. Overall, as it is material model independent, the approximation method simplifies the FE implementation of a complex constitutive model with comparable accuracy and computational efficiency to the closed-form solution, which makes it attractive in FE simulations with complex material models.
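    The perturbation idea behind the approximated tangent modulus can be sketched in a few lines. The one-dimensional stress law below is a toy stand-in, not the fiber-reinforced constitutive model of the study; it only illustrates the forward and central difference schemes compared in the abstract.

```python
# Sketch of the perturbation idea: approximate the tangent modulus
# d(sigma)/d(strain) by finite differences around a working strain state.
# The toy uniaxial stress law here is an assumption for illustration only.

def stress(strain, mu=1.0):
    stretch = 1.0 + strain
    return mu * (stretch**2 - 1.0 / stretch)   # toy incompressible uniaxial law

def tangent_forward(strain, eps=1e-6):
    return (stress(strain + eps) - stress(strain)) / eps

def tangent_central(strain, eps=1e-6):
    return (stress(strain + eps) - stress(strain - eps)) / (2.0 * eps)

def tangent_exact(strain, mu=1.0):
    stretch = 1.0 + strain
    return mu * (2.0 * stretch + 1.0 / stretch**2)

e = 0.10
print("forward :", tangent_forward(e))
print("central :", tangent_central(e))
print("exact   :", tangent_exact(e))
```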

  7. Computation of Dancoff Factors for Fuel Elements Incorporating Randomly Packed TRISO Particles

    SciTech Connect

    J. L. Kloosterman; Abderrafi M. Ougouag

    2005-01-01

    A new method for estimating the Dancoff factors in pebble beds has been developed and implemented within two computer codes. The first of these codes, INTRAPEB, is used to compute Dancoff factors for individual pebbles, taking into account the random packing of TRISO particles within the fuel zone of the pebble and explicitly accounting for the finite geometry of the fuel kernels. The second code, PEBDAN, is used to compute the pebble-to-pebble contribution to the overall Dancoff factor. The latter code also accounts for the finite size of the reactor vessel and for the proximity of reflectors, as well as for fluctuations in the pebble packing density that naturally arise in pebble beds.

  8. Three Aspects of PLATO Use at Chanute AFB: CBE Production Techniques, Computer-Aided Management, Formative Development of CBE Lessons.

    ERIC Educational Resources Information Center

    Klecka, Joseph A.

    This report describes various aspects of lesson production and use of the PLATO system at Chanute Air Force Base. The first chapter considers four major factors influencing lesson production: (1) implementation of the "lean approach," (2) the Instructional Systems Development (ISD) role in lesson production, (3) the transfer of…

  9. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    NASA Astrophysics Data System (ADS)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to consider the versatility and ease of use of such documentation tools for studying architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, an Image Based Modelling (IBM) system available free of charge. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: at first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunshine, accessibility, surface degradation, and type of material, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.

  10. Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography

    SciTech Connect

    Fermi Research Alliance; Northern Illinois University

    2015-07-15

    Proton computed tomography (pCT) offers an alternative to x-ray imaging with potential for three dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second generation proton computed tomography system with the goal of demonstrating the feasibility of three dimensional imaging within clinically realistic imaging times. The second generation pCT system is comprised of a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. Figure 1 shows the schematic layout of the pCT system. The data acquisition sends the proton scattering information to an offline computing farm. Major innovations of the second generation pCT project involve an increased data acquisition rate (MHz range) and the development of three dimensional imaging algorithms. The Fermilab Particle Physics Division and the Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics, and detector mounting system.

  11. Three-Dimensional Effects on Multi-Element High Lift Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Watson, Ralph D.

    2002-01-01

    In an effort to discover the causes for disagreement between previous 2-D computations and nominally 2-D experiment for flow over the 3-element McDonnell Douglas 30P-30N airfoil configuration at high lift, a combined experimental/CFD investigation is described. The experiment explores several different side-wall boundary layer control venting patterns, documents venting mass flow rates, and looks at corner surface flow patterns. The experimental angle of attack at maximum lift is found to be sensitive to the side wall venting pattern: a particular pattern increases the angle of attack at maximum lift by at least 2 deg. A significant amount of spanwise pressure variation is present at angles of attack near maximum lift. A CFD study using 3-D structured-grid computations, which includes the modeling of side-wall venting, is employed to investigate 3-D effects of the flow. Side-wall suction strength is found to affect the angle at which maximum lift is predicted. Maximum lift in the CFD is shown to be limited by the growth of an off-body corner flow vortex and a consequent increase in spanwise pressure variation and decrease in circulation. The 3-D computations with and without wall venting predict trends similar to experiment at low angles of attack, but either stall too early or else overpredict lift levels near maximum lift by as much as 5%. Unstructured-grid computations demonstrate that the mounting brackets lower the lift levels near maximum lift conditions.

  12. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    SciTech Connect

    Candel, A; Kabel, A.; Lee, L.; Li, Z.; Ng, C.; Schussman, G.; Ko, K.; Syratchev, I.; /CERN

    2009-06-19

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  13. Computations and generation of elements on the Hopf algebra of Feynman graphs

    NASA Astrophysics Data System (ADS)

    Borinsky, Michael

    2015-05-01

    Two programs, feyngen and feyncop, were developed. feyngen is designed to generate high loop order Feynman graphs for Yang-Mills, QED and ϕ^k theories. feyncop can compute the coproduct of these graphs on the underlying Hopf algebra of Feynman graphs. The programs can be validated by exploiting zero dimensional field theory combinatorics and identities on the Hopf algebra which follow from the renormalizability of the theories. A benchmark for both programs was made.

  14. Books and monographs on finite element technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1985-01-01

    The present paper provides a listing of all of the English books and some of the foreign books on finite element technology, taking into account also a list of the conference proceedings devoted solely to finite elements. The references are divided into categories. Attention is given to fundamentals, mathematical foundations, structural and solid mechanics applications, fluid mechanics applications, other applied science and engineering applications, computer implementation and software systems, computational and modeling aspects, special topics, boundary element methods, proceedings of symposia and conferences on finite element technology, bibliographies, handbooks, and historical accounts.

  15. SMPBS: Web server for computing biomolecular electrostatics using finite element solvers of size modified Poisson-Boltzmann equation.

    PubMed

    Xie, Yang; Ying, Jinyong; Xie, Dexuan

    2017-03-30

    SMPBS (Size Modified Poisson-Boltzmann Solvers) is a web server for computing biomolecular electrostatics using finite element solvers of the size modified Poisson-Boltzmann equation (SMPBE). SMPBE not only reflects ionic size effects but also includes the classic Poisson-Boltzmann equation (PBE) as a special case. Thus, its web server is expected to have a broader range of applications than a PBE web server. SMPBS is designed with a dynamic, mobile-friendly user interface, and features easily accessible help text, asynchronous data submission, and an interactive, hardware-accelerated molecular visualization viewer based on the 3Dmol.js library. In particular, the viewer allows computed electrostatics to be directly mapped onto an irregular triangular mesh of a molecular surface. Due to this functionality and the fast SMPBE finite element solvers, the web server is very efficient in the calculation and visualization of electrostatics. In addition, SMPBE is reconstructed using a new objective electrostatic free energy, clearly showing that the electrostatics and ionic concentrations predicted by SMPBE are optimal in the sense of minimizing the objective electrostatic free energy. SMPBS is available at the URL: smpbs.math.uwm.edu © 2017 Wiley Periodicals, Inc.

  16. A Frequency Count of Music Elements in Bahian Folk Songs Using Computer and Hand Analysis: Suggestions for Applications in Music Education.

    ERIC Educational Resources Information Center

    Oliveira, Alda De Jesus

    1997-01-01

    Explores the frequency of selected musical elements in a sample of folk songs from Bahia, Brazil, using a computer program and manual analysis. Demonstrates that the contents of each beat are composed of simple rhythmic elements, melodic ranges are within an octave, and most formal structures of the songs consist of four phrases. (CMK)

  17. NASTRAN data generation of helicopter fuselages using interactive graphics. [preprocessor system for finite element analysis using IBM computer

    NASA Technical Reports Server (NTRS)

    Sainsbury-Carter, J. B.; Conaway, J. H.

    1973-01-01

    The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real time interaction plus automatic data generation reduces the nominal 6 to 10 week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard copy plots of the model labeled with GRID or element IDs are also available. General purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.

  18. An Objective Evaluation of Mass Scaling Techniques Utilizing Computational Human Body Finite Element Models.

    PubMed

    Davis, Matthew L; Scott Gayzik, F

    2016-10-01

    Biofidelity response corridors developed from post-mortem human subjects are commonly used in the design and validation of anthropomorphic test devices and computational human body models (HBMs). Typically, corridors are derived from a diverse pool of biomechanical data and later normalized to a target body habitus. The objective of this study was to use morphed computational HBMs to compare the ability of various scaling techniques to scale response data from a reference to a target anthropometry. HBMs are ideally suited for this type of study since they uphold the assumptions of equal density and modulus that are implicit in scaling method development. In total, six scaling procedures were evaluated, four from the literature (equal-stress equal-velocity, ESEV, and three variations of impulse momentum) and two which are introduced in the paper (ESEV using a ratio of effective masses, ESEV-EffMass, and a kinetic energy approach). In total, 24 simulations were performed, representing both pendulum and full body impacts for three representative HBMs. These simulations were quantitatively compared using the International Organization for Standardization (ISO) ISO-TS18571 standard. Based on these results, ESEV-EffMass achieved the highest overall similarity score (indicating that it is most proficient at scaling a reference response to a target). Additionally, ESEV was found to perform poorly for two degree-of-freedom (DOF) systems. However, the results also indicated that no single technique was clearly the most appropriate for all scenarios.
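    For orientation, the equal-stress equal-velocity rule that two of the evaluated methods build on reduces to simple mass-ratio power laws under the equal-density, equal-modulus assumptions mentioned in the abstract. The sketch below states those commonly used factors; the notation and example masses are ours, not values from the study.

```python
# Equal-stress equal-velocity (ESEV) scaling factors, assuming equal density
# and modulus between reference and target subjects. Illustrative only.

def esev_factors(mass_ref, mass_target):
    lam = (mass_target / mass_ref) ** (1.0 / 3.0)   # geometric (length) scale factor
    return {
        "length/deflection": lam,
        "time": lam,
        "force": lam ** 2,
        "acceleration": 1.0 / lam,
        "velocity": 1.0,            # equal velocity by construction
    }

# Example: scale a 78 kg reference response to a 55 kg target anthropometry
for quantity, factor in esev_factors(78.0, 55.0).items():
    print(f"{quantity:18s} x {factor:.3f}")
```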

  19. Computer literacy and attitudes among students in 16 European dental schools: current aspects, regional differences and future trends.

    PubMed

    Mattheos, N; Nattestad, A; Schittek, M; Attström, R

    2002-02-01

    A questionnaire survey was carried out to investigate the competence and attitude of dental students towards computers. The current study presents the findings deriving from 590 questionnaires collected from 16 European dental schools from 9 countries between October 1998 and October 1999. The results suggest that 60% of students use computers for their education, while 72% have access to the Internet. The overall figures, however, disguise major differences between the various universities. Students in Northern and Western Europe seem to rely mostly on university facilities to access the Internet. The same however, is not true for students in Greece and Spain, who appear to depend on home computers. Less than half the students have been exposed to some form of computer literacy education in their universities, with the great majority acquiring their competence in other ways. The Information and Communication Technology (ICT) skills of the average dental student, within this limited sample of dental schools, do not facilitate full use of new media available. In addition, if the observed regional differences are valid, there may be an educational and political problem that could intensify inequalities among professionals in the future. To minimize this potential problem, closer cooperation between academic institutions, with sharing of resources and expertise, is recommended.

  20. Human factors in the presentation of computer-generated information - Aspects of design and application in automated flight traffic

    NASA Technical Reports Server (NTRS)

    Roske-Hofstrand, Renate J.

    1990-01-01

    The man-machine interface and its influence on the characteristics of computer displays in automated air traffic is discussed. The graphical presentation of spatial relationships and the problems it poses for air traffic control, and the solution of such problems are addressed. Psychological factors involved in the man-machine interface are stressed.

  1. Thinking Together: Exploring Aspects of Shared Thinking between Young Children during a Computer-Based Literacy Task

    ERIC Educational Resources Information Center

    Wild, Mary

    2011-01-01

    This study considers in what ways sustained shared thinking between young children aged 5-6 years can be facilitated by working in dyads on a computer-based literacy task. The study considers 107 observational records of 44 children from 6 different schools, in Oxfordshire in the UK, collected over the course of a school year. The study raises…

  2. Power series expansion of the roots of a secular equation containing symbolic elements: Computer algebra and Moseley's law

    NASA Astrophysics Data System (ADS)

    Barnett, Michael P.; Decker, Thomas; Krandick, Werner

    2001-06-01

    We use computer algebra to expand the Pekeris secular determinant for two-electron atoms symbolically, to produce an explicit polynomial in the energy parameter ɛ, with coefficients that are polynomials in the nuclear charge Z. Repeated differentiation of the polynomial, followed by a simple transformation, gives a series for ɛ in decreasing powers of Z. The leading term is linear, consistent with well-known behavior that corresponds to the approximate quadratic dependence of ionization potential on atomic number (Moseley's law). Evaluating the 12-term series for individual Z gives the roots to a precision of 10 or more digits for Z⩾2. This suggests the use of similar tactics to construct formulas for roots vs atomic, molecular, and variational parameters in other eigenvalue problems, in accordance with the general objectives of gradient theory. Matrix elements can be represented by symbols in the secular determinants, enabling the use of analytical expressions for the molecular integrals in the differentiation of the explicit polynomials. The mathematical and computational techniques include modular arithmetic to handle matrix and polynomial operations, and unrestricted precision arithmetic to overcome severe digital erosion. These are likely to find many further applications in computational chemistry.
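    The expand-then-differentiate tactic can be tried on a small toy example with a general-purpose computer algebra system. The 2x2 symbolic secular determinant below, and its Z-dependence, is an assumption chosen for brevity; it is not the Pekeris determinant, and sympy here stands in for the modular- and unrestricted-precision arithmetic described in the abstract.

```python
# Toy illustration: expand a root of a small symbolic secular determinant
# in decreasing powers of Z. The matrix elements are illustrative assumptions.
import sympy as sp

eps, Z = sp.symbols("epsilon Z", positive=True)

# Toy 2x2 secular matrix with Z-dependent diagonal elements
H = sp.Matrix([[-Z - eps, 1],
               [1, -Z / 4 - eps]])
secular_poly = sp.expand(H.det())          # polynomial in eps with coefficients in Z

roots = sp.solve(sp.Eq(secular_poly, 0), eps)
for r in roots:
    # asymptotic expansion of each root in decreasing powers of Z;
    # for this toy matrix the leading term is linear in Z
    print(sp.series(r, Z, sp.oo, 4))
```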

  3. A novel approach for computing glueball masses and matrix elements in Yang-Mills theories on the lattice

    NASA Astrophysics Data System (ADS)

    Della Morte, Michele; Giusti, Leonardo

    2011-05-01

    We make use of the global symmetries of the Yang-Mills theory on the lattice to design a new computational strategy for extracting glueball masses and matrix elements which achieves an exponential reduction of the statistical error with respect to standard techniques. By generalizing our previous work on the parity symmetry, the partition function of the theory is decomposed into a sum of path integrals each giving the contribution from multiplets of states with fixed quantum numbers associated to parity, charge conjugation, translations, rotations and central conjugations Z_N^3. Ratios of path integrals and correlation functions can then be computed with a multi-level Monte Carlo integration scheme whose numerical cost, at a fixed statistical precision and at asymptotically large times, increases power-like with the time extent of the lattice. The strategy is implemented for the SU(3) Yang-Mills theory, and a full-fledged computation of the mass and multiplicity of the lightest glueball with vacuum quantum numbers is carried out at a lattice spacing of 0.17 fm.

  4. The effects of computer game elements in physics instruction software for middle schools: A study of cognitive and affective gains

    NASA Astrophysics Data System (ADS)

    Vasquez, David Alan

    Can the educational effectiveness of physics instruction software for middle schoolers be improved by employing "game elements" commonly found in recreational computer games? This study utilized a selected set of game elements to contextualize and embellish physics word problems with the aim of making such problems more engaging. Game elements used included: (1) a fantasy-story context with developed characters; and (2) high-end graphics and visual effects. The primary purpose of the study was to find out if the added production cost of using such game elements was justified by proportionate gains in physics learning. The theoretical framework for the study was a modified version of Lepper and Malone's "intrinsically-motivating game elements" model. A key design issue in this model is the concept of "endogeneity", or the degree to which the game elements used in educational software are integrated with its learning content. Two competing courseware treatments were custom-designed and produced for the study; both dealt with Newton's first law. The first treatment (T1) was a 45 minute interactive tutorial that featured cartoon characters, color animations, hypertext, audio narration, and realistic motion simulations using the Interactive PhysicsspTM software. The second treatment (T2) was similar to the first except for the addition of approximately three minutes of cinema-like sequences where characters, game objectives, and a science-fiction story premise were described and portrayed with high-end graphics and visual effects. The sample of 47 middle school students was evenly divided between eighth and ninth graders and between boys and girls. Using a pretest/posttest experimental design, the independent variables for the study were: (1) two levels of treatment; (2) gender; and (3) two schools. The dependent variables were scores on a written posttest for both: (1) physics learning, and (2) attitude toward physics learning. Findings indicated that, although

  5. Parallel computation in a three-dimensional elastic-plastic finite-element analysis

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Bigelow, C. A.; Newman, J. C., Jr.

    1992-01-01

    A CRAY parallel processing technique called autotasking was implemented in a three-dimensional elasto-plastic finite-element code. The technique was evaluated on two CRAY supercomputers, a CRAY 2 and a CRAY Y-MP. Autotasking was implemented in all major portions of the code, except the matrix equations solver. Compiler directives alone were not able to properly multitask the code; user-inserted directives were required to achieve better performance. It was noted that the connect time, rather than wall-clock time, was more appropriate to determine speedup in multiuser environments. For a typical example problem, a speedup of 2.1 (1.8 when the solution time was included) was achieved in a dedicated environment and 1.7 (1.6 with solution time) in a multiuser environment on a four-processor CRAY 2 supercomputer. The speedup on a three-processor CRAY Y-MP was about 2.4 (2.0 with solution time) in a multiuser environment.
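    As a rough plausibility check on speedups of this kind (not a reconstruction of the paper's measurements), Amdahl's law relates an observed speedup to the parallelizable fraction of the code. The sketch below inverts that relation for the reported dedicated-environment figure.

```python
# Amdahl's law sanity check for the reported speedup of 2.1 on 4 processors.
# Illustrative only; the paper's timings were measured, not derived this way.

def amdahl_speedup(parallel_fraction, n_proc):
    """Amdahl's law: S = 1 / ((1 - f) + f / N)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_proc)

def parallel_fraction(speedup, n_proc):
    """Invert Amdahl's law: f = (1 - 1/S) / (1 - 1/N)."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n_proc)

f = parallel_fraction(2.1, 4)          # dedicated-environment speedup on 4 CPUs
print(f"implied parallel fraction: {f:.2f}")                  # about 0.70
print(f"round-trip check, 4 CPUs: {amdahl_speedup(f, 4):.2f}")
```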

  6. Computational Study of Laminar Flow Control on a Subsonic Swept Wing Using Discrete Roughness Elements

    NASA Technical Reports Server (NTRS)

    Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; Streett, Craig L.; Carpenter, Mark H.

    2011-01-01

    A combination of parabolized stability equations and secondary instability theory has been applied to a low-speed swept airfoil model with a chord Reynolds number of 7.15 million, with the goals of (i) evaluating this methodology in the context of transition prediction for a known configuration for which roughness based crossflow transition control has been demonstrated under flight conditions and (ii) of analyzing the mechanism of transition delay via the introduction of discrete roughness elements (DRE). Roughness based transition control involves controlled seeding of suitable, subdominant crossflow modes, so as to weaken the growth of naturally occurring, linearly more unstable crossflow modes. Therefore, a synthesis of receptivity, linear and nonlinear growth of stationary crossflow disturbances, and the ensuing development of high frequency secondary instabilities is desirable to understand the experimentally observed transition behavior. With further validation, such higher fidelity prediction methodology could be utilized to assess the potential for crossflow transition control at even higher Reynolds numbers, where experimental data is currently unavailable.

  7. Development of a Computationally Efficient, High Fidelity, Finite Element Based Hall Thruster Model

    NASA Technical Reports Server (NTRS)

    Jacobson, David (Technical Monitor); Roy, Subrata

    2004-01-01

    This report documents the development of a two-dimensional finite element based numerical model for efficient characterization of the Hall thruster plasma dynamics in the framework of a multi-fluid model. The effects of ionization and recombination have been included in the present model. Based on the experimental data, a third order polynomial in electron temperature is used to calculate the ionization rate. The neutral dynamics is included only through the neutral continuity equation in the presence of a uniform neutral flow. The electrons are modeled as magnetized and hot, whereas ions are assumed magnetized and cold. The dynamics of the Hall thruster is also investigated in the presence of plasma-wall interaction. The plasma-wall interaction is a function of wall potential, which in turn is determined by the secondary electron emission and sputtering yield. The effects of secondary electron emission and sputter yield have been considered simultaneously. Simulation results are interpreted in the light of experimental observations and available numerical solutions in the literature.

  8. Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2016-01-01

    Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort is a generalization of previous SS spectral collocation work to extend the applicable set of points from tensor product, Legendre-Gauss-Lobatto (LGL) to tensor product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although being more costly to implement, it is shown that the LG operators are significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.
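    To make the LG/LGL distinction concrete, the sketch below computes both collocation point sets for a given number of points with numpy; it illustrates only the nodal distributions, not the entropy-stable operators themselves.

```python
# Compare Legendre-Gauss (LG) and Legendre-Gauss-Lobatto (LGL) point sets.
import numpy as np
from numpy.polynomial import legendre as L

def lg_points(n):
    """Legendre-Gauss points: the n roots of P_n, all interior to (-1, 1)."""
    x, _ = L.leggauss(n)
    return x

def lgl_points(n):
    """Legendre-Gauss-Lobatto points: the endpoints +/-1 plus the roots of P'_{n-1}."""
    coeffs = np.zeros(n)
    coeffs[-1] = 1.0                      # Legendre series representing P_{n-1}
    interior = L.legroots(L.legder(coeffs))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

n = 5
print("LG  points:", np.round(lg_points(n), 6))   # endpoints excluded
print("LGL points:", np.round(lgl_points(n), 6))  # endpoints included
```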

  9. Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Two types of rolling-element bearings representing radial loaded and thrust loaded bearings were used for this study. Three hundred forty (340) virtual bearing sets totaling 31,400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5321 bearings. A simple algebraic relation was established for the upper and lower L10 life limits as a function of the number of bearings failed for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variation between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and bearing data correlated. There was excellent agreement between the percent of individual components failed from the Monte Carlo simulation and that predicted.
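    The random-assembly procedure can be illustrated by sampling bearing lives from a two-parameter Weibull distribution and extracting an empirical L10 for each virtual set. The parameter values below are assumptions for illustration, not the study's bearing data.

```python
# Monte Carlo sketch: assemble virtual bearing sets from a Weibull life
# distribution and look at the scatter of the empirical L10 across sets.
import math
import random

def sample_bearing_life(l10, weibull_slope):
    """Draw one bearing life from a 2-parameter Weibull with the given L10 life."""
    eta = l10 / (-math.log(0.90)) ** (1.0 / weibull_slope)   # characteristic life
    u = random.random()
    return eta * (-math.log(1.0 - u)) ** (1.0 / weibull_slope)

def l10_of_set(lives):
    """Empirical L10: life exceeded by roughly 90% of the bearings in the set."""
    ordered = sorted(lives)
    return ordered[max(0, int(0.10 * len(ordered)) - 1)]

random.seed(1)
L10_TRUE, SLOPE, SET_SIZE, N_SETS = 100.0, 1.5, 30, 340   # illustrative values
set_l10s = [l10_of_set([sample_bearing_life(L10_TRUE, SLOPE) for _ in range(SET_SIZE)])
            for _ in range(N_SETS)]
print(f"median of simulated set L10 lives: {sorted(set_l10s)[N_SETS // 2]:.1f}")
print(f"spread (min, max): ({min(set_l10s):.1f}, {max(set_l10s):.1f})")
```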

  10. An assessment of the performance of the Spanwise Iron Magnet rolling moment generating system for magnetic suspension and balance systems using the finite element computer program GFUN

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1982-01-01

    The development of a powerful method of magnetic roll torque generation is essential before construction of a large magnetic suspension and balance system (LMSBS) can be undertaken. Some preliminary computed data concerning a relatively new dc scheme, referred to as the spanwise iron magnet scheme are presented. Computations made using the finite element computer program 'GFUN' indicate that adequate torque is available for at least a first generation LMSBS. Torque capability appears limited principally by current electromagnet technology.

  11. [Development of a computer program to simulate the predictions of the replaced elements model of Pavlovian conditioning].

    PubMed

    Vogel, Edgar H; Díaz, Claudia A; Ramírez, Jorge A; Jarur, Mary C; Pérez-Acosta, Andrés M; Wagner, Allan R

    2007-08-01

    Despite the apparent simplicity of Pavlovian conditioning, research on its mechanisms has caused considerable debate, such as the dispute about whether associated stimuli are coded in an "elementistic" (a compound stimulus is equivalent to the sum of its components) or a "configural" (a compound stimulus is a unique exemplar) fashion. This controversy is evident in the abundant research on the contrasting predictions of elementistic and configural models. Recently, some mixed solutions have been proposed which, although they have the advantages of both approaches, are difficult to evaluate due to their complexity. This paper presents a computer program to conduct simulations of a mixed model (the replaced elements model, or REM). Instructions and examples are provided for using the simulator for research and educational purposes.
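    For context, the elemental baseline that mixed models such as REM extend can be written as a standard Rescorla-Wagner update over stimulus elements. The sketch below shows only that simpler elemental scheme, not the replaced-elements machinery of the published simulator.

```python
# Minimal elemental (Rescorla-Wagner style) learning sketch. REM itself adds
# context-dependent "replaced" elements on compound trials; that machinery is
# not reproduced here -- this is only the simpler elemental baseline.

def rw_trial(weights, present, reinforced, alpha=0.2, lam=1.0):
    """One trial: update the weights of all present elements toward the outcome."""
    prediction = sum(weights[e] for e in present)
    error = (lam if reinforced else 0.0) - prediction
    for e in present:
        weights[e] += alpha * error
    return prediction

weights = {"A": 0.0, "B": 0.0}
# Summation test: train A+ and B+ separately, then probe the compound AB
for _ in range(50):
    rw_trial(weights, ["A"], True)
    rw_trial(weights, ["B"], True)
print("prediction to AB compound:", sum(weights[e] for e in ["A", "B"]))  # ~2.0 (summation)
```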

  12. A FORTRAN program to implement the method of finite elements to compute regional and residual anomalies from gravity data

    NASA Astrophysics Data System (ADS)

    Agarwal, B. N. P.; Srivastava, Shalivahan

    2010-07-01

    In view of the several publications on the application of the Finite Element Method (FEM) to compute regional gravity anomaly involving only 8 nodes on the periphery of a rectangular map, we present an interactive FORTRAN program, FEAODD.FOR, for wider applicability of the technique. A brief description of the theory of FEM is presented for the sake of completeness. The efficacy of the program has been demonstrated by analyzing the gravity anomaly over Salt dome, South Houston, USA using two differently oriented rectangular blocks and over chromite deposits, Camaguey, Cuba. The analyses over two sets of data reveal that the outline of the ore body/structure matches well with the maxima of the residuals. Further, the data analyses over South Houston, USA, have revealed that though the broad regional trend remains the same for both the blocks, the magnitudes of the residual anomalies differ approximately by 25% of the magnitude as obtained from previous studies.
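    The role of the eight peripheral nodes can be illustrated with the standard eight-node serendipity shape functions on a rectangular map: the regional field at an interior station is the shape-function-weighted sum of the boundary values, and the residual is the observed value minus the regional. The code below is a generic sketch, not the FEAODD.FOR implementation, and the node values are hypothetical.

```python
# Regional/residual separation with 8 peripheral nodes on a rectangular map,
# using standard 8-node serendipity shape functions. Node values are hypothetical.
import numpy as np

def serendipity8(xi, eta):
    """8-node serendipity shape functions on [-1,1]^2
    (corners counterclockwise from (-1,-1), then midside nodes)."""
    corners = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
    mids = [(0, -1), (1, 0), (0, 1), (-1, 0)]
    n = np.empty(8)
    for i, (xn, yn) in enumerate(corners):
        n[i] = 0.25 * (1 + xn * xi) * (1 + yn * eta) * (xn * xi + yn * eta - 1)
    for i, (xn, yn) in enumerate(mids):
        n[4 + i] = (0.5 * (1 - xi**2) * (1 + yn * eta) if xn == 0
                    else 0.5 * (1 + xn * xi) * (1 - eta**2))
    return n

def regional(xi, eta, node_values):
    """Regional anomaly at a normalized map location from 8 boundary node values."""
    return float(serendipity8(xi, eta) @ np.asarray(node_values, dtype=float))

g_nodes = [12.0, 14.5, 15.2, 11.8, 13.0, 15.0, 13.5, 12.2]  # hypothetical boundary values (mGal)
g_obs = 16.4                                                # hypothetical observed value at map centre
g_reg = regional(0.0, 0.0, g_nodes)
print(f"regional = {g_reg:.2f} mGal, residual = {g_obs - g_reg:.2f} mGal")
```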

  13. Passive element enriched photoacoustic computed tomography (PER PACT) for simultaneous imaging of acoustic propagation properties and light absorption.

    PubMed

    Jose, Jithin; Willemink, Rene G H; Resink, Steffen; Piras, Daniele; van Hespen, J C G; Slump, Cornelis H; Steenbergen, Wiendelt; van Leeuwen, Ton G; Manohar, Srirang

    2011-01-31

    We present a 'hybrid' imaging approach which can image both the light absorption properties and the acoustic transmission properties of an object in a two-dimensional slice using a computed tomography (CT) photoacoustic imager. The ultrasound transmission measurement method uses a strong optical absorber of small cross-section placed in the path of the light illuminating the sample. This absorber, which we call a passive element, acts as a source of ultrasound. The interaction of the ultrasound with the sample can be measured in transmission, using the same ultrasound detector used for photoacoustics. Such measurements are made at various angles around the sample in a CT approach. Images of the ultrasound propagation parameters, attenuation and speed of sound, can be reconstructed by inversion of a measurement model. We validate the method on specially designed phantoms and biological specimens. The obtained images are quantitative in terms of the shape, size, location, and acoustic properties of the examined heterogeneities.

  14. Finite element techniques in computational time series analysis of turbulent flows

    NASA Astrophysics Data System (ADS)

    Horenko, I.

    2009-04-01

    In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (e.g., wavelet-based spectral methods) have in general unfeasible numerical complexity in high dimensions; other standard methods (e.g., Kalman filter, MVAR, ARCH/GARCH, etc.) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example, and the results will be compared with those obtained by standard approaches. The importance of accounting for the mathematical

  15. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-01

    One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. FPM targets take the form of a stainless-steel tube containing nuclear-grade high-enrichment uranium. The FPM irradiation tube is intended to obtain fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. Mo isotopes have fairly long half-lives, about 3 days (66 hours), so the delivery of radioisotopes to consumer centers and storage is possible though still limited. The production of this isotope potentially gives significant economic value. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation, with a large, sparse matrix system. Several parallel algorithms have been developed for the solution of large, sparse matrices. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous works performed reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing for the reactivity calculations used in safety analysis. The parallel processing in the multicore computer system allows the calculation to be performed more quickly. This code was applied for the safety-limit calculation of irradiated FPM targets containing highly enriched uranium. The results of the neutronic calculations show that for uranium contents of 1.7676 g and 6.1866 g (× 10^6 cm^-1) in a tube, their delta reactivities are still
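
    The record does not include the implementation; a minimal serial sketch of the successive over-relaxation iteration it refers to, with a toy tridiagonal system standing in for one energy group of the diffusion problem (all names and values below are hypothetical), might look like this:

        import numpy as np

        def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10000):
            """Successive over-relaxation for A x = b (A square with nonzero diagonal)."""
            n = len(b)
            x = np.zeros(n)
            for it in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    # contributions of already-updated and not-yet-updated unknowns
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    return x, it + 1
            return x, max_iter

        # Example: 1-D diffusion-like tridiagonal system (a stand-in for one energy group).
        n = 50
        A = 2.1 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x, iters = sor_solve(A, b, omega=1.6)
        print(iters, "iterations, residual:", np.linalg.norm(A @ x - b))

    In parallel settings the sweep is typically reordered (e.g., red-black coloring) so that unknowns of one color can be updated concurrently; the sketch above keeps the plain lexicographic sweep for clarity.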

  16. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    SciTech Connect

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-30

    One of the research activities in support of the commercial radioisotope production program is safety research on target FPM (Fission Product Molybdenum) irradiation. FPM targets take the form of a stainless-steel tube containing nuclear-grade high-enrichment uranium. The FPM irradiation tube is intended to obtain fission products. Fission products such as Mo{sup 99} are widely used in the form of kits in the medical world. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. Mo isotopes have fairly long half-lives, about 3 days (66 hours), so the delivery of radioisotopes to consumer centers and storage is possible though still limited. The production of this isotope potentially gives significant economic value. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation, with a large, sparse matrix system. Several parallel algorithms have been developed for the solution of large, sparse matrices. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous works performed reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing for the reactivity calculations used in safety analysis. The parallel processing in the multicore computer system allows the calculation to be performed more quickly. This code was applied for the safety-limit calculation of irradiated FPM targets containing highly enriched uranium. The results of the neutronic calculations show that for uranium contents of 1.7676 g and 6.1866 g (× 10{sup 6} cm{sup −1}) in a tube, their delta

  17. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    NASA Astrophysics Data System (ADS)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, such as the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid with respect to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility in the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle transfer between parallel subdomains utilizing existing communication patterns from the finite-element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a

  18. Cochlear Pharmacokinetics with Local Inner Ear Drug Delivery Using a Three-Dimensional Finite-Element Computer Model

    PubMed Central

    Plontke, Stefan K.; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N.

    2006-01-01

    Hypothesis: Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Background: Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. Methods: A 3D finite-element model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. Results: For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. Conclusion: The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating that a 3D approach is necessary to accurately predict drug movements across and between scalae with larger cross-sectional areas, such as those of the human. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. Further development of the 3D model will have to incorporate a more
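
    The study itself used a 3D finite-element model; purely to illustrate the passive-diffusion description of drug propagation along a fluid compartment, a one-dimensional explicit finite-difference sketch (all parameter values assumed, not taken from the paper) could be:

        import numpy as np

        # Minimal 1-D illustration of passive diffusion along a fluid compartment
        # (hypothetical parameters; the study itself used a 3-D finite-element model).
        L = 0.02                        # compartment length [m], assumed
        nx = 200
        dx = L / nx
        D = 1e-9                        # diffusion coefficient [m^2/s], assumed
        dt = 0.4 * dx**2 / D            # below the explicit stability limit
        c = np.zeros(nx)
        c[0] = 1.0                      # constant source at the round-window end (normalized)

        t_end = 3600.0                  # one hour
        for _ in range(int(t_end / dt)):
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[0] = 1.0                  # Dirichlet source boundary
            c[-1] = c[-2]               # no-flux apical end
        print("normalized concentration at mid-compartment after 1 h:", c[nx // 2])

    The steep fall-off with distance produced by such a calculation is the 1D analogue of the basal-apical gradients the 3D model resolves across and between scalae.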

  19. A Computationally Efficient Method to Assess the Sensitivity of Finite-Element Models: An Illustration With the Hemipelvis.

    PubMed

    O'Rourke, Dermot; Martelli, Saulo; Bottema, Murk; Taylor, Mark

    2016-12-01

    Assessing the sensitivity of a finite-element (FE) model to uncertainties in geometric parameters and material properties is a fundamental step in understanding the reliability of model predictions. However, the computational cost of individual simulations and the large number of required models limits comprehensive quantification of model sensitivity. To quickly assess the sensitivity of an FE model, we built linear and Kriging surrogate models of an FE model of the intact hemipelvis. The percentage of the total sum of squares (%TSS) was used to determine the most influential input parameters and their possible interactions on the median, 95th percentile and maximum equivalent strains. We assessed the surrogate models by comparing their predictions to those of a full factorial design of FE simulations. The Kriging surrogate model accurately predicted all output metrics based on a training set of 30 analyses (R2 = 0.99). There was good agreement between the Kriging surrogate model and the full factorial design in determining the most influential input parameters and interactions. For the median, 95th percentile and maximum equivalent strain, the bone geometry (60%, 52%, and 76%, respectively) was the most influential input parameter. The interactions between bone geometry and cancellous bone modulus (13%) and bone geometry and cortical bone thickness (7%) were also influential terms on the output metrics. This study demonstrates a method with a low time and computational cost to quantify the sensitivity of an FE model. It can be applied to FE models in computational orthopaedic biomechanics in order to understand the reliability of predictions.
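
    The record gives no code; a minimal sketch of the surrogate-model idea, replacing the expensive FE solve with a toy analytic function and fitting a Kriging (Gaussian-process) model to 30 training analyses as in the study, could look like the following (the stand-in function, parameter ranges and variable names are all hypothetical):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)

        def fe_output(x):
            """Hypothetical stand-in for the FE model: a strain metric as a function of
            (cortical thickness scale, cancellous modulus scale, geometry scale)."""
            t, e, g = x
            return 1.0 / (0.6 * t + 0.2 * e + 0.2) * g**2

        X_train = rng.uniform(0.8, 1.2, size=(30, 3))        # 30 training analyses, as in the record
        y_train = np.array([fe_output(x) for x in X_train])

        surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
        surrogate.fit(X_train, y_train)

        X_test = rng.uniform(0.8, 1.2, size=(200, 3))
        y_test = np.array([fe_output(x) for x in X_test])
        print("surrogate R^2 on unseen inputs:", surrogate.score(X_test, y_test))

    Once such a surrogate reproduces the FE output cheaply, sensitivity measures such as the percentage of total sum of squares attributable to each input can be computed from thousands of surrogate evaluations instead of FE runs.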

  20. Towards drug repositioning: a unified computational framework for integrating multiple aspects of drug similarity and disease similarity.

    PubMed

    Zhang, Ping; Wang, Fei; Hu, Jianying

    2014-01-01

    In response to the high cost and high risk associated with traditional de novo drug discovery, investigation of potential additional uses for existing drugs, also known as drug repositioning, has attracted increasing attention from both the pharmaceutical industry and the research community. In this paper, we propose a unified computational framework, called DDR, to predict novel drug-disease associations. DDR formulates the task of hypothesis generation for drug repositioning as a constrained nonlinear optimization problem. It utilizes multiple drug similarity networks, multiple disease similarity networks, and known drug-disease associations to explore potential new associations among drugs and diseases with no known links. A large-scale study was conducted using 799 drugs against 719 diseases. Experimental results demonstrated the effectiveness of the approach. In addition, DDR ranked drug and disease information sources based on their contributions to the prediction, thus paving the way for prioritizing multiple data sources and building more reliable drug repositioning models. Particularly, some of our novel predictions of drug-disease associations were supported by clinical trials databases, showing that DDR could serve as a useful tool in drug discovery to efficiently identify potential novel uses for existing drugs.

  1. CCM Continuity Constraint Method: A finite-element computational fluid dynamics algorithm for incompressible Navier-Stokes fluid flows

    SciTech Connect

    Williams, P.T.

    1993-09-01

    As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H{sup 1} Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.

  2. Computation of the head-related transfer function via the fast multipole accelerated boundary element method and its spherical harmonic representation.

    PubMed

    Gumerov, Nail A; O'Donovan, Adam E; Duraiswami, Ramani; Zotkin, Dmitry N

    2010-01-01

    The head-related transfer function (HRTF) is computed using the fast multipole accelerated boundary element method. For efficiency, the HRTF is computed using the reciprocity principle by placing a source at the ear and computing its field. Analysis is presented to modify the boundary value problem accordingly. To compute the HRTF corresponding to different ranges via a single computation, a compact and accurate representation of the HRTF, termed the spherical spectrum, is developed. Computations are reduced to a two stage process, the computation of the spherical spectrum and a subsequent evaluation of the HRTF. This representation allows easy interpolation and range extrapolation of HRTFs. HRTF computations are performed for the range of audible frequencies up to 20 kHz for several models including a sphere, human head models [the Neumann KU-100 ("Fritz") and the Knowles KEMAR ("Kemar") manikins], and head-and-torso model (the Kemar manikin). Comparisons between the different cases are provided. Comparisons with the computational data of other authors and available experimental data are conducted and show satisfactory agreement for the frequencies for which reliable experimental data are available. Results show that, given a good mesh, it is feasible to compute the HRTF over the full audible range on a regular personal computer.
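
    As a rough illustration of the "spherical spectrum" idea - representing a directional quantity by spherical-harmonic coefficients that can later be evaluated (interpolated) in any direction - one can fit coefficients by least squares to sampled values; the toy directivity pattern and truncation order below are assumptions for illustration, not the authors' method:

        import numpy as np
        from scipy.special import sph_harm

        def sh_basis(order, az, pol):
            """Real-valued design matrix of spherical harmonics up to 'order'."""
            cols = []
            for n in range(order + 1):
                for m in range(n + 1):
                    y = sph_harm(m, n, az, pol)     # scipy: az = azimuth, pol = polar angle
                    cols.append(y.real)
                    if m > 0:
                        cols.append(y.imag)
            return np.column_stack(cols)

        # Hypothetical directional samples of one frequency bin of an HRTF magnitude.
        rng = np.random.default_rng(1)
        az = rng.uniform(0, 2 * np.pi, 500)
        pol = rng.uniform(0, np.pi, 500)
        h = 1.0 + 0.3 * np.cos(pol) + 0.1 * np.sin(pol) * np.cos(az)   # toy directivity pattern

        B = sh_basis(order=4, az=az, pol=pol)
        coeffs, *_ = np.linalg.lstsq(B, h, rcond=None)   # the coefficient vector ("spectrum")

        # Evaluate (interpolate) the fitted representation in a new direction.
        B_new = sh_basis(4, np.array([0.5]), np.array([1.0]))
        print("interpolated value:", (B_new @ coeffs)[0])

    The compact coefficient vector is what makes interpolation between measured directions, and extrapolation in range, inexpensive once the expensive boundary-element computation has been done.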

  3. Local finite element enrichment strategies for 2D contact computations and a corresponding post-processing scheme

    NASA Astrophysics Data System (ADS)

    Sauer, Roger A.

    2013-08-01

    Recently an enriched contact finite element formulation has been developed that substantially increases the accuracy of contact computations while keeping the additional numerical effort at a minimum, as reported by Sauer (Int J Numer Meth Eng, 87: 593-616, 2011). Two enrichment strategies were proposed, one based on local p-refinement using Lagrange interpolation and one based on Hermite interpolation that produces C1-smoothness on the contact surface. Both classes, which were initially considered for the frictionless Signorini problem, are extended here to friction and contact between deformable bodies. For this, a symmetric contact formulation is used that allows the unbiased treatment of both contact partners. This paper also proposes a post-processing scheme for contact quantities like the contact pressure. The scheme, which provides a more accurate representation than the raw data, is based on an averaging procedure that is inspired by mortar formulations. The properties of the enrichment strategies and the corresponding post-processing scheme are illustrated by several numerical examples considering sliding and peeling contact in the presence of large deformations.

  4. Parameter study for the finite element modelling of long bones with computed-tomography-imaging-based stiffness distribution.

    PubMed

    Wullschleger, L; Weisse, B; Blaser, D; Fürst, A E

    2010-01-01

    Four radii of different horses were tested in three-point bending and in pure torsion. Detailed finite element (FE) models of these long bones were established by means of computed tomography (CT) images, and the tests were simulated for both load cases. For the allocation of the local isotropic material stiffness, individual exponential functions were applied whose factor and exponent were determined solely by fitting them to the measured torsional stiffness and bending stiffness of the entire bones. These stiffness functions, which refer directly to the CT number and have exponents between 1.5 and 2, were in good agreement with Young's moduli subsequently measured on small samples cut from the investigated bones. Based on a model with a local orthotropic material definition, an additional parameter study was conducted to verify the sensitivity of the FE analysis results to single variations in the orthotropic elastic constants. This study revealed that the bending test simulations could be enhanced by a substantial reduction of Young's moduli in the directions perpendicular to the bone axis; thus, an orthotropic material definition is preferable for the FE analysis of long bones.
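
    A minimal sketch of the kind of CT-number-based stiffness allocation described - assuming a power-law form E = a * CT^b and a single measured stiffness for calibrating the factor, which is a simplification of the two-parameter fit the study performed - could be:

        import numpy as np

        def youngs_modulus(ct_number, a, b):
            """Local isotropic stiffness from the CT number, E = a * CT**b (assumed functional form)."""
            return a * np.maximum(ct_number, 1.0) ** b

        # Toy illustration: with the exponent b fixed (the record reports values between 1.5 and 2),
        # the factor a can be rescaled so that a model-level stiffness matches the measured one.
        ct = np.linspace(200, 1800, 50)                    # element CT numbers (invented data)
        b = 1.7
        E_unscaled = youngs_modulus(ct, 1.0, b)
        stiffness_model = E_unscaled.mean()                # stand-in for an FE-computed whole-bone stiffness
        stiffness_measured = 12.0e9                        # measured stiffness (arbitrary value)
        a = stiffness_measured / stiffness_model           # the stiffness is linear in a, so one rescale suffices
        print("calibrated factor a:", a)
        E = youngs_modulus(ct, a, b)
        print("resulting modulus range [GPa]:", E.min() / 1e9, E.max() / 1e9)

    Fitting both the factor and the exponent, as in the study, additionally requires matching a second measured quantity (e.g., torsional stiffness) and re-running the FE model inside the fit loop.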

  5. Computational Identification of Tissue-Specific Splicing Regulatory Elements in Human Genes from RNA-Seq Data

    PubMed Central

    Badr, Eman; ElHefnawi, Mahmoud; Heath, Lenwood S.

    2016-01-01

    Alternative splicing is a vital process for regulating gene expression and promoting proteomic diversity. It plays a key role in tissue-specific expressed genes. This specificity is mainly regulated by splicing factors that bind to specific sequences called splicing regulatory elements (SREs). Here, we report a genome-wide analysis to study alternative splicing on multiple tissues, including brain, heart, liver, and muscle. We propose a pipeline to identify differential exons across tissues and hence tissue-specific SREs. In our pipeline, we utilize the DEXSeq package along with our previously reported algorithms. Utilizing the publicly available RNA-Seq data set from the Human BodyMap project, we identified 28,100 differentially used exons across the four tissues. We identified tissue-specific exonic splicing enhancers that overlap with various previously published experimental and computational databases. A complicated exonic enhancer regulatory network was revealed, where multiple exonic enhancers were found across multiple tissues while some were found only in specific tissues. Putative combinatorial exonic enhancers and silencers were discovered as well, which may be responsible for exon inclusion or exclusion across tissues. Some of the exonic enhancers are found to be co-occurring with multiple exonic silencers and vice versa, which demonstrates a complicated relationship between tissue-specific exonic enhancers and silencers. PMID:27861625

  6. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy

    DOEpatents

    Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.

    2001-01-16

    Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time", which especially enhances clinical use for in vivo applications. The real-time availability is achieved because of the novel geometric model constructed for the planned treatment volume which, in turn, allows rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume as the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element of the uniform volume elements. The particle movement along the particle track is effectuated in integer-based increments along the primary direction of movement until a position of intersection occurs that represents a condition where the anatomical material of the next element is substantially different from the anatomical material of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or exited from the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy. The foregoing represents an advance in computational times by multiple factors of
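
    A highly simplified sketch of the track-marching idea - stepping a particle through a uniform voxel grid until the material of the entered voxel differs from that of the starting voxel - is shown below; it uses fixed-length floating-point steps rather than the patent's integer-based increments, and all geometry and material labels are invented:

        import numpy as np

        def march_to_material_change(materials, start, direction, step=1.0, max_steps=10_000):
            """Advance a particle through a uniform voxel grid of material IDs until the voxel
            it enters holds a different material than the starting voxel."""
            pos = np.asarray(start, dtype=float)
            d = np.asarray(direction, dtype=float)
            d /= np.linalg.norm(d)
            start_mat = materials[tuple(pos.astype(int))]
            for _ in range(max_steps):
                pos += step * d
                idx = tuple(pos.astype(int))
                if any(i < 0 or i >= s for i, s in zip(idx, materials.shape)):
                    return None, pos                      # particle exited the geometric model
                if materials[idx] != start_mat:
                    return materials[idx], pos            # position of intersection with a new material
            return start_mat, pos

        # Toy treatment volume: soft tissue (0) with a block of bone (1).
        vol = np.zeros((64, 64, 64), dtype=int)
        vol[40:, :, :] = 1
        mat, hit = march_to_material_change(vol, start=(5.0, 32.0, 32.0), direction=(1.0, 0.2, 0.0))
        print("entered material", mat, "near voxel", hit.astype(int))

    Because every voxel has the same shape and size, locating the next material boundary reduces to cheap index arithmetic, which is what makes the "real-time" dose calculation of the patent feasible.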

  7. Examining the Minimal Required Elements of a Computer-Tailored Intervention Aimed at Dietary Fat Reduction: Results of a Randomized Controlled Dismantling Study

    ERIC Educational Resources Information Center

    Kroeze, Willemieke; Oenema, Anke; Dagnelie, Pieter C.; Brug, Johannes

    2008-01-01

    This study investigated the minimally required feedback elements of a computer-tailored dietary fat reduction intervention for it to be effective in improving fat intake. In all, 588 healthy Dutch adults were randomly allocated to one of four conditions in a randomized controlled trial: (i) feedback on dietary fat intake [personal feedback (P feedback)],…

  8. The Use of Computer Games as an Educational Tool: Identification of Appropriate Game Types and Game Elements.

    ERIC Educational Resources Information Center

    Amory, Alan; Naicker, Kevin; Vincent, Jacky; Adams, Claudia

    1999-01-01

    Describes research with college students that investigated commercial game types and game elements to determine what would be suitable for education. Students rated logic, memory, visualization, and problem solving as important game elements that are used to develop a model that links pedagogical issues with game elements. (Author/LRW)

  9. Biological Aspects of Computer Virology

    NASA Astrophysics Data System (ADS)

    Vlachos, Vasileios; Spinellis, Diomidis; Androutsellis-Theotokis, Stefanos

    Recent malware epidemics proved beyond any doubt that frightful predictions of fast-spreading worms have been well founded. While we can identify and neutralize many types of malicious code, often we are not able to do that in a timely enough manner to suppress its uncontrolled propagation. In this paper we discuss the decisive factors that affect the propagation of a worm and evaluate their effectiveness.

  10. Computational Aspects of Constrained Estimation.

    DTIC Science & Technology

    1982-03-01

    Only fragmentary reference excerpts are available for this record: "Dynamic Programming and Ill-Conditioned Linear Systems," Journal of Mathematical Analysis and Applications, 10, 1965, pp. 206-215; J.A. Newkirk, …, Journal of Mathematical Analysis and Applications, 31, 1970, pp. 682-716; J.M. Varah, "Numerical Solution of Ill-Posed Problems Using Interactive…"

  11. Characterization, chemometric evaluation, and human health-related aspects of essential and toxic elements in Italian honey samples by inductively coupled plasma mass spectrometry.

    PubMed

    Quinto, Maurizio; Miedico, Oto; Spadaccino, Giuseppina; Paglia, Giuseppe; Mangiacotti, Michele; Li, Donghao; Centonze, Diego; Chiaravalle, A Eugenio

    2016-12-01

    Concentration values of 24 elements (Al, As, Ba, Be, Ca, Cd, Co, Cr, Cu, Fe, Ge, Hg, Mn, Mo, Pb, Sb, Se, Sn, Sr, Ti, Tl, U, V, and Zn) were determined in 72 honey samples produced in Italy by inductively coupled plasma mass spectrometry (ICP-MS). Considering the established recommended daily intakes of heavy metals for humans, a balanced and ordinary consumption of honey should not be considered a matter of concern for human health, although particular attention should be paid when honey is consumed by children, owing to their different maximum daily heavy metal intakes. Chemometric analysis of the results highlights differences in the heavy metal content of honey samples obtained from notoriously polluted zones, confirming that honey can be considered a bio-indicator of environmental pollution. Finally, Pearson coefficients highlighted correlations among element contents in the honey samples.

  12. Hydropower and Environmental Resource Assessment (HERA): a computational tool for the assessment of the hydropower potential of watersheds considering engineering and socio-environmental aspects.

    NASA Astrophysics Data System (ADS)

    Martins, T. M.; Kelman, R.; Metello, M.; Ciarlini, A.; Granville, A. C.; Hespanhol, P.; Castro, T. L.; Gottin, V. M.; Pereira, M. V. F.

    2015-12-01

    The hydroelectric potential of a river is proportional to its head and water flows. Selecting the best development alternative for greenfield projects in watersheds is a difficult task, since it must balance demands for infrastructure, especially in the developing world where a large potential remains unexplored, with environmental conservation. Discussions usually diverge into antagonistic views, as in recent projects in the Amazon forest, for example. This motivates the construction of a computational tool that will support a more qualified debate regarding development/conservation options. HERA provides the optimal head-division partition of a river considering technical, economic and environmental aspects. HERA has three main components: (i) GIS pre-processing of topographic and hydrologic data; (ii) automatic engineering and equipment design and budget estimation for candidate projects; (iii) translation of the division-partition problem into a mathematical programming model. By integrating automatic calculation with geoprocessing tools, cloud computation and optimization techniques, HERA makes it possible for countless head-partition alternatives to be compared systematically - a great advantage with respect to traditional field surveys followed by engineering design methods. Based on optimization techniques, HERA determines which hydro plants should be built, including location, design, technical data (e.g. water head, reservoir area and volume), engineering design (dam, spillways, etc.) and costs. The results can be visualized in the HERA interface, exported to GIS software, Google Earth or CAD systems. HERA has a global scope of application since the main input data are a Digital Terrain Model and water inflows at gauging stations. The objective is to contribute to increased rationality of decisions by presenting to the stakeholders a clear and quantitative view of the alternatives, their opportunities and threats.
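
    As a toy illustration of the head-partition selection that HERA formulates as a mathematical programming problem, one can think of candidate plants as river stretches with an associated energy yield and pick a non-overlapping subset of maximum yield; the brute-force sketch below is only a caricature of that formulation (names and numbers are invented, and none of HERA's engineering, cost or socio-environmental modelling is represented):

        from itertools import combinations

        # Candidate plants occupy a river stretch [start_km, end_km] and would generate 'energy'.
        candidates = [
            {"name": "A", "start": 0,  "end": 30, "energy": 120.0},
            {"name": "B", "start": 20, "end": 55, "energy": 200.0},
            {"name": "C", "start": 50, "end": 80, "energy": 150.0},
            {"name": "D", "start": 35, "end": 60, "energy": 90.0},
        ]

        def compatible(subset):
            """True if no two selected stretches (reservoirs) overlap along the river."""
            spans = sorted((c["start"], c["end"]) for c in subset)
            return all(a_end <= b_start for (_, a_end), (b_start, _) in zip(spans, spans[1:]))

        best = max(
            (s for r in range(len(candidates) + 1) for s in combinations(candidates, r) if compatible(s)),
            key=lambda s: sum(c["energy"] for c in s),
        )
        print("selected plants:", [c["name"] for c in best],
              "total energy:", sum(c["energy"] for c in best))

    Real formulations replace the brute-force enumeration with mixed-integer programming so that thousands of candidate sites, costs and constraints can be handled.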

  13. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    ERIC Educational Resources Information Center

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer-assisted design/computer-assisted manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  14. Energy Finite Element Analysis for Computing the High Frequency Vibration of the Aluminum Testbed Cylinder and Correlating the Results to Test Data

    NASA Technical Reports Server (NTRS)

    Vlahopoulos, Nickolas

    2005-01-01

    The Energy Finite Element Analysis (EFEA) is a finite element based computational method for high-frequency vibration and acoustic analysis. The EFEA uses finite elements to solve governing differential equations for energy variables. These equations are developed from wave equations. Recently, an EFEA method for computing the high-frequency vibration of structures either in vacuum or in contact with a dense fluid has been presented. The presence of fluid loading has been considered through added mass and radiation damping. The EFEA developments were validated by comparing EFEA results to solutions obtained by very dense conventional finite element models and to solutions from classical techniques such as statistical energy analysis (SEA) and the modal decomposition method for bodies of revolution. EFEA results have also been compared favorably with test data for the vibration and the radiated noise generated by a large-scale submersible vehicle. The primary variable in EFEA is defined as the energy density, time-averaged over a period and space-averaged over a wavelength. A joint matrix computed from the power transmission coefficients is utilized for coupling the energy density variables across any discontinuities, such as changes of plate thickness, plate/stiffener junctions, etc. When considering the high-frequency vibration of a periodically stiffened plate or cylinder, the flexural wavelength is smaller than the interval length between two periodic stiffeners; therefore the stiffener stiffness cannot be smeared by computing an equivalent rigidity for the plate or cylinder. The periodic stiffeners must be regarded as coupling components between periodic units. In this paper, Periodic Structure (PS) theory is utilized for computing the coupling joint matrix and for accounting for the periodicity characteristics.

  15. Effect of specimen-specific anisotropic material properties in quantitative computed tomography-based finite element analysis of the vertebra.

    PubMed

    Unnikrishnan, Ginu U; Barest, Glenn D; Berry, David B; Hussein, Amira I; Morgan, Elise F

    2013-10-01

    Intra- and inter-specimen variations in trabecular anisotropy are often ignored in quantitative computed tomography (QCT)-based finite element (FE) models of the vertebra. The material properties are typically estimated solely from local variations in bone mineral density (BMD), and a fixed representation of elastic anisotropy ("generic anisotropy") is assumed. This study evaluated the effect of incorporating specimen-specific, trabecular anisotropy on QCT-based FE predictions of vertebral stiffness and deformation patterns. Orthotropic material properties estimated from microcomputed tomography data ("specimen-specific anisotropy"), were assigned to a large, columnar region of the L1 centrum (n = 12), and generic-anisotropic material properties were assigned to the remainder of the vertebral body. Results were compared to FE analyses in which generic-anisotropic properties were used throughout. FE analyses were also performed on only the columnar regions. For the columnar regions, the axial stiffnesses obtained from the two categories of material properties were uncorrelated with each other (p = 0.604), and the distributions of minimum principal strain were distinctly different (p ≤ 0.022). In contrast, for the whole vertebral bodies in both axial and flexural loading, the stiffnesses obtained using the two categories of material properties were highly correlated (R2 > 0.82, p < 0.001) with, and were no different (p > 0.359) from, each other. Only moderate variations in strain distributions were observed between the two categories of material properties. The contrasting results for the columns versus vertebrae indicate a large contribution of the peripheral regions of the vertebral body to the mechanical behavior of this bone. In companion analyses on the effect of the degree of anisotropy (DA), the axial stiffnesses of the trabecular column (p < 0.001) and vertebra (p = 0.007) increased with increasing DA. These findings

  16. The effect of in situ/in vitro three-dimensional quantitative computed tomography image voxel size on the finite element model of human vertebral cancellous bone.

    PubMed

    Lu, Yongtao; Engelke, Klaus; Glueer, Claus-C; Morlock, Michael M; Huber, Gerd

    2014-11-01

    The quantitative computed tomography-based finite element modeling technique is a promising clinical tool for the prediction of bone strength. However, quantitative computed tomography-based finite element models have been created from image datasets with different image voxel sizes. The aim of this study was to investigate whether image voxel size influences the finite element models. In all, 12 thoracolumbar vertebrae were scanned prior to autopsy (in situ) using two different quantitative computed tomography scan protocols, which resulted in image datasets with two different voxel sizes (0.29 × 0.29 × 1.3 mm(3) vs 0.18 × 0.18 × 0.6 mm(3)). Eight of them were scanned after autopsy (in vitro) and the datasets were reconstructed with two voxel sizes (0.32 × 0.32 × 0.6 mm(3) vs. 0.18 × 0.18 × 0.3 mm(3)). Finite element models with a cuboid volume of interest extracted from the vertebral cancellous part were created and inhomogeneous bilinear bone properties were defined. Axial compression was simulated. No effect of voxel size was detected on the apparent bone mineral density for either the in situ or the in vitro case. However, the apparent modulus and yield strength showed significant differences between the two voxel sizes in both group pairs (in situ and in vitro). In conclusion, the image voxel size may have to be considered when the finite element voxel modeling technique is used in clinical applications.

  17. Verification of a non-hydrostatic dynamical core using the horizontal spectral element method and vertical finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-11-01

    The non-hydrostatic (NH) compressible Euler equations for dry atmosphere were solved in a simplified two-dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. By using horizontal SEM, which decomposes the physical domain into smaller pieces with a small communication stencil, a high level of scalability can be achieved. By using vertical FDM, an easy method for coupling the dynamics and existing physics packages can be provided. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind-biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative and integral terms. For temporal integration, a time-split, third-order Runge-Kutta (RK3) integration technique was applied. The Euler equations that were used here are in flux form based on the hydrostatic pressure vertical coordinate. The equations are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate was implemented in this model. We validated the model by conducting the widely used standard tests: linear hydrostatic mountain wave, tracer advection, and gravity wave over the Schär-type mountain, as well as density current, inertia-gravity wave, and rising thermal bubble. The results from these tests demonstrated that the model using the horizontal SEM and the vertical FDM is accurate and robust provided sufficient diffusion is applied. The results with various horizontal resolutions also showed convergence of second-order accuracy due to the accuracy of the time integration scheme and that of the vertical direction, although high-order basis functions were used in the horizontal. By using the 2-D slice model, we effectively showed that the combined spatial
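
    For reference, the Gauss-Lobatto-Legendre (GLL) points underlying the nodal SEM basis are the endpoints ±1 together with the roots of the derivative of the Legendre polynomial; a small sketch (not taken from the model's code) that computes them and checks the cardinal property of the associated Lagrange basis is:

        import numpy as np

        def gll_points(p):
            """Gauss-Lobatto-Legendre points for polynomial order p: the endpoints +-1
            together with the roots of the derivative of the Legendre polynomial P_p."""
            dP = np.polynomial.legendre.Legendre.basis(p).deriv()
            return np.concatenate(([-1.0], np.sort(dP.roots()), [1.0]))

        def lagrange_basis(nodes, k, x):
            """k-th Lagrange nodal basis function evaluated at x."""
            L = np.ones_like(x)
            for j, xj in enumerate(nodes):
                if j != k:
                    L *= (x - xj) / (nodes[k] - xj)
            return L

        nodes = gll_points(4)                      # 5 nodes per element for order 4
        print("GLL nodes:", np.round(nodes, 4))
        print("cardinal property:", np.round(lagrange_basis(nodes, 2, nodes), 3))   # equals e_2

    Because each basis function is 1 at its own node and 0 at all others, mass matrices built with GLL quadrature become diagonal, which is one reason the horizontal SEM scales well in parallel.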

  18. NASTRAN variance analysis and plotting of HBDY elements. [analysis of uncertainties of the computer results as a function of uncertainties in the input data

    NASA Technical Reports Server (NTRS)

    Harder, R. L.

    1974-01-01

    The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in the input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties of the computed results as a function of uncertainties of the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives, and then can determine the expected deviations of the output quantities.
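
    The procedure described - one solution at the expected input values, one extra solution per uncertain variable, differences forming derivatives, and from these the expected output deviations - is first-order uncertainty propagation; a minimal sketch with an invented stand-in for the thermal solution is:

        import numpy as np

        def output_temperature(params):
            """Stand-in for a thermal solution: output temperatures for inputs
            (conductivity k, film coefficient h, heat load q) -- purely illustrative."""
            k, h, q = params
            return np.array([300.0 + q / (k + h), 300.0 + 0.5 * q / h])

        mean = np.array([10.0, 2.0, 50.0])          # expected values of the inputs
        sigma = np.array([1.0, 0.3, 5.0])           # input standard deviations (uncertainties)

        base = output_temperature(mean)
        var = np.zeros_like(base)
        for i, s in enumerate(sigma):
            perturbed = mean.copy()
            perturbed[i] += s                        # one extra solution per uncertain variable
            dT = output_temperature(perturbed) - base   # difference = derivative scaled by sigma_i
            var += dT**2                             # first-order sum for independent inputs
        print("expected temperatures:", base)
        print("temperature standard deviations:", np.sqrt(var))

    Perturbing each input by exactly one standard deviation means the squared differences are already the per-input variance contributions, so no explicit derivative division is needed.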

  19. Generic element processor (application to nonlinear analysis)

    NASA Technical Reports Server (NTRS)

    Stanley, Gary

    1989-01-01

    The focus here is on one aspect of the Computational Structural Mechanics (CSM) Testbed: finite element technology. The approach involves a Generic Element Processor: a command-driven, database-oriented software shell that facilitates introduction of new elements into the testbed. This shell features an element-independent corotational capability that upgrades linear elements to geometrically nonlinear analysis, and corrects the rigid-body errors that plague many contemporary plate and shell elements. Specific elements that have been implemented in the Testbed via this mechanism include the Assumed Natural-Coordinate Strain (ANS) shell elements, developed with Professor K. C. Park (University of Colorado, Boulder), a new class of curved hybrid shell elements, developed by Dr. David Kang of LPARL (formerly a student of Professor T. Pian), other shell and solid hybrid elements developed by NASA personnel, and recently a repackaged version of the workhorse shell element used in the traditional STAGS nonlinear shell analysis code. The presentation covers: (1) user and developer interfaces to the generic element processor, (2) an explanation of the built-in corotational option, (3) a description of some of the shell-elements currently implemented, and (4) application to sample nonlinear shell postbuckling problems.

  20. Mapping hidden potential identity elements by computing the average discriminating power of individual tRNA positions.

    PubMed

    Szenes, Aron; Pál, Gábor

    2012-06-01

    The recently published discrete mathematical method, extended consensus partition (ECP), identifies nucleotide types at each position that are strictly absent from a given sequence set while occurring in other sets. These are defined as discriminating elements (DEs). In this study, using the ECP approach, we mapped potential hidden identity elements that discriminate the 20 different tRNA identities. We filtered the tDNA data set for the obligatory presence of well-established tRNA features, and then, separately for each identity set, for the presence of already experimentally identified, strictly present identity elements. The analysis was performed on the three kingdoms of life. We determined the number of DEs, i.e., the number of sets discriminated by the given position, for each tRNA position of each tRNA identity set. Then, from the positional DE numbers obtained from the 380 pairwise comparisons of the 20 identity sets, we calculated the average excluding value (AEV) for each tRNA position. The AEV provides a measure of the overall discriminating power of each position. Using a statistical analysis, we show that positional AEVs correlate with the number of already identified identity elements. Positions having a high AEV but lacking published identity elements predict hitherto undiscovered tRNA identity elements.
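
    A toy sketch of the positional bookkeeping described - checking, for every ordered pair of identity sets, whether a position discriminates one set from the other, and averaging over all comparisons to obtain an AEV per position - is given below; the three-set, four-position data and the exact discrimination rule are simplified assumptions, not the published ECP procedure:

        from itertools import product

        # Each identity set lists the nucleotide types observed at each aligned position;
        # a position of set A discriminates set B if B uses a type strictly absent from A.
        sets = {
            "Ala": [{"A", "G"}, {"C"}, {"A", "C", "G", "T"}, {"G"}],
            "Gly": [{"A"},      {"C"}, {"G"},                {"C"}],
            "Ser": [{"T"},      {"A"}, {"G", "T"},           {"G"}],
        }

        n_pos = 4
        aev = []
        for pos in range(n_pos):
            de_flags = []
            for a, b in product(sets, repeat=2):
                if a == b:
                    continue
                discriminates = bool(sets[b][pos] - sets[a][pos])   # b uses a type absent from a
                de_flags.append(int(discriminates))
            aev.append(sum(de_flags) / len(de_flags))               # average over all ordered pairs
        print("AEV per position:", aev)

    With the real data the outer loop runs over all 380 ordered pairs of the 20 identity sets, and the per-position averages are then compared against the positions of experimentally known identity elements.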

  1. Mapping Hidden Potential Identity Elements by Computing the Average Discriminating Power of Individual tRNA Positions

    PubMed Central

    Szenes, Áron; Pál, Gábor

    2012-01-01

    The recently published discrete mathematical method, extended consensus partition (ECP), identifies nucleotide types at each position that are strictly absent from a given sequence set while occurring in other sets. These are defined as discriminating elements (DEs). In this study, using the ECP approach, we mapped potential hidden identity elements that discriminate the 20 different tRNA identities. We filtered the tDNA data set for the obligatory presence of well-established tRNA features, and then, separately for each identity set, for the presence of already experimentally identified, strictly present identity elements. The analysis was performed on the three kingdoms of life. We determined the number of DEs, i.e., the number of sets discriminated by the given position, for each tRNA position of each tRNA identity set. Then, from the positional DE numbers obtained from the 380 pairwise comparisons of the 20 identity sets, we calculated the average excluding value (AEV) for each tRNA position. The AEV provides a measure of the overall discriminating power of each position. Using a statistical analysis, we show that positional AEVs correlate with the number of already identified identity elements. Positions having a high AEV but lacking published identity elements predict hitherto undiscovered tRNA identity elements. PMID:22378766

  2. Biomechanical aspects of segmented arch mechanics combined with power arm for controlled anterior tooth movement: A three-dimensional finite element study.

    PubMed

    Ozaki, Hiroya; Tominaga, Jun-Ya; Hamanaka, Ryo; Sumi, Mayumi; Chiang, Pao-Chang; Tanaka, Motohiro; Koga, Yoshiyuki; Yoshida, Noriaki

    2015-01-01

    The purpose of this study was to determine the optimal length of power arms for achieving controlled anterior tooth movement in segmented arch mechanics combined with a power arm. A three-dimensional finite element method was applied for the simulation of en masse anterior tooth retraction in segmented power arm mechanics. The type of tooth movement, namely, the location of the center of rotation of the maxillary central incisor in association with power arm length, was calculated after the retraction force was applied. When a 0.017 × 0.022-in archwire was inserted into the 0.018-in slot bracket, bodily movement was obtained with a 9.1 mm power arm, namely, at the level of 1.8 mm above the center of resistance. When a 0.018 × 0.025-in full-size archwire was used, bodily movement of the tooth was produced with a power arm length of 7.0 mm, namely, at the level of 0.3 mm below the center of resistance. Segmented arch mechanics required a shorter power arm for achieving any type of controlled anterior tooth movement compared with sliding mechanics. Therefore, this space-closing mechanics could be widely applied even for patients whose gingivobuccal fold is shallow. The segmented arch mechanics combined with a power arm could provide a moment-to-force ratio high enough for controlled anterior tooth movement without generating friction or vertical forces when the retraction force is applied parallel to the occlusal plane. It is, therefore, considered that segmented power arm mechanics has a simple appliance design and allows more efficient and controllable tooth movement.
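
    Ignoring the moments contributed by archwire-bracket play, which the study shows shift the required arm length so that bodily movement occurs with the force line 1.8 mm above or 0.3 mm below the center of resistance, the basic geometric reasoning is that the line of a horizontal retraction force applied at the tip of the power arm passes closer to the center of resistance as the arm lengthens. A toy sketch of that reasoning (the bracket-to-center-of-resistance distance is assumed, not taken from the paper) is:

        # Toy geometry check: a horizontal retraction force applied at the tip of a power arm of
        # length L acts along a line L millimetres apical to the bracket slot.
        bracket_to_cr = 9.1          # vertical distance from bracket slot to center of resistance [mm], assumed
        for L in (0.0, 5.0, 9.1, 12.0):
            offset = L - bracket_to_cr                 # force line relative to CR (+ = apical to CR)
            if abs(offset) < 0.1:
                kind = "approximately bodily movement"
            elif offset < 0:
                kind = "crown tips more than the root (tipping tendency)"
            else:
                kind = "root moves more than the crown (root-movement tendency)"
            print(f"power arm {L:4.1f} mm -> force line {offset:+.1f} mm from CR: {kind}")

    The finite element model refines this picture by adding the wire-generated counter-moments, which is why the reported optimal arm lengths do not coincide exactly with the center-of-resistance height.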

  3. The P.K. Yonge Basic Mathematics Computation Skills System: A Program of Individualized Instruction with an Emphasis on Discrete Elements of Computation Skills. Research Monograph No. 33.

    ERIC Educational Resources Information Center

    Massey, Tom E.; McCall, Peter T.

    A program is described that was developed, implemented, and evaluated at the P.K. Yonge Laboratory School at the University of Florida. It was designed to help middle school students to increase competencies in basic computation. The nine criteria guiding development were: 1) individualized instruction; 2) greater student responsibility for…

  4. Administrative Aspects of Human Experimentation.

    ERIC Educational Resources Information Center

    Irvine, George W.

    1992-01-01

    The following administrative aspects of scientific experimentation with human subjects are discussed: the definition of human experimentation; the distinction between experimentation and treatment; investigator responsibility; documentation; the elements and principles of informed consent; and the administrator's role in establishing and…

  5. Field, model, and computer simulation study of some aspects of the origin and distribution of Colorado Plateau-type uranium deposits

    USGS Publications Warehouse

    Ethridge, F.G.; Sunada, D.K.; Tyler, Noel; Andrews, Sarah

    1982-01-01

    Numerous hypotheses have been proposed to account for the nature and distribution of tabular uranium and vanadium-uranium deposits of the Colorado Plateau. In one of these hypotheses it is suggested that the deposits resulted from geochemical reactions at the interface between a relatively stagnant groundwater solution and a dynamic, ore-carrying groundwater solution which permeated the host sandstones (Shawe, 1956; Granger, et al., 1961; Granger, 1968, 1976; and Granger and Warren, 1979). The study described here was designed to investigate some aspects of this hypothesis, particularly the nature of fluid flow in sands and sandstones, the nature and distribution of deposits, and the relations between the deposits and the host sandstones. The investigation, which was divided into three phases, involved physical model, field, and computer simulation studies. During the initial phase of the investigation, physical model studies were conducted in porous-media flumes. These studies verified the fact that humic acid precipitates could form at the interface between a humic acid solution and a potassium aluminum sulfate solution and that the nature and distribution of these precipitates were related to flow phenomena and to the nature and distribution of the host porous-media. During the second phase of the investigation field studies of permeability and porosity patterns in Holocene stream deposits were investigated and the data obtained were used to design more realistic porous media models. These model studies, which simulated actual stream deposits, demonstrated that precipitates possess many characteristics, in terms of their nature and relation to host sandstones, that are similar to ore deposits of the Colorado Plateau. The final phase of the investigation involved field studies of actual deposits, additional model studies in a large indoor flume, and computer simulation studies. The field investigations provided an up-to-date interpretation of the depositional

  6. Finite elements: Theory and application

    NASA Technical Reports Server (NTRS)

    Dwoyer, D. L. (Editor); Hussaini, M. Y. (Editor); Voigt, R. G. (Editor)

    1988-01-01

    Recent advances in FEM techniques and applications are discussed in reviews and reports presented at the ICASE/LaRC workshop held in Hampton, VA in July 1986. Topics addressed include FEM approaches for partial differential equations, mixed FEMs, singular FEMs, FEMs for hyperbolic systems, iterative methods for elliptic finite-element equations on general meshes, mathematical aspects of FEMs for incompressible viscous flows, and gradient-weighted moving finite elements in two dimensions. Consideration is given to adaptive flux-corrected FEM transport techniques for CFD, mixed and singular finite elements and the field BEM, p and h-p versions of the FEM, transient analysis methods in computational dynamics, and FEMs for integrated flow/thermal/structural analysis.

  7. Computer Simulation of Blood Flow, Left Ventricular Wall Motion and Their Interrelationship by Fluid-Structure Interaction Finite Element Method

    NASA Astrophysics Data System (ADS)

    Watanabe, Hiroshi; Hisada, Toshiaki; Sugiura, Seiryo; Okada, Jun-Ichi; Fukunari, Hiroshi

    To simulate fluid-structure interaction involved in the contraction of a human left ventricle, a 3D finite element based simulation program incorporating the propagation of excitation and excitation-contraction coupling mechanisms was developed. An ALE finite element method with automatic mesh updating was formulated for large domain changes, and a strong coupling strategy was taken. Under the assumption that the inertias of both fluid and structure are negligible and fluid-structure interaction is restricted to the pressure on the interface, the fluid dynamics part was eliminated from the FSI program, and a static structural FEM code corresponding to the cardiac muscles was also developed. The simulations of the contraction of the left ventricle in normal excitation and arrhythmia demonstrated the capability of the proposed method. Also, the results obtained by the two methods are compared. These simulators can be powerful tools in the clinical practice of heart disease.

  8. 3-D magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on SMP computers - Part I: forward problem and parameter Jacobians

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    We have developed an algorithm, which we call HexMT, for 3-D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permit incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used throughout, including the forward solution, parameter Jacobians and model parameter update. In Part I, the forward simulator and Jacobian calculations are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequencies or small material admittivities, the E-field requires divergence correction. With the help of Hodge decomposition, the correction may be applied in one step after the forward solution is calculated. This allows accurate E-field solutions in dielectric air. The system matrix factorization and source vector solutions are computed using the MKL PARDISO library, which shows good scalability through 24 processor cores. The factorized matrix is used to calculate the forward response as well as the Jacobians of electromagnetic (EM) field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure, several synthetic topographic models and the natural topography of Mount Erebus in Antarctica. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of EM waves normal to the slopes at high frequencies. Run-time tests of the parallelized algorithm indicate that for meshes as large as 176 × 176 × 70 elements, MT forward responses and Jacobians can be calculated in ˜1.5 hr per frequency. Together with an efficient inversion parameter step described in Part II, MT inversion problems of 200-300 stations are computable with total run times

  9. Finite-element nonlinear transient response computer programs PLATE 1 and CIVM-PLATE 1 for the analysis of panels subjected to impulse or impact loads

    NASA Technical Reports Server (NTRS)

    Spilker, R. L.; Witmer, E. A.; French, S. E.; Rodal, J. J. A.

    1980-01-01

    Two computer programs are described for predicting the transient large-deflection elastic-viscoplastic responses of thin, single-layer, initially flat, unstiffened or integrally stiffened, Kirchhoff-Love ductile metal panels. The PLATE 1 program pertains to structural responses produced by prescribed externally applied transient loading or prescribed initial velocity distributions. The collision-imparted velocity method program, CIVM-PLATE 1, concerns structural responses produced by the impact of an idealized nondeformable fragment. Finite elements are used to represent the structure in both programs. Strain-hardening and strain-rate effects of the initially isotropic material are considered.

  10. Robust and portable capacity computing method for many finite element analyses of a high-fidelity crustal structure model aimed for coseismic slip estimation

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hirahara, Kazuro; Hyodo, Mamoru; Hori, Takane; Hori, Muneo

    2016-09-01

    Computation of many Green's functions (GFs) in finite element (FE) analyses of crustal deformation is an essential technique in inverse analyses of coseismic slip estimations. In particular, analysis based on a high-resolution FE model (high-fidelity model) is expected to contribute to the construction of a community standard FE model and benchmark solution. Here, we propose a naive but robust and portable capacity computing method to compute many GFs using a high-fidelity model, assuming that various types of PC clusters are used. The method is based on the master-worker model, implemented using the Message Passing Interface (MPI), to perform robust and efficient input/output operations. The method was applied to numerical experiments of coseismic slip estimation in the Tohoku region of Japan; comparison of the estimated results with those generated using lower-fidelity models revealed the benefits of using a high-fidelity FE model in coseismic slip distribution estimation. Additionally, the proposed method computes several hundred GFs more robustly and efficiently than methods without the master-worker model and MPI.
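
    The record describes a master-worker scheme over MPI for farming out many Green's-function solves; a minimal mpi4py sketch of that pattern, with a trivial stand-in for the FE solve and no claim to match the authors' code, is:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        N_PATCHES = 100                                  # number of fault patches (hypothetical)

        def compute_green_function(patch_id):
            return patch_id ** 0.5                       # stand-in for one FE crustal-deformation solve

        if rank == 0:                                    # master: hands out work, collects results
            results, next_task, closed = {}, 0, 0
            status = MPI.Status()
            while closed < size - 1:
                msg = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
                if msg is not None:                      # a finished Green's function
                    patch_id, gf = msg
                    results[patch_id] = gf
                if next_task < N_PATCHES:
                    comm.send(next_task, dest=status.Get_source())   # hand out the next patch
                    next_task += 1
                else:
                    comm.send(None, dest=status.Get_source())        # no work left: shut worker down
                    closed += 1
            print("collected", len(results), "Green's functions")
        else:                                            # worker: request, compute, return
            comm.send(None, dest=0)                      # announce readiness
            while True:
                task = comm.recv(source=0)
                if task is None:
                    break
                comm.send((task, compute_green_function(task)), dest=0)

    Because the master assigns patches one at a time to whichever worker reports back first, slow solves or heterogeneous nodes do not stall the whole job, which is the robustness property the record emphasizes.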

  11. A Survey of Current Temperature Dependent Elastic-Plastic-Creep Constitutive Laws for Applicability to Finite Element Computer Codes.

    DTIC Science & Technology

    1980-05-31

    Only fragmentary excerpts are available for this record: "… data that several phenomena which should be modelled by the constitutive theory are: (1) the Bauschinger effect for reverse loading, (2) the nonunique … evidence of its importance. Although significant work has been done to obtain working constitutive models, in many cases the theory has not been cast … nonlinear viscoelasticity theory for applicability to the conservation of momentum. Based on physical accuracy as well as computational efficiency the …"

  12. Computer-Aided Structural Engineering (CASE Project). Finite Element Modeling of Welded Thick Plates for Bonneville Navigation Lock

    DTIC Science & Technology

    1992-05-01

    Only fragmentary excerpts and figure residue are available for this record. Recoverable text: "… measurement to SI (metric) units is presented on page 6"; a figure shows the element plan, pile layout (piles at 4-ft and 6-ft spacing), dredge line and wall elevations, with a typical section (pile with and without coverplates); "… guidance by Trade Arbed and AISC was used as a starting point to initialize the study. The finite element code ABAQUS was used to evaluate both joint …"

  13. Computationally Efficient Finite Element Analysis Method Incorporating Virtual Equivalent Projected Model For Metallic Sandwich Panels With Pyramidal Truss Cores

    NASA Astrophysics Data System (ADS)

    Seong, Dae-Yong; Jung, ChangGyun; Yang, Dong-Yol

    2007-05-01

    Metallic sandwich panels composed of two face sheets and cores with low relative density have lightweight characteristics and various static and dynamic load-bearing functions. To predict the forming characteristics, performance, and formability of these structured materials, full 3D modeling and analysis involving tremendous computational time and memory are required. Some constitutive continuum models including homogenization approaches to solve these problems have limitations with respect to the prediction of local buckling of face sheets and inner structures. In this work, a computationally efficient FE-analysis method incorporating a virtual equivalent projected model that enables the simulation of local buckling modes is newly introduced for the analysis of metallic sandwich panels. Two-dimensional models using the projected shapes of 3D structures have the same equivalent elastic-plastic properties as the original geometries, which have anisotropic stiffness, yield strength, and hardening function. The sizes and isotropic properties of the virtual equivalent projected model have been estimated analytically to match the equivalent properties and face buckling strength of the full model. Three-point bending processes with quasi-two-dimensional loads and boundary conditions are simulated to establish the validity of the proposed method. The deformed shapes and load-displacement curves of the virtual equivalent projected model are found to be almost the same as those of a full three-dimensional FE-analysis while reducing computational time drastically.

  14. Computationally Efficient Finite Element Analysis Method Incorporating Virtual Equivalent Projected Model For Metallic Sandwich Panels With Pyramidal Truss Cores

    SciTech Connect

    Seong, Dae-Yong; Jung, Chang Gyun; Yang, Dong-Yol

    2007-05-17

    Metallic sandwich panels composed of two face sheets and cores with low relative density have lightweight characteristics and various static and dynamic load-bearing functions. To predict the forming characteristics, performance, and formability of these structured materials, full 3D modeling and analysis involving tremendous computational time and memory are required. Some constitutive continuum models including homogenization approaches to solve these problems have limitations with respect to the prediction of local buckling of face sheets and inner structures. In this work, a computationally efficient FE-analysis method incorporating a virtual equivalent projected model that enables the simulation of local buckling modes is newly introduced for the analysis of metallic sandwich panels. Two-dimensional models using the projected shapes of 3D structures have the same equivalent elastic-plastic properties as the original geometries, which have anisotropic stiffness, yield strength, and hardening function. The sizes and isotropic properties of the virtual equivalent projected model have been estimated analytically to match the equivalent properties and face buckling strength of the full model. Three-point bending processes with quasi-two-dimensional loads and boundary conditions are simulated to establish the validity of the proposed method. The deformed shapes and load-displacement curves of the virtual equivalent projected model are found to be almost the same as those of a full three-dimensional FE-analysis while reducing computational time drastically.

  15. Discrete Element Modeling

    SciTech Connect

    Morris, J; Johnson, S

    2007-12-03

    The Distinct Element Method (also frequently referred to as the Discrete Element Method) (DEM) is a Lagrangian numerical technique where the computational domain consists of discrete solid elements which interact via compliant contacts. This can be contrasted with Finite Element Methods where the computational domain is assumed to represent a continuum (although many modern implementations of the FEM can accommodate some Distinct Element capabilities). Often the terms Discrete Element Method and Distinct Element Method are used interchangeably in the literature, although Cundall and Hart (1992) suggested that Discrete Element Methods should be a more inclusive term covering Distinct Element Methods, Displacement Discontinuity Analysis and Modal Methods. In this work, DEM specifically refers to the Distinct Element Method, where the discrete elements interact via compliant contacts, in contrast with Displacement Discontinuity Analysis where the contacts are rigid and all compliance is taken up by the adjacent intact material.
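
    As a concrete illustration of a compliant contact, the normal force between two overlapping spherical elements is often modeled as a linear spring-dashpot; the sketch below shows that generic ingredient of a DEM time step (the constants and names are illustrative and are not taken from this report).

        import numpy as np

        def normal_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n=1.0e6, c_n=1.0e2):
            """Linear spring-dashpot normal force acting on element i from element j."""
            d = x_i - x_j                       # branch vector from j to i
            dist = np.linalg.norm(d)
            overlap = (r_i + r_j) - dist        # positive only if the elements interpenetrate
            if dist == 0.0 or overlap <= 0.0:
                return np.zeros(3)              # no contact, no force
            n = d / dist                        # unit normal pointing from j to i
            rel_vn = np.dot(v_i - v_j, n)       # normal component of the relative velocity
            f_mag = k_n * overlap - c_n * rel_vn
            return max(f_mag, 0.0) * n          # cohesionless contact: never pull elements together

        # Two unit spheres overlapping by 0.1 along x; the force pushes element i away from j.
        f = normal_contact_force(np.zeros(3), np.array([1.9, 0.0, 0.0]),
                                 np.zeros(3), np.zeros(3), 1.0, 1.0)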

  16. Prediction of acoustic radiation from axisymmetric surfaces with arbitrary boundary conditions using the boundary element method on a distributed computing system.

    PubMed

    Wright, Louise; Robinson, Stephen P; Humphrey, Victor F

    2009-03-01

    This paper presents a computational technique using the boundary element method for prediction of radiated acoustic waves from axisymmetric surfaces with nonaxisymmetric boundary conditions. The aim is to predict the far-field behavior of underwater acoustic transducers based on their measured behavior in the near-field. The technique is valid for all wavenumbers and uses a volume integral method to calculate the singular integrals required by the boundary element formulation. The technique has been implemented on a distributed computing system to take advantage of its parallel nature, which has led to significant reductions in the time required to generate results. Measurement data generated by a pair of free-flooding underwater acoustic transducers encapsulated in a polyurethane polymer have been used to validate the technique against experiment. The dimensions of the outer surface of the transducers (including the polymer coating) were an outer diameter of 98 mm with an 18 mm wall thickness and a length of 92 mm. The transducers were mounted coaxially, giving an overall length of 185 mm. The cylinders had resonance frequencies at 13.9 and 27.5 kHz, and the data were gathered at these frequencies.

  17. TURTLE with MAD input (Trace Unlimited Rays Through Lumped Elements) -- A computer program for simulating charged particle beam transport systems and DECAY TURTLE including decay calculations

    SciTech Connect

    Carey, D.C.

    1999-12-09

    TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT. For convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped element approximation, described below, precludes the inclusion of the effect of conventional local geometric aberrations (due to large phase space) of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.
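
    The lumped-element, matrix picture behind TRANSPORT and TURTLE can be illustrated in miniature: each element acts on a ray's transverse phase-space coordinates through a transfer matrix, and a beam is traced by pushing many randomly drawn rays through the elements in sequence. The sketch below is a textbook illustration of that idea, not an excerpt from TURTLE.

        import numpy as np

        def drift(L):
            """2x2 transfer matrix of a field-free drift of length L (one transverse plane)."""
            return np.array([[1.0, L], [0.0, 1.0]])

        def thin_quad(f):
            """Thin-lens quadrupole of focal length f (focusing in this plane for f > 0)."""
            return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

        # Simple lumped-element beamline: drift - focusing quadrupole - drift
        beamline = [drift(1.0), thin_quad(0.5), drift(1.0)]

        rng = np.random.default_rng(0)
        rays = rng.normal(scale=[1e-3, 1e-4], size=(10_000, 2))  # (x [m], x' [rad]) per ray

        for element in beamline:
            rays = rays @ element.T       # apply each element to every ray

        print("rms beam size at exit:", rays[:, 0].std())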

  18. A new finite element formulation for computational fluid dynamics. X - The compressible Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Shakib, Farzin; Hughes, Thomas J. R.; Johan, Zdenek

    1991-01-01

    A space-time element method is presented for solving the compressible Euler and Navier-Stokes equations. The proposed formulation includes the variational equation, predictor multi-corrector algorithms and boundary conditions. The variational equation is based on the time-discontinuous Galerkin method, in which the physical entropy variables are employed. A least-squares operator and a discontinuity-capturing operator are added, resulting in a high-order accurate and unconditionally stable method. Implicit/explicit predictor multi-corrector algorithms, applicable to steady as well as unsteady problems, are presented; techniques are developed to enhance their efficiency. Implementation of boundary conditions is addressed; in particular, a technique is introduced to satisfy nonlinear essential boundary conditions, and a consistent method is presented to calculate boundary fluxes. Numerical results are presented to demonstrate the performance of the method.

  19. Computational simulation of the bone remodeling using the finite element method: an elastic-damage theory for small displacements

    PubMed Central

    2013-01-01

    Background: The resistance of the bone against damage by repairing itself and adapting to environmental conditions is its most important property. These adaptive changes are regulated by a physiological process commonly called bone remodeling. A better understanding of this process requires that we apply the elastic-damage theory, under the hypothesis of small displacements, to a bone structure and observe its mechanical behavior. Results: The purpose of the present study is to simulate a two-dimensional model of a proximal femur by taking into consideration elastic damage and the mechanical stimulus. Here, we present a mathematical model based on a system of nonlinear ordinary differential equations and we develop the variational formulation for the mechanical problem. Then, we implement our mathematical model into the finite element method algorithm to investigate the effect of the damage. Conclusion: The results are consistent with the existing literature, which shows that the bone stiffness drops in damaged bone structure under mechanical loading. PMID:23663260
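
    To make the elastic-damage idea concrete, a scalar damage variable D can be integrated in time from a strain stimulus and used to degrade the elastic modulus as E = E0(1 - D). The explicit-Euler sketch below uses a placeholder evolution law and placeholder constants; it illustrates the class of nonlinear ODEs referred to above, not the authors' specific model.

        import numpy as np

        E0 = 17.0e9           # undamaged cortical bone modulus [Pa] (nominal value)
        THRESHOLD = 2500e-6   # strain stimulus below which no damage accumulates (placeholder)
        RATE = 5.0            # damage growth rate [1/day] (placeholder)

        def step_damage(D, strain, dt):
            """One explicit Euler step of a saturating damage evolution law, D in [0, 1)."""
            stimulus = max(abs(strain) - THRESHOLD, 0.0)
            dD_dt = RATE * stimulus * (1.0 - D)      # growth slows as damage saturates
            return min(D + dt * dD_dt, 0.999)

        D, dt = 0.0, 1.0  # one-day steps
        for day in range(365):
            strain = 3000e-6 * np.sin(2.0 * np.pi * day / 30.0) ** 2   # cyclic loading history
            D = step_damage(D, strain, dt)

        print(f"damage D = {D:.3f}, degraded modulus E = {E0 * (1.0 - D):.3e} Pa")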

  20. Ergonomic aspects of portable personal computers with flat panel displays (PC-FPDs): evaluation of posture, muscle activities, discomfort and performance.

    PubMed

    Villanueva, M B; Jonai, H; Saito, S

    1998-07-01

    The advent of compact and lightweight portable personal computers has offered its users mobility. Various sizes of PC-FPDs can now be seen in the occupational setting as an alternative to the desktop computers. However, the increasing popularity of this relatively new technology may not be without any accompanying problems. The present study was designed to evaluate the use of PC-FPDs in terms of postural changes, muscle load, subjective complaints and performance of the subjects. Ten subjects, 5 males and 5 females, were asked to perform a text-entry task for 5 minutes using each of the 5 types of personal computers--1 desktop and 4 PC-FPDs of various sizes. Results showed that the posture assumed by the subjects while using the PC-FPDs was significantly more constrained than that assumed during work with the desktop computer. Viewing and neck angles progressively lowered and the trunk became more forward inclined. The EMG results also revealed that the activities of the neck extensor in PC-FPDs were significantly higher than in the desktop computers. Trends of increasing discomfort and difficulty of keying with the use of smaller PC-FPDs were noted. Performance was significantly lower for smaller PC-FPDs. This study shows that PC-FPDs have ergonomic attributes different from the desktop computer. An ergonomic guideline specific for PC-FPDs users is needed to prevent the surge in health disorders previously seen among desktop computer users.

  1. A computational technique to optimally design in-situ diffractive elements: applications to projection lithography at the resist resolution limit

    NASA Astrophysics Data System (ADS)

    Feijóo, Gonzalo R.; Tirapu-Azpiroz, Jaione; Rosenbluth, Alan E.; Oberai, Assad A.; Jagalur Mohan, Jayanth; Tian, Kehan; Melville, David; Gil, Dario; Lai, Kafai

    2009-03-01

    Near-field interference lithography is a promising variant of multiple patterning in semiconductor device fabrication that can potentially extend lithographic resolution beyond the current materials-based restrictions on the Rayleigh resolution of projection systems. With H2O as the immersion medium, non-evanescent propagation and optical design margins limit the achievable pitch to approximately 0.53λ/n_H2O = 0.37λ. Non-evanescent images are constrained only by the comparatively large resist indices (typically ~1.7) to a pitch resolution of 0.5λ/n_resist (typically 0.29λ). Near-field patterning can potentially exploit evanescent waves and thus achieve higher spatial resolutions. Customized near-field images can be achieved through the modulation of an incoming wavefront by what is essentially an in-situ hologram that has been formed in an upper layer during an initial patterned exposure. Contrast Enhancement Layer (CEL) techniques and Talbot near-field interferometry can be considered special cases of this approach. Since the technique relies on near-field interference effects to produce the required pattern on the resist, the shape of the grating and the design of the film stack play a significant role in the outcome. As a result, it is necessary to resort to full diffraction computations to properly simulate and optimize this process. The next logical advance for this technology is to systematically design the hologram and the incident wavefront which is generated from a reduction mask. This task is naturally posed as an optimization problem, where the goal is to find the set of geometric and incident wavefront parameters that yields the closest fit to a desired pattern in the resist. As the pattern becomes more complex, the number of design parameters grows, and the computational problem becomes intractable (particularly in three dimensions) without the use of advanced numerical techniques. To treat this problem effectively, specialized numerical methods have been

  2. Advanced computer technology - An aspect of the Terminal Configured Vehicle program. [air transportation capacity, productivity, all-weather reliability and noise reduction improvements

    NASA Technical Reports Server (NTRS)

    Berkstresser, B. K.

    1975-01-01

    NASA is conducting a Terminal Configured Vehicle program to provide improvements in the air transportation system such as increased system capacity and productivity, increased all-weather reliability, and reduced noise. A typical jet transport has been equipped with highly flexible digital display and automatic control equipment to study operational techniques for conventional takeoff and landing aircraft. The present airborne computer capability of this aircraft employs a multiple computer simple redundancy concept. The next step is to proceed from this concept to a reconfigurable computer system which can degrade gracefully in the event of a failure, adjust critical computations to remaining capacity, and reorder itself, in the case of transients, to the highest order of redundancy and reliability.

  3. Combined magnetic vector-scalar potential finite element computation of 3D magnetic field and performance of modified Lundell alternators in Space Station applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Ren H.

    1991-01-01

    A method of combined use of magnetic vector potential (MVP) based finite element (FE) formulations and magnetic scalar potential (MSP) based FE formulations for computation of three-dimensional (3D) magnetostatic fields is developed. This combined MVP-MSP 3D-FE method leads to considerable reduction by nearly a factor of 3 in the number of unknowns in comparison to the number of unknowns which must be computed in global MVP based FE solutions. This method allows one to incorporate portions of iron cores sandwiched in between coils (conductors) in current-carrying regions. Thus, it greatly simplifies the geometries of current carrying regions (in comparison with the exclusive MSP based methods) in electric machinery applications. A unique feature of this approach is that the global MSP solution is single valued in nature, that is, no branch cut is needed. This is again a superiority over the exclusive MSP based methods. A Newton-Raphson procedure with a concept of an adaptive relaxation factor was developed and successfully used in solving the 3D-FE problem with magnetic material anisotropy and nonlinearity. Accordingly, this combined MVP-MSP 3D-FE method is most suited for solution of large scale global type magnetic field computations in rotating electric machinery with very complex magnetic circuit geometries, as well as nonlinear and anisotropic material properties.

  4. Combined magnetic vector-scalar potential finite element computation of 3D magnetic field and performance of modified Lundell alternators in Space Station applications

    NASA Astrophysics Data System (ADS)

    Wang, Ren H.

    1991-02-01

    A method of combined use of magnetic vector potential (MVP) based finite element (FE) formulations and magnetic scalar potential (MSP) based FE formulations for computation of three-dimensional (3D) magnetostatic fields is developed. This combined MVP-MSP 3D-FE method leads to considerable reduction by nearly a factor of 3 in the number of unknowns in comparison to the number of unknowns which must be computed in global MVP based FE solutions. This method allows one to incorporate portions of iron cores sandwiched in between coils (conductors) in current-carrying regions. Thus, it greatly simplifies the geometries of current carrying regions (in comparison with the exclusive MSP based methods) in electric machinery applications. A unique feature of this approach is that the global MSP solution is single valued in nature, that is, no branch cut is needed. This is again a superiority over the exclusive MSP based methods. A Newton-Raphson procedure with a concept of an adaptive relaxation factor was developed and successfully used in solving the 3D-FE problem with magnetic material anisotropy and nonlinearity. Accordingly, this combined MVP-MSP 3D-FE method is most suited for solution of large scale global type magnetic field computations in rotating electric machinery with very complex magnetic circuit geometries, as well as nonlinear and anisotropic material properties.

  5. Progressive Damage Analysis of Laminated Composite (PDALC)-A Computational Model Implemented in the NASA COMET Finite Element Code

    NASA Technical Reports Server (NTRS)

    Lo, David C.; Coats, Timothy W.; Harris, Charles E.; Allen, David H.

    1996-01-01

    A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete list of all testbed processors and inputs that are required for this analysis are included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occur during the load history. Residual strength predictions made with this information compared favorably with experimental measurements.

  6. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
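
    The variogram analogy can be made concrete with a short sketch: for a scalar response y(x) over the factor space, the directional variogram at lag h is half the mean squared difference between responses sampled a distance h apart along one factor's direction. A variogram that rises quickly along a given direction signals strong sensitivity to that factor at that perturbation scale. The code below is a generic illustration of this building block, not the STAR-VARS algorithm.

        import numpy as np

        def response(x):
            """Toy model response over a 2-D factor space (stand-in for a model performance metric)."""
            return np.sin(3.0 * x[..., 0]) + 0.1 * x[..., 1] ** 2

        def directional_variogram(direction, lags, n_base=2000, seed=1):
            """gamma(h) = 0.5 * E[(y(x + h*u) - y(x))**2] along the unit vector u."""
            rng = np.random.default_rng(seed)
            u = np.asarray(direction, dtype=float)
            u = u / np.linalg.norm(u)
            x = rng.uniform(0.0, 1.0, size=(n_base, 2))   # base points in the factor space
            return np.array([0.5 * np.mean((response(x + h * u) - response(x)) ** 2)
                             for h in lags])

        lags = np.linspace(0.01, 0.3, 10)
        print("gamma along factor 1:", directional_variogram([1, 0], lags).round(4))
        print("gamma along factor 2:", directional_variogram([0, 1], lags).round(4))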

  7. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    SciTech Connect

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in now ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-differences (FD) and finite-elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is
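
    The partition-of-unity construction can be shown in one dimension: standard finite element hat functions sum to one everywhere, so multiplying them by a small set of known, atomic-like functions produces a conforming enriched basis without refining the mesh near the atom. The sketch below is schematic and is not taken from the authors' electronic-structure code.

        import numpy as np

        nodes = np.linspace(0.0, 10.0, 11)       # coarse 1-D finite element mesh
        x = np.linspace(0.0, 10.0, 501)          # evaluation points

        def hat(i):
            """Piecewise-linear shape function attached to node i (1 at node i, 0 at its neighbours)."""
            values = np.zeros(len(nodes))
            values[i] = 1.0
            return np.interp(x, nodes, values)

        N = np.array([hat(i) for i in range(len(nodes))])
        assert np.allclose(N.sum(axis=0), 1.0)   # the hat functions form a partition of unity

        # Atomic-like enrichment centred on an "atom" at x = 4.3 (illustrative exponential decay)
        psi = np.exp(-2.0 * np.abs(x - 4.3))

        # Enriched basis: the products N_i * psi keep the hats' local support, so known
        # atomic physics is built into the basis while the mesh stays coarse.
        enriched_basis = np.vstack([N, N * psi])
        print(enriched_basis.shape)              # 22 locally supported basis functions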

  8. Properties and reactivity patterns of AsP(3): an experimental and computational study of group 15 elemental molecules.

    PubMed

    Cossairt, Brandi M; Cummins, Christopher C

    2009-10-28

    Facile synthetic access to the isolable, thermally robust AsP(3) molecule has allowed for a thorough study of its physical properties and reaction chemistry with a variety of transition-metal and organic fragments. The electronic properties of AsP(3) in comparison with P(4) are revealed by DFT and atoms in molecules (AIM) approaches and are discussed in relation to the observed electrochemical profiles and the phosphorus NMR properties of the two molecules. An investigation of the nucleus independent chemical shifts revealed that AsP(3) retains spherical aromaticity. The thermodynamic properties of AsP(3) and P(4) are described. The reaction types explored in this study include the thermal decomposition of the AsP(3) tetrahedron to its elements, the synthesis and structural characterization of [(AsP(3))FeCp*(dppe)][BPh(4)] (dppe = 1,2-bis(diphenylphosphino)ethane), 1, selective single As-P bond cleavage reactions, including the synthesis and structural characterization of AsP(3)(P(N((i)Pr)(2))N(SiMe(3))(2))(2), 2, and activations of AsP(3) by reactive early transition-metal fragments including Nb(H)(eta(2)-(t)Bu(H)C=NAr)(N[CH(2)(t)Bu]Ar)(2) and Mo(N[(t)Bu]Ar)(3) (Ar = 3,5-Me(2)C(6)H(3)). In the presence of reducing equivalents, AsP(3) was found to allow access to [Na][E(3)Nb(ODipp)(3)] (Dipp = 2,6-diisopropylphenyl) complexes (E = As or P) which themselves allow access to mixtures of As(n)P(4-n) (n = 1-4).

  9. POTHMF: A program for computing potential curves and matrix elements of the coupled adiabatic radial equations for a hydrogen-like atom in a homogeneous magnetic field

    NASA Astrophysics Data System (ADS)

    Chuluunbaatar, O.; Gusev, A. A.; Gerdt, V. P.; Rostovtsev, V. A.; Vinitsky, S. I.; Abrashkevich, A. G.; Kaschiev, M. S.; Serov, V. V.

    2008-02-01

    A FORTRAN 77 program is presented which calculates, with the relative machine precision, potential curves and matrix elements of the coupled adiabatic radial equations for a hydrogen-like atom in a homogeneous magnetic field. The potential curves are eigenvalues corresponding to the angular oblate spheroidal functions that compose the adiabatic basis which depends on the radial variable as a parameter. The matrix elements of radial coupling are integrals in angular variables of the following two types: product of angular functions and the first derivative of angular functions in the parameter, and product of the first derivatives of angular functions in the parameter, respectively. The program also calculates the angular part of the dipole transition matrix elements (in the length form) expressed as integrals in angular variables involving the product of a dipole operator and angular functions. Moreover, the program calculates asymptotic regular and irregular matrix solutions of the coupled adiabatic radial equations at the end of the interval in the radial variable needed for solving a multi-channel scattering problem by the generalized R-matrix method. Potential curves and radial matrix elements computed by the POTHMF program can be used for solving the bound state and multi-channel scattering problems. As a test case, the program is applied to the calculation of the energy values, a short-range reaction matrix and corresponding wave functions with the help of the KANTBP program. Benchmark calculations for the known photoionization cross-sections are presented. Program summary: Program title: POTHMF. Catalogue identifier: AEAA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 8123. No. of bytes in distributed program, including test data

  10. NORIA-SP: A finite element computer program for analyzing liquid water transport in porous media; Yucca Mountain Site Characterization Project

    SciTech Connect

    Hopkins, P.L.; Eaton, R.R.; Bixler, N.E.

    1991-12-01

    A family of finite element computer programs has been developed at Sandia National Laboratories (SNL), most recently NORIA-SP. The original NORIA code solves a total of four transport equations simultaneously: liquid water, water vapor, air, and energy. Consequently, use of NORIA is computer-intensive. Since many of the applications for which NORIA is used are isothermal, we decided to "strip" the original four-equation version, leaving only the liquid water equation. This single-phase version is NORIA-SP. The primary intent of this document is to provide the user of NORIA-SP an accurate user's manual. Consequently, the reader should refer to the NORIA manual if additional detail is required regarding the equation development and finite element methods used. The single-equation version of the NORIA code (NORIA-SP) has been used most frequently for analyzing various hydrological scenarios for the potential underground nuclear waste repository at Yucca Mountain in western Nevada. These analyses are generally performed assuming a composite model to represent the fractured geologic media. In this model the material characteristics of the matrix and the fractures are area-weighted to obtain equivalent material properties. Pressure equilibrium between the matrix and fractures is assumed so that a single conservation equation can be solved. NORIA-SP is structured to accommodate the composite model. The equations for water velocities in both the rock matrix and the fractures are presented. To use the code for problems involving a single, nonfractured porous material, the user can simply set the area of the fractures to zero.
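
    The composite (equivalent-continuum) model mentioned above amounts to area-weighting the matrix and fracture characteristics; a minimal sketch of that weighting for an equivalent conductivity is shown below. The numbers and the choice of property are illustrative, not values from the NORIA-SP documentation.

        def composite_property(k_matrix, k_fracture, fracture_area_fraction):
            """Area-weighted equivalent property of a fractured porous medium.

            Setting fracture_area_fraction to zero recovers the single, nonfractured
            porous material, as the abstract notes.
            """
            a_f = fracture_area_fraction
            return (1.0 - a_f) * k_matrix + a_f * k_fracture

        # Example: tight matrix with sparse but highly conductive fractures (illustrative values)
        k_eq = composite_property(k_matrix=1.0e-11, k_fracture=1.0e-4, fracture_area_fraction=1.0e-4)
        print(f"equivalent conductivity: {k_eq:.3e} m/s")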

  11. Material Characterization and Geometric Segmentation of a Composite Structure Using Microfocus X-Ray Computed Tomography Image-Based Finite Element Modeling

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Roth, D. J.; Cotton, R.; Studor, George F.; Christiansen, Eric; Young, P. C.

    2011-01-01

    This study utilizes microfocus x-ray computed tomography (CT) slice sets to model and characterize the damage locations and sizes in thermal protection system materials that underwent impact testing. ScanIP/FE software is used to visualize and process the slice sets, followed by mesh generation on the segmented volumetric rendering. Then, the local stress fields around several of the damaged regions are calculated for realistic mission profiles that subject the sample to extreme temperature and other severe environmental conditions. The resulting stress fields are used to quantify damage severity and make an assessment as to whether damage that did not penetrate to the base material can still result in catastrophic failure of the structure. It is expected that this study will demonstrate that finite element modeling based on an accurate three-dimensional rendered model from a series of CT slices is an essential tool to quantify the internal macroscopic defects and damage of a complex system made out of thermal protection material. Results obtained showing details of segmented images; three-dimensional volume-rendered models, finite element meshes generated, and the resulting thermomechanical stress state due to impact loading for the material are presented and discussed. Further, this study is conducted to exhibit certain high-caliber capabilities that the nondestructive evaluation (NDE) group at NASA Glenn Research Center can offer to assist in assessing the structural durability of such highly specialized materials so improvements in their performance and capacities to handle harsh operating conditions can be made.

  12. Development and validation of a computational finite element model of the rabbit upper airway: simulations of mandibular advancement and tracheal displacement.

    PubMed

    Amatoury, Jason; Cheng, Shaokoon; Kairaitis, Kristina; Wheatley, John R; Amis, Terence C; Bilston, Lynne E

    2016-04-01

    The mechanisms leading to upper airway (UA) collapse during sleep are complex and poorly understood. We previously developed an anesthetized rabbit model for studying UA physiology. On the basis of this body of physiological data, we aimed to develop and validate a two-dimensional (2D) computational finite element model (FEM) of the passive rabbit UA and peripharyngeal tissues. Model geometry was reconstructed from a midsagittal computed tomographic image of a representative New Zealand White rabbit, which included major soft (tongue, soft palate, constrictor muscles), cartilaginous (epiglottis, thyroid cartilage), and bony pharyngeal tissues (mandible, hard palate, hyoid bone). Other UA muscles were modeled as linear elastic connections. Initial boundary and contact definitions were defined from anatomy and material properties derived from the literature. Model parameters were optimized to physiological data sets associated with mandibular advancement (MA) and caudal tracheal displacement (TD), including hyoid displacement, which featured with both applied loads. The model was then validated against independent data sets involving combined MA and TD. Model outputs included UA lumen geometry, peripharyngeal tissue displacement, and stress and strain distributions. Simulated MA and TD resulted in UA enlargement and nonuniform increases in tissue displacement, and stress and strain. Model predictions closely agreed with experimental data for individually applied MA, TD, and their combination. We have developed and validated an FEM of the rabbit UA that predicts UA geometry and peripharyngeal tissue mechanical changes associated with interventions known to improve UA patency. The model has the potential to advance our understanding of UA physiology and peripharyngeal tissue mechanics.

  13. Flutter: A finite element program for aerodynamic instability analysis of general shells of revolution with thermal prestress

    NASA Technical Reports Server (NTRS)

    Fallon, D. J.; Thornton, E. A.

    1983-01-01

    Documentation for the computer program FLUTTER is presented. The theory of aerodynamic instability with thermal prestress is discussed. Theoretical aspects of the finite element matrices required in the aerodynamic instability analysis are also discussed. General organization of the computer program is explained, and instructions are then presented for the execution of the program.

  14. Three-Dimensional Finite Element Magnetic Field Computations and Performance Simulation of Brushless DC Motor Drives with Skewed Permanent Magnet Mounts.

    NASA Astrophysics Data System (ADS)

    Alhamadi, Mohd A. Wahed

    1992-01-01

    A three-dimensional finite element (3D-FE) method for the computation of global distributions of 3D magnetic fields in electric machines containing permanent magnets is presented. The formulation of this 3D-FE method is based on a coupled magnetic vector potential - magnetic scalar potential (CMVP-MSP) approach. In this CMVP-MSP method, the modeling and formulations of permanent magnet volumes, suited to first and second order MVP 3D-FE environments as well as a first order MSP 3D-FE environment, are developed in this dissertation. The development of the necessary 3D-FE grids and algorithms for the application of the CMVP-MSP method to an example brushless dc motor, whose field is three dimensional due to the skewed permanent magnet mounts on its rotor, is also given here. It should be mentioned that the entire volume of the case-study machine from one end to another is considered in the global magnetic field computations. A complete set of results of the application of the CMVP-MSP method to the computation of the global 3D field distributions and associated motor parameters under no-load and load conditions is presented in this dissertation. In addition, a complete simulation of the dynamic performance of the motor drive system using the parameters obtained from the 3D-FE field solutions is presented for no-load and various other load conditions. All the above mentioned results are experimentally verified by corresponding oscillograms obtained in the laboratory. These results are also compared with results obtained from motor parameters based on various 2D-FE approaches, showing that for certain types of skewed permanent magnet mounts, 3D-FE based parameters can make significant qualitative and quantitative improvements in motor-drive simulation results.

  15. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    NASA Astrophysics Data System (ADS)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

    We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter jacobians, and model update. The forward simulator, jacobians calculations, as well as synthetic and real data inversion are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, correction may be applied after the forward solution is calculated. It allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the jacobians of field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, MT forward response and jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented data space Gauss-Newton method, which offers reduction in memory requirement and a significant speedup of the parameter step versus model space approach. For dense matrix operations we use tiling approach of PLASMA library, which shows very good scalability. In synthetic

  16. Synthesis, spectroscopic, cytotoxic aspects and computational study of N-(pyridine-2-ylmethylene)benzo[d]thiazol-2-amine Schiff base and some of its transition metal complexes

    NASA Astrophysics Data System (ADS)

    Abd El-Aziz, Dina M.; Etaiw, Safaa Eldin H.; Ali, Elham A.

    2013-09-01

    N-(pyridine-2-ylmethylene)benzo[d]thiazol-2-amine Schiff base (L) and its Cu(II), Fe(III), Co(II), Ni(II) and Zn(II) complexes were synthesized and characterized by a set of chemical and spectroscopic measurements using elemental analysis, electrical conductance, mass spectra, magnetic susceptibility and spectral techniques (IR, UV-Vis, 1H NMR). Elemental and mass spectrometric data are consistent with the proposed formula. IR spectra confirm the bidentate nature of the Schiff base ligand. The octahedral geometry around Cu(II), Fe(III), Ni(II) and Zn(II) as well as the tetrahedral geometry around Co(II) were suggested by the UV-Vis spectra and magnetic moment data. The thermal degradation behavior of the Schiff base and its complexes was investigated by thermogravimetric analysis. The structure of the Schiff base and its transition metal complexes was also studied theoretically using molecular mechanics (MM+). The obtained structures were minimized with a semi-empirical (PM3) method. The in vitro antitumor activity of the synthesized compounds was studied. The Zn complex exhibits a significant decrease in the surviving fraction of the breast carcinoma (MCF 7), liver carcinoma (HEPG2), colon carcinoma (HCT116) and larynx carcinoma (HEP2) human cancer cell lines.

  17. Regulatory aspects

    NASA Astrophysics Data System (ADS)

    Stern, Arthur M.

    1986-07-01

    At this time, there is no US legislation that is specifically aimed at regulating the environmental release of genetically engineered organisms or their modified components, either during the research and development stage or during application. There are some statutes, administered by several federal agencies, whose language is broad enough to allow the extension of intended coverage to include certain aspects of biotechnology. The one possible exception is FIFRA, which has already brought about the registration of several natural microbial pesticides but which also has provision for requiring the registration of “strain improved” microbial pesticides. Nevertheless, there may be gaps in coverage even if all pertinent statutes were to be actively applied to the control of environmental release of genetically modified substances. The decision to regulate biotechnology under TSCA was justified, in part, on the basis of its intended role as a gap-filling piece of environmental legislation. The advantage of regulating biotechnology under TSCA is that this statute, unlike others, is concerned with all media of exposure (air, water, soil, sediment, biota) that may pose health and environmental hazards. Experience may show that extending existing legislation to regulate biotechnology is a poor compromise compared to the promulgation of new legislation specifically designed for this purpose. It appears that many other countries are ultimately going to take the latter course to regulate biotechnology.

  18. Patient-specific finite element modeling of bones.

    PubMed

    Poelert, Sander; Valstar, Edward; Weinans, Harrie; Zadpoor, Amir A

    2013-04-01

    Finite element modeling is an engineering tool for structural analysis that has been used for many years to assess the relationship between load transfer and bone morphology and to optimize the design and fixation of orthopedic implants. Due to recent developments in finite element model generation, for example, improved computed tomography imaging quality, improved segmentation algorithms, and faster computers, the accuracy of finite element modeling has increased vastly and finite element models simulating the anatomy and properties of an individual patient can be constructed. Such so-called patient-specific finite element models are potentially valuable tools for orthopedic surgeons in fracture risk assessment or pre- and intraoperative planning of implant placement. The aim of this article is to provide a critical overview of current themes in patient-specific finite element modeling of bones. In addition, the state-of-the-art in patient-specific modeling of bones is compared with the requirements for a clinically applicable patient-specific finite element method, and judgment is passed on the feasibility of application of patient-specific finite element modeling as a part of clinical orthopedic routine. It is concluded that further development in certain aspects of patient-specific finite element modeling is needed before finite element modeling can be used as a routine clinical tool.

  19. Recent advances in the application of computer-controlled optical finishing to produce very high-quality transmissive optical elements and windows

    NASA Astrophysics Data System (ADS)

    Askinazi, Joel; Estrin, Aleksandr; Green, Alan; Turner, Aaron N.

    2003-09-01

    Large aperture (20-inch diameter) sapphire optical windows have been identified as a key element of new and/or upgraded airborne electro-optical systems. These windows typically require a transmitted wave front error of much less than 0.1 waves rms @ 0.63 microns over 7 inch diameter sub-apertures. Large aperture (14-inch diameter by 4-inch thick) sapphire substrates have also been identified as a key optical element of the Laser Interferometer Gravitational Wave Observatory (LIGO). This project is under joint development by the California Institute of Technology (Caltech) and the Massachusetts Institute of Technology under cooperative agreement with the National Science foundation (NSF). These substrates are required to have a transmitted wave front error of 20 nm (0.032 waves) rms @ 0.63 microns over 6-inch sub-apertures with a desired error of 10 nm (0.016 waves) rms. Owing to the spatial variations in the optical index of refraction potentially anticipated within 20-inch diameter sapphire, thin (0.25 - 0.5-inch) window substrates, as well as within the 14-inch diameter by 4-inch thick substrates for the LIGO application, our experience tells us that the required transmitted wave front errors can not be achieved with standard optical finishing techniques as they can not readily compensate for errors introduced by inherent material characteristics. Computer controlled optical finishing has been identified as a key technology likely required to enable achievement of the required transmitted wave front errors. Goodrich has developed this technology and has previously applied it to finish high quality sapphire optical windows with a range of aperture sizes from 4-inch to 13-inch to achieve transmitted wavefront errors comparable to these new requirements. This paper addresses successful recent developments and accomplishments in the application of this optical finishing technology to sequentially larger aperture and thicker sapphire windows to achieve the

  20. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
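
    The economy of the data-space approach noted above comes from solving a system sized by the number of data instead of the number of model cells. Under simple Tikhonov damping the two forms give identical steps, which the small dense sketch below verifies; it is a generic illustration, not the HexMT update with its gradient-norm regularization.

        import numpy as np

        rng = np.random.default_rng(0)
        n_data, n_model = 100, 2000                    # far more model cells than data, as in 3-D MT
        J = rng.standard_normal((n_data, n_model))     # stand-in for the sensitivity (Jacobian) matrix
        r = rng.standard_normal(n_data)                # data residual at the current model
        lam = 1.0                                      # damping weight

        # Model-space Gauss-Newton step: an n_model x n_model solve
        dm_model = np.linalg.solve(J.T @ J + lam * np.eye(n_model), J.T @ r)

        # Data-space form of the same step: only an n_data x n_data solve
        beta = np.linalg.solve(J @ J.T + lam * np.eye(n_data), r)
        dm_data = J.T @ beta

        assert np.allclose(dm_model, dm_data)          # identical steps, very different costs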

  1. A hyperelastic biphasic fibre-reinforced model of articular cartilage considering distributed collagen fibre orientations: continuum basis, computational aspects and applications.

    PubMed

    Pierce, David M; Ricken, Tim; Holzapfel, Gerhard A

    2013-01-01

    Cartilage is a multi-phase material composed of fluid and electrolytes (68-85% by wet weight), proteoglycans (5-10% by wet weight), chondrocytes, collagen fibres and other glycoproteins. The solid phase constitutes an isotropic proteoglycan gel and a fibre network of predominantly type II collagen, which provides tensile strength and mechanical stiffness. The same two components control diffusion of the fluid phase, e.g. as visualised by diffusion tensor MRI: (i) the proteoglycan gel (giving a baseline isotropic diffusivity) and (ii) the highly anisotropic collagenous fibre network. We propose a new constitutive model and finite element implementation that focus on the essential load-bearing morphology: an incompressible, poroelastic solid matrix reinforced by an inhomogeneous, dispersed fibre fabric, which is saturated with an incompressible fluid residing in strain-dependent pores of the collagen-proteoglycan solid matrix. The inhomogeneous, dispersed fibre fabric of the solid further influences the fluid permeability, as well as an intrafibrillar portion that cannot be 'squeezed out' from the tissue. Using representative numerical examples on the mechanical response of cartilage, we reproduce several features that have been demonstrated experimentally in the cartilage mechanics literature.

  2. Computer Lab Configuration.

    ERIC Educational Resources Information Center

    Wodarz, Nan

    2003-01-01

    Describes the layout and elements of an effective school computer lab. Includes configuration, storage spaces, cabling and electrical requirements, lighting, furniture, and computer hardware and peripherals. (PKP)

  3. Identifying Structure-Property Relationships Through DREAM.3D Representative Volume Elements and DAMASK Crystal Plasticity Simulations: An Integrated Computational Materials Engineering Approach

    NASA Astrophysics Data System (ADS)

    Diehl, Martin; Groeber, Michael; Haase, Christian; Molodov, Dmitri A.; Roters, Franz; Raabe, Dierk

    2017-03-01

    Predicting, understanding, and controlling the mechanical behavior is the most important task when designing structural materials. Modern alloy systems—in which multiple deformation mechanisms, phases, and defects are introduced to overcome the inverse strength-ductility relationship—give rise to multiple possibilities for modifying the deformation behavior, rendering traditional, exclusively experimentally-based alloy development workflows inappropriate. For fast and efficient alloy design, it is therefore desirable to predict the mechanical performance of candidate alloys by simulation studies to replace time- and resource-consuming mechanical tests. Simulation tools suitable for this task need to correctly predict the mechanical behavior in dependence of alloy composition, microstructure, texture, phase fractions, and processing history. Here, an integrated computational materials engineering approach based on the open source software packages DREAM.3D and DAMASK (Düsseldorf Advanced Materials Simulation Kit) that enables such virtual material development is presented. More specifically, our approach consists of the following three steps: (1) acquire statistical quantities that describe a microstructure, (2) build a representative volume element based on these quantities employing DREAM.3D, and (3) evaluate the representative volume using a predictive crystal plasticity material model provided by DAMASK. Exemplarily, these steps are here conducted for a high-manganese steel.

  4. VIBA-Lab 3.0: Computer program for simulation and semi-quantitative analysis of PIXE and RBS spectra and 2D elemental maps

    NASA Astrophysics Data System (ADS)

    Orlić, Ivica; Mekterović, Darko; Mekterović, Igor; Ivošević, Tatjana

    2015-11-01

    VIBA-Lab is a computer program originally developed by the author and co-workers at the National University of Singapore (NUS) as an interactive software package for simulation of Particle Induced X-ray Emission and Rutherford Backscattering Spectra. The original program has been redeveloped into VIBA-Lab 3.0, in which the user can perform semi-quantitative analysis by comparing simulated and measured spectra as well as simulate 2D elemental maps for a given 3D sample composition. The latest version has a new and more versatile user interface. It also has the latest data set of fundamental parameters such as Coster-Kronig transition rates, fluorescence yields, mass absorption coefficients and ionization cross sections for K and L lines in a wider energy range than the original program. Our short-term plan is to introduce a routine for quantitative analysis of multiple PIXE and XRF excitations. VIBA-Lab is an excellent teaching tool for students and researchers in using PIXE and RBS techniques. At the same time the program helps when planning an experiment and when optimizing experimental parameters such as incident ions, their energy, detector specifications, filters, geometry, etc. By "running" a virtual experiment the user can test various scenarios until the optimal PIXE and BS spectra are obtained and in this way save a lot of expensive machine time.

  5. Curved Beam Computed Tomography based Structural Rigidity Analysis of Bones with Simulated Lytic Defect: A Comparative Study with Finite Element Analysis

    PubMed Central

    Oftadeh, R.; Karimi, Z.; Villa-Camacho, J.; Tanck, E.; Verdonschot, N.; Goebel, R.; Snyder, B. D.; Hashemi, H. N.; Vaziri, A.; Nazarian, A.

    2016-01-01

    In this paper, a CT-based structural rigidity analysis (CTRA) method that incorporates bone intrinsic local curvature is introduced to assess the compressive failure load of the human femur with simulated lytic defects. The proposed CTRA is based on a three-dimensional curved beam theory to obtain critical stresses within the human femur model. To test the proposed method, ten human cadaveric femurs with and without simulated defects were mechanically tested under axial compression to failure. Quantitative computed tomography images were acquired from the samples, and CTRA and finite element analysis were performed to obtain the failure load as well as rigidities in both straight and curved cross sections. Experimental results were compared to the results obtained from FEA and CTRA. The failure loads predicted by curved beam CTRA and FEA are in agreement with experimental results. The results also show that the proposed method is an efficient and reliable method to find both the location and magnitude of the failure load. Moreover, the results show that the proposed curved CTRA outperforms the regular straight beam CTRA, which ignores the bone intrinsic curvature, and can be used as a useful tool in clinical practice. PMID:27585495

  6. Three-dimensional finite element analysis of unilateral mastication in malocclusion cases using cone-beam computed tomography and a motion capture system

    PubMed Central

    2016-01-01

    Purpose Stress distribution and mandible distortion during lateral movements are known to be closely linked to bruxism, dental implant placement, and temporomandibular joint disorder. The present study was performed to determine stress distribution and distortion patterns of the mandible during lateral movements in Class I, II, and III relationships. Methods Five Korean volunteers (one normal, two Class II, and two Class III occlusion cases) were selected. Finite element (FE) modeling was performed using information from cone-beam computed tomographic (CBCT) scans of the subjects’ skulls, scanned images of dental casts, and incisor movement captured by an optical motion-capture system. Results In the Class I and II cases, maximum stress load occurred at the condyle of the balancing side, but, in the Class III cases, the maximum stress was loaded on the condyle of the working side. Maximum distortion was observed on the menton at the midline in every case, regardless of loading force. The distortion was greatest in Class III cases and smallest in Class II cases. Conclusions The stress distribution along and accompanying distortion of a mandible seems to be affected by the anteroposterior position of the mandible. Additionally, 3-D modeling of the craniofacial skeleton using CBCT and an optical laser scanner and reproduction of mandibular movement by way of the optical motion-capture technique used in this study are reliable techniques for investigating the masticatory system. PMID:27127690

  7. Aspects of Plant Intelligence

    PubMed Central

    TREWAVAS, ANTHONY

    2003-01-01

    Intelligence is not a term commonly used when plants are discussed. However, I believe that this is an omission based not on a true assessment of the ability of plants to compute complex aspects of their environment, but solely on a reflection of a sessile lifestyle. This article, which is admittedly controversial, attempts to raise many issues that surround this area. To commence use of the term intelligence with regard to plant behaviour will lead to a better understanding of the complexity of plant signal transduction and the discrimination and sensitivity with which plants construct images of their environment, and raises critical questions concerning how plants compute responses at the whole‐plant level. Approaches to investigating learning and memory in plants will also be considered. PMID:12740212

  8. Revolution in Orthodontics: Finite element analysis

    PubMed Central

    Singh, Johar Rajvinder; Kambalyal, Prabhuraj; Jain, Megha; Khandelwal, Piyush

    2016-01-01

    Engineering has not only developed in the field of medicine but has also become quite established in the field of dentistry, especially Orthodontics. Finite element analysis (FEA) is a computational procedure for calculating the stresses in the elements of a model. This structural analysis allows the determination of stress resulting from external force, pressure, thermal change, and other factors. This method is extremely useful for indicating mechanical aspects of biomaterials and human tissues that can hardly be measured in vivo. The results obtained can then be studied using visualization software within the finite element method (FEM) to view a variety of parameters, and to fully identify implications of the analysis. This is a review to show the applications of FEM in Orthodontics. It is extremely important to verify what the purpose of the study is in order to correctly apply FEM. PMID:27114948

  9. The individual element test revisited

    NASA Technical Reports Server (NTRS)

    Militello, Carmelo; Felippa, Carlos A.

    1991-01-01

    The subject of the patch test for finite elements retains several unsettled aspects. In particular, the issue of one-element versus multielement tests needs clarification. Following a brief historical review, we present the individual element test (IET) of Bergan and Hanssen in an expanded context that encompasses several important classes of new elements. The relationship of the IET to the multielement forms A, B, and C of the patch test and to the single element test is clarified.

  10. Design of microstrip components by computer

    NASA Technical Reports Server (NTRS)

    Cisco, T. C.

    1972-01-01

    A number of computer programs are presented for use in the synthesis of microwave components in microstrip geometries. The programs compute the electrical and dimensional parameters required to synthesize couplers, filters, circulators, transformers, power splitters, diode switches, multipliers, diode attenuators and phase shifters. Additional programs are included to analyze and optimize cascaded transmission lines and lumped element networks, to analyze and synthesize Chebyshev and Butterworth filter prototypes, and to compute mixer intermodulation products. The programs are written in FORTRAN and the emphasis of the study is placed on the use of these programs and not on the theoretical aspects of the structures.

  11. Exercises in molecular computing.

    PubMed

    Stojanovic, Milan N; Stefanovic, Darko; Rudchenko, Sergei

    2014-06-17

    CONSPECTUS: The successes of electronic digital logic have transformed every aspect of human life over the last half-century. The word "computer" now signifies a ubiquitous electronic device, rather than a human occupation. Yet evidently humans, large assemblies of molecules, can compute, and it has been a thrilling challenge to develop smaller, simpler, synthetic assemblies of molecules that can do useful computation. When we say that molecules compute, what we usually mean is that such molecules respond to certain inputs, for example, the presence or absence of other molecules, in a precisely defined but potentially complex fashion. The simplest way for a chemist to think about computing molecules is as sensors that can integrate the presence or absence of multiple analytes into a change in a single reporting property. Here we review several forms of molecular computing developed in our laboratories. When we began our work, combinatorial approaches to using DNA for computing were used to search for solutions to constraint satisfaction problems. We chose to work instead on logic circuits, building bottom-up from units based on catalytic nucleic acids, focusing on DNA secondary structures in the design of individual circuit elements, and reserving the combinatorial opportunities of DNA for the representation of multiple signals propagating in a large circuit. Such circuit design directly corresponds to the intuition about sensors transforming the detection of analytes into reporting properties. While this approach was unusual at the time, it has been adopted since by other groups working on biomolecular computing with different nucleic acid chemistries. We created logic gates by modularly combining deoxyribozymes (DNA-based enzymes cleaving or combining other oligonucleotides), in the role of reporting elements, with stem-loops as input detection elements. For instance, a deoxyribozyme that normally exhibits an oligonucleotide substrate recognition region is

  12. RETSCP: A computer program for analysis of rocket engine thermal strains with cyclic plasticity

    NASA Technical Reports Server (NTRS)

    Miller, R. W.

    1974-01-01

    A computer program, designated RETSCP, for the analysis of Rocket Engine Thermal Strain with Cyclic Plasticity is described. RETSCP is a finite element program which employs a three dimensional isoparametric element. The program treats elasto-plastic strain cycling including the effects of thermal and pressure loads and temperature dependent material properties. Theoretical aspects of the finite element method are discussed and the program logic is described. A RETSCP User's Manual is presented including sample case results.

  13. Exercises in Molecular Computing

    PubMed Central

    2014-01-01

    Conspectus The successes of electronic digital logic have transformed every aspect of human life over the last half-century. The word “computer” now signifies a ubiquitous electronic device, rather than a human occupation. Yet evidently humans, large assemblies of molecules, can compute, and it has been a thrilling challenge to develop smaller, simpler, synthetic assemblies of molecules that can do useful computation. When we say that molecules compute, what we usually mean is that such molecules respond to certain inputs, for example, the presence or absence of other molecules, in a precisely defined but potentially complex fashion. The simplest way for a chemist to think about computing molecules is as sensors that can integrate the presence or absence of multiple analytes into a change in a single reporting property. Here we review several forms of molecular computing developed in our laboratories. When we began our work, combinatorial approaches to using DNA for computing were used to search for solutions to constraint satisfaction problems. We chose to work instead on logic circuits, building bottom-up from units based on catalytic nucleic acids, focusing on DNA secondary structures in the design of individual circuit elements, and reserving the combinatorial opportunities of DNA for the representation of multiple signals propagating in a large circuit. Such circuit design directly corresponds to the intuition about sensors transforming the detection of analytes into reporting properties. While this approach was unusual at the time, it has been adopted since by other groups working on biomolecular computing with different nucleic acid chemistries. We created logic gates by modularly combining deoxyribozymes (DNA-based enzymes cleaving or combining other oligonucleotides), in the role of reporting elements, with stem–loops as input detection elements. For instance, a deoxyribozyme that normally exhibits an oligonucleotide substrate recognition region is

  14. Connectivity Measures in EEG Microstructural Sleep Elements

    PubMed Central

    Sakellariou, Dimitris; Koupparis, Andreas M.; Kokkinos, Vasileios; Koutroumanidis, Michalis; Kostopoulos, George K.

    2016-01-01

    During Non-Rapid Eye Movement sleep (NREM) the brain is relatively disconnected from the environment, while connectedness between brain areas is also decreased. Evidence indicates that these dynamic connectivity changes are delivered by microstructural elements of sleep: short periods of environmental stimuli evaluation followed by sleep promoting procedures. The connectivity patterns of the latter, among other aspects of sleep microstructure, are still to be fully elucidated. We suggest here a methodology for the assessment and investigation of the connectivity patterns of EEG microstructural elements, such as sleep spindles. The methodology combines techniques at the preprocessing, estimation, error-assessment and results-visualization levels in order to allow detailed examination of the connectivity aspects (levels and directionality of information flow) over frequency and time with notable resolution, while dealing with volume conduction and EEG reference assessment. The high temporal and frequency resolution of the methodology will allow the association between the microelements and the dynamically forming networks that characterize them, and consequently possibly reveal aspects of the EEG microstructure. The proposed methodology is initially tested on artificially generated signals for proof of concept and subsequently applied to real EEG recordings via a custom built MATLAB-based tool developed for such studies. Preliminary results from 843 fast sleep spindles recorded in whole night sleep of 5 healthy volunteers indicate a prevailing pattern of interactions between centroparietal and frontal regions. We hereby demonstrate what is, to our knowledge, an opening attempt to estimate the scalp EEG connectivity that characterizes fast sleep spindles via an “EEG-element connectivity” methodology we propose. The application of the latter, via a computational tool we developed, suggests that it is able to investigate the connectivity patterns related to the

  15. Chemistry of superheavy elements.

    PubMed

    Schädel, Matthias

    2006-01-09

    The number of chemical elements has increased considerably in the last few decades. Most excitingly, these heaviest, man-made elements at the far-end of the Periodic Table are located in the area of the long-awaited superheavy elements. While physical techniques currently play a leading role in these discoveries, the chemistry of superheavy elements is now beginning to be developed. Advanced and very sensitive techniques allow the chemical properties of these elusive elements to be probed. Often, less than ten short-lived atoms, chemically separated one-atom-at-a-time, provide crucial information on basic chemical properties. These results place the architecture of the far-end of the Periodic Table on the test bench and probe the increasingly strong relativistic effects that influence the chemical properties there. This review is focused mainly on the experimental work on superheavy element chemistry. It contains a short contribution on relativistic theory, and some important historical and nuclear aspects.

  16. Subversion: The Neglected Aspect of Computer Security.

    DTIC Science & Technology

    1980-06-01


  17. Designing a perfect cornea: computational aspects

    NASA Astrophysics Data System (ADS)

    Rubinstein, Jacob; Wolansky, Gershon

    2002-12-01

    We analyze an algorithm for the design of a perfect cornea that exactly focuses a preselected object or a preselected wave front on the retina. The algorithm can be used, for example, in refractive surgery. We consider the sensitivity of the algorithm to various errors, including errors in the measurements of the aberrations, the original corneal topography and the ablation process.

  18. Computational aspects of pseudospectral Laguerre approximations

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele

    1989-01-01

    Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. Introduced are a scaling function and appropriate numerical procedures in order to limit these unpleasant phenomena.

  19. Optimal mapping of irregular finite element domains to parallel processors

    NASA Technical Reports Server (NTRS)

    Flower, J.; Otto, S.; Salama, M.

    1987-01-01

    A mapping of the solution domain of n finite elements onto N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
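
    A minimal sketch of the simulated-annealing mapping idea, under assumptions: the cost is taken as workload imbalance plus the number of element adjacencies cut by the partition, the element adjacency graph is supplied by the caller, and the annealing schedule parameters are illustrative rather than the paper's settings.

    import math
    import random

    def anneal_mapping(adjacency, n_procs, steps=20000, t0=2.0, cooling=0.9995):
        """adjacency: dict {element: set of neighboring elements} (symmetric)."""
        elems = list(adjacency)
        assign = {e: random.randrange(n_procs) for e in elems}

        def cost(a):
            loads = [0] * n_procs
            for e in elems:
                loads[a[e]] += 1
            imbalance = max(loads) - min(loads)            # workload imbalance
            cut = sum(1 for e in elems for nb in adjacency[e] if a[e] != a[nb]) // 2
            return imbalance + cut                         # proxy for communication cost

        current, temp = cost(assign), t0
        for _ in range(steps):
            e = random.choice(elems)
            old = assign[e]
            assign[e] = random.randrange(n_procs)
            new = cost(assign)
            if new > current and random.random() >= math.exp((current - new) / temp):
                assign[e] = old                            # reject uphill move
            else:
                current = new                              # accept move
            temp *= cooling
        return assign, current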

  20. Aspects, Wrappers and Events

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.

    2003-01-01

    This viewgraph presentation provides information on Object Infrastructure Framework (OIF), an Aspect-Oriented Programming (AOP) system. The presentation begins with an introduction to the difficulties and requirements of distributed computing, including functional and non-functional requirements (ilities). The architecture of Distributed Object Technology includes stubs, proxies for implementation objects, and skeletons, proxies for client applications. The key OIF ideas (injecting behavior, annotated communications, thread contexts, and pragma) are discussed. OIF is an AOP mechanism; AOP is centered on: 1) Separate expression of crosscutting concerns; 2) Mechanisms to weave the separate expressions into a unified system. AOP is software engineering technology for separately expressing systematic properties while nevertheless producing running systems that embody these properties.

  1. Polymorphic nodal elements and their application in discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Gassner, Gregor J.; Lörcher, Frieder; Munz, Claus-Dieter; Hesthaven, Jan S.

    2009-03-01

    In this work, we discuss two different but related aspects of the development of efficient discontinuous Galerkin methods on hybrid element grids for the computational modeling of gas dynamics in complex geometries or with adapted grids. In the first part, a recursive construction of different nodal sets for hp finite elements is presented. They share the property that the nodes along the sides of the two-dimensional elements and along the edges of the three-dimensional elements are the Legendre-Gauss-Lobatto points. The different nodal elements are evaluated by computing the Lebesgue constants of the corresponding Vandermonde matrix. In the second part, these nodal elements are applied within the modal discontinuous Galerkin framework. We still use a modal based formulation, but introduce a nodal based integration technique to reduce computational cost in the spirit of pseudospectral methods. We illustrate the performance of the scheme on several large scale applications and discuss its use in a recently developed space-time expansion discontinuous Galerkin scheme.

  2. Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Thomas, James

    2008-01-01

    Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly-available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
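
    As a hedged illustration of the reconstruction setting (the plain unweighted least-squares variant, not the paper's approximate-mapped-least-square method), the gradient at a point can be recovered from differences to its stencil neighbors; the stencil and test function below are assumptions.

    import numpy as np

    def ls_gradient(x0, xs, u0, us):
        """Least-squares gradient at x0 from neighbor coordinates xs and values us."""
        dX = np.asarray(xs, float) - np.asarray(x0, float)   # displacement vectors
        du = np.asarray(us, float) - u0                      # value differences
        grad, *_ = np.linalg.lstsq(dX, du, rcond=None)       # solve dX @ grad ~= du
        return grad

    # High-aspect-ratio stencil (dy << dx) with a linear test function u = 3x + 2y
    u = lambda x, y: 3.0 * x + 2.0 * y
    xs = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1e-3), (0.0, -1e-3)]
    print(ls_gradient((0.0, 0.0), xs, u(0.0, 0.0), [u(*p) for p in xs]))  # ~[3. 2.]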

  3. Assignment Of Finite Elements To Parallel Processors

    NASA Technical Reports Server (NTRS)

    Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.

    1990-01-01

    Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.

  4. Aspect-Oriented Subprogram Synthesizes UML Sequence Diagrams

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2006-01-01

    The Rational Sequence computer program described elsewhere includes a subprogram that utilizes the capability for aspect-oriented programming when that capability is present. This subprogram is denoted the Rational Sequence (AspectJ) component because it uses AspectJ, which is an extension of the Java programming language that introduces aspect-oriented programming techniques into the language.

  5. Trends in Computational Science Education

    NASA Astrophysics Data System (ADS)

    Landau, Rubin

    2002-08-01

    Education in computational science and engineering (CSE) has evolved through a number of stages, from recognition in the 1980s to its present early growth. Now a number of courses and degree programs are being designed and implemented at both the graduate and undergraduate levels, and students are beginning to receive degrees. This talk will discuss various aspects of this development, including the impact on faculty and students, the nature of the job market, the intellectual content of CSE education, and the types of programs and degrees now being offered. Analytic comparisons will be made between the content of Physics degrees versus those of other disciplines, and reasons for changes should be apparent. This talk is based on the papers "Elements of Computational Science Education" by Osman Yasar and Rubin Landau, and "Computational Science Education" by Charles Swanson.

  6. Dedicated finite elements for electrode thin films on quartz resonators.

    PubMed

    Srivastava, Sonal A; Yong, Yook-Kong; Tanaka, Masako; Imai, Tsutomu

    2008-08-01

    The accuracy of the finite element analysis for thickness shear quartz resonators is a function of the mesh resolution; the finer the mesh resolution, the more accurate the finite element solution. A certain minimum number of elements are required in each direction for the solution to converge. This places a high demand on memory for computation, and often the available memory is insufficient. Typically the thickness of the electrode films is very small compared with the thickness of the resonator itself; as a result, electrode elements have very poor aspect ratios, and this is detrimental to the accuracy of the result. In this paper, we propose special methods to model the electrodes at the crystal interface of an AT cut crystal. This reduces the overall problem size and eliminates electrode elements having poor aspect ratios. First, experimental data are presented to demonstrate the effects of electrode film boundary conditions on the frequency-temperature curves of an AT cut plate. Finite element analysis is performed on a mesh representing the resonator, and the results are compared for testing the accuracy of the analysis itself and thus validating the results of analysis. Approximations such as lumping and Guyan reduction are then used to model the electrode thin films at the electrode interface and their results are studied. In addition, a new approximation called merging is proposed to model electrodes at the electrode interface.

  7. Thermodynamic aspects of therapeutic hypothermia.

    PubMed

    Vanlandingham, Sean C; Kurz, Michael C; Wang, Henry E

    2015-01-01

    Therapeutic hypothermia (TH) is an important treatment for post-cardiac arrest syndrome. Despite its widespread practice, only limited data describe the thermodynamic aspects of heat transfer during TH. This paper reviews the principles of human body heat balance and provides a conceptual model for characterizing heat exchange during TH. The model may provide a framework for computer simulation for improving training in or clinical methods of TH.
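
    A crude lumped heat-balance sketch, offered only to make the conceptual framework concrete; the single-compartment assumption and the metabolic and cooling power values are illustrative, not the paper's model.

    def core_temperature(t_minutes, T0=37.0, mass=80.0, c_body=3500.0,
                         q_metabolic=80.0, q_cooling=300.0, dt=1.0):
        """Integrate m*c*dT/dt = q_metabolic - q_cooling (both in watts) over minutes."""
        T = T0
        for _ in range(int(t_minutes / dt)):
            T += (q_metabolic - q_cooling) * (dt * 60.0) / (mass * c_body)
        return T

    # A net extraction of 220 W cools an 80 kg body by roughly 1 degree C in ~21 minutes.
    print(core_temperature(30))   # ~35.6 degrees C after 30 minutes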

  8. Legal aspects of satellite teleconferencing

    NASA Technical Reports Server (NTRS)

    Smith, D. D.

    1971-01-01

    The application of satellite communications for teleconferencing purposes is discussed. The legal framework within which such a system or series of systems could be developed is considered. The analysis is based on: (1) satellite teleconferencing regulation, (2) the options available for such a system, (3) regulatory alternatives, and (4) ownership and management aspects. The system is designed to provide a capability for professional education, remote medical diagnosis, business conferences, and computer techniques.

  9. Cloud-free resolution element statistics program

    NASA Technical Reports Server (NTRS)

    Liley, B.; Martin, C. D.

    1971-01-01

    Computer program computes number of cloud-free elements in field-of-view and percentage of total field-of-view occupied by clouds. Human error inherent in visual estimation of cloud statistics from aerial photographs is eliminated.

  10. Defining Elemental Imitation Mechanisms: A Comparison of Cognitive and Motor-Spatial Imitation Learning across Object- and Computer-Based Tasks

    ERIC Educational Resources Information Center

    Subiaul, Francys; Zimmermann, Laura; Renner, Elizabeth; Schilder, Brian; Barr, Rachel

    2016-01-01

    During the first 5 years of life, the versatility, breadth, and fidelity with which children imitate change dramatically. Currently, there is no model to explain what underlies such significant changes. To that end, the present study examined whether a task-independent but domain-specific--elemental--imitation mechanism explains performance across…

  11. French Computer Terminology.

    ERIC Educational Resources Information Center

    Gray, Eugene F.

    1985-01-01

    Characteristics, idiosyncrasies, borrowings, and other aspects of the French terminology for computers and computer-related matters are discussed and placed in the context of French computer use. A glossary provides French equivalent terms or translations of English computer terminology. (MSE)

  12. Proceedings of transuranium elements

    SciTech Connect

    Not Available

    1992-01-01

    The identification of the first synthetic elements was established by chemical evidence. Conclusive proof of the synthesis of the first artificial element, technetium, was published in 1937 by Perrier and Segre. An essential aspect of their achievement was the prediction of the chemical properties of element 43, which had been missing from the periodic table and which was expected to have properties similar to those of manganese and rhenium. The discovery of other artificial elements, astatine and francium, was facilitated in 1939-1940 by the prediction of their chemical properties. A little more than 50 years ago, in the spring of 1940, Edwin McMillan and Philip Abelson synthesized element 93, neptunium, and confirmed its uniqueness by chemical means. On August 30, 1940, Glenn Seaborg, Arthur Wahl, and the late Joseph Kennedy began their neutron irradiations of uranium nitrate hexahydrate. A few months later they synthesized element 94, later named plutonium, by observing the alpha particles emitted from uranium oxide targets that had been bombarded with deuterons. Shortly thereafter they proved that it was the second transuranium element by establishing its unique oxidation-reduction behavior. The symposium honored the scientists and engineers whose vision and dedication led to the discovery of the transuranium elements and to the understanding of the influence of 5f electrons on their electronic structure and bonding. This volume represents a record of papers presented at the symposium.

  13. JAC2D: A two-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

    SciTech Connect

    Biffle, J.H.; Blanford, M.L.

    1994-05-01

    JAC2D is a two-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equations. The method is implemented in a two-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. A four-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic/plastic material model. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.

  14. JAC3D -- A three-dimensional finite element computer program for the nonlinear quasi-static response of solids with the conjugate gradient method; Yucca Mountain Site Characterization Project

    SciTech Connect

    Biffle, J.H.

    1993-02-01

    JAC3D is a three-dimensional finite element program designed to solve quasi-static nonlinear mechanics problems. A set of continuum equations describes the nonlinear mechanics involving large rotation and strain. A nonlinear conjugate gradient method is used to solve the equation. The method is implemented in a three-dimensional setting with various methods for accelerating convergence. Sliding interface logic is also implemented. An eight-node Lagrangian uniform strain element is used with hourglass stiffness to control the zero-energy modes. This report documents the elastic and isothermal elastic-plastic material model. Other material models, documented elsewhere, are also available. The program is vectorized for efficient performance on Cray computers. Sample problems described are the bending of a thin beam, the rotation of a unit cube, and the pressurization and thermal loading of a hollow sphere.

  15. Cohesive Zone Model User Element

    SciTech Connect

    Tippetts, Trevor

    2007-04-17

    Cohesive Zone Model User Element (CZM UEL) is an implementation of a Cohesive Zone Model as an element for use in finite element simulations. CZM UEL computes a nodal force vector and stiffness matrix from a vector of nodal displacements. It is designed for structural analysts using finite element software to predict crack initiation, crack propagation, and the effect of a crack on the rest of a structure.
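
    A minimal sketch of what such a user element returns, assuming a one-dimensional bilinear traction-separation law under monotonic opening; the law and its parameters are illustrative assumptions, not the CZM UEL implementation itself.

    import numpy as np

    def cohesive_1d(u, k0=1.0e4, delta_0=1.0e-3, delta_f=1.0e-2):
        """u = [u1, u2]: nodal displacements; opening delta = u2 - u1.
        Monotonic loading only; unloading and damage history are omitted."""
        delta = u[1] - u[0]
        if delta <= delta_0:                          # elastic branch
            t, dt = k0 * delta, k0
        elif delta < delta_f:                         # linear softening branch
            t_max = k0 * delta_0
            t = t_max * (delta_f - delta) / (delta_f - delta_0)
            dt = -t_max / (delta_f - delta_0)
        else:                                         # fully separated
            t, dt = 0.0, 0.0
        force = np.array([-t, t])                     # nodal force vector
        stiffness = dt * np.array([[1.0, -1.0], [-1.0, 1.0]])   # tangent stiffness
        return force, stiffness

    print(cohesive_1d([0.0, 5.0e-3]))                 # element on the softening branch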

  16. Application of a finite element method for computing grazing incidence wave structure in an impedance tube - Comparison with experiment. [for duct liner aeroacoustic design

    NASA Technical Reports Server (NTRS)

    Lester, H. C.; Parrott, T. L.

    1979-01-01

    The acoustic performance of a liner specimen, in a grazing incidence impedance tube, is analyzed using a finite element method. The liner specimen was designed to be a locally reacting, two-degree-of-freedom type with the resistance and reactance provided by perforated facesheets and compartmented cavities. Measured and calculated wave structures are compared for both normal and grazing incidence from 0.3 to 1.2 kHz. A finite element algorithm was incorporated into an optimization loop in order to predict liner grazing incidence impedance from measured SWR and null position data. Results suggest that extended reaction effects may have been responsible for differences between normal and grazing incidence impedance estimates.

  17. Study of Superconvergence by a Computer-Based Approach: Superconvergence of the Gradient of the Displacement, The Strain and Stress in Finite Element Solutions for Plane Elasticity.

    DTIC Science & Technology

    1994-02-01


  18. Elemental ZOO

    NASA Astrophysics Data System (ADS)

    Helser, Terry L.

    2003-04-01

    This puzzle uses the symbols of 39 elements to spell the names of 25 animals found in zoos. Underlined spaces and the names of the elements serve as clues. To solve the puzzle, students must find the symbols that correspond to the elemental names and rearrange them into the animals' names.

  19. Verification and benchmarking of MAGNUM-2D: a finite element computer code for flow and heat transfer in fractured porous media

    SciTech Connect

    Eyler, L.L.; Budden, M.J.

    1985-03-01

    The objective of this work is to assess prediction capabilities and features of the MAGNUM-2D computer code in relation to its intended use in the Basalt Waste Isolation Project (BWIP). This objective is accomplished through a code verification and benchmarking task. Results are documented which support correctness of prediction capabilities in areas of intended model application. 10 references, 43 figures, 11 tables.

  20. Injector element characterization methodology

    NASA Technical Reports Server (NTRS)

    Cox, George B., Jr.

    1988-01-01

    Characterization of liquid rocket engine injector elements is an important part of the development process for rocket engine combustion devices. Modern nonintrusive instrumentation for flow velocity and spray droplet size measurement, and automated, computer-controlled test facilities allow rapid, low-cost evaluation of injector element performance and behavior. Application of these methods in rocket engine development, paralleling their use in gas turbine engine development, will reduce rocket engine development cost and risk. The Alternate Turbopump (ATP) Hot Gas Systems (HGS) preburner injector elements were characterized using such methods, and the methodology and some of the results obtained will be shown.

  1. A boundary element alternating method for two-dimensional mixed-mode fracture problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Krishnamurthy, T.

    1992-01-01

    A boundary element alternating method, denoted herein as BEAM, is presented for two dimensional fracture problems. This is an iterative method which alternates between two solutions. An analytical solution for arbitrary polynomial normal and tangential pressure distributions applied to the crack faces of an embedded crack in an infinite plate is used as the fundamental solution in the alternating method. A boundary element method for an uncracked finite plate is the second solution. For problems of edge cracks a technique of utilizing finite elements with BEAM is presented to overcome the inherent singularity in boundary element stress calculation near the boundaries. Several computational aspects that make the algorithm efficient are presented. Finally, the BEAM is applied to a variety of two dimensional crack problems with different configurations and loadings to assess the validity of the method. The method gives accurate stress intensity factors with minimal computing effort.

  2. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1993-01-01

    An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.
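
    For orientation, the reliability index and probability of failure mentioned above reduce, in the simplest textbook case of a linear limit state g = R - S with independent normal resistance R and load S, to the closed form sketched below; this is standard first-order reliability, not the PFEM formulation itself.

    from math import erf, sqrt

    def reliability_index(mu_R, sig_R, mu_S, sig_S):
        """Linear limit state g = R - S with independent normal R and S."""
        beta = (mu_R - mu_S) / sqrt(sig_R ** 2 + sig_S ** 2)   # reliability index
        Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))       # standard normal CDF
        return beta, Phi(-beta)                                # (beta, probability of failure)

    beta, pf = reliability_index(mu_R=100.0, sig_R=10.0, mu_S=60.0, sig_S=15.0)
    print(f"beta = {beta:.2f}, P_f = {pf:.3e}")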

  3. The Depth Limits of Eddy Current Testing for Defects: A Computational Investigation and Smooth-Shaped Defect Synthesis from Finite Element Optimization

    DTIC Science & Technology

    2015-04-22


  4. NON-CONFORMING FINITE ELEMENTS; MESH GENERATION, ADAPTIVITY AND RELATED ALGEBRAIC MULTIGRID AND DOMAIN DECOMPOSITION METHODS IN MASSIVELY PARALLEL COMPUTING ENVIRONMENT

    SciTech Connect

    Lazarov, R; Pasciak, J; Jones, J

    2002-02-01

    Construction, analysis and numerical testing of efficient solution techniques for solving elliptic PDEs that allow for parallel implementation have been the focus of the research. A number of discretization and solution methods for solving second order elliptic problems that include mortar and penalty approximations and domain decomposition methods for finite elements and finite volumes have been investigated and analyzed. Techniques for parallel domain decomposition algorithms in the framework of PETSc and HYPRE have been studied and tested. Hierarchical parallel grid refinement and adaptive solution methods have been implemented and tested on various model problems. A parallel code implementing the mortar method with algebraically constructed multiplier spaces was developed.

  5. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank by including a regularization term in the objective that handles the mechanical over-determinacy. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing and not pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
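
    The regularization idea above can be sketched, under assumptions, as Tikhonov-regularized least squares in which the penalty is the norm of the activations; the toy moment-arm matrix, measured moment, and weight below are illustrative, not the paper's musculoskeletal model.

    import numpy as np

    def regularized_activations(A, b, lam=1e-2):
        """Minimize ||A a - b||^2 + lam * ||a||^2 over activations a."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Redundant toy system: two "muscles" with assumed moment arms producing one
    # measured joint moment; the penalty spreads the load across both muscles.
    A = np.array([[1.0, 0.8]])
    b = np.array([1.5])
    print(regularized_activations(A, b))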

  6. Mobile Genetic Elements: In Silico, In Vitro, In Vivo

    PubMed Central

    Arkhipova, Irina R.; Rice, Phoebe A.

    2016-01-01

    Mobile genetic elements (MGEs), also called transposable elements (TEs), represent universal components of most genomes and are intimately involved in nearly all aspects of genome organization, function, and evolution. However, there is currently a gap between fast-paced TE discovery in silico, stimulated by exponential growth of comparative genomic studies, and a limited number of experimental models amenable to more traditional in vitro and in vivo studies of structural, mechanistic, and regulatory properties of diverse MGEs. Experimental and computational scientists came together to bridge this gap at a recent conference, “Mobile Genetic Elements: in silico, in vitro, in vivo,” held at the Marine Biological Laboratory (MBL) in Woods Hole, MA, USA. PMID:26822117

  7. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  8. A computational method for determining tissue material properties in ovine fracture calluses using electronic speckle pattern interferometry and finite element analysis.

    PubMed

    Steiner, Malte; Claes, Lutz; Simon, Ulrich; Ignatius, Anita; Wehner, Tim

    2012-12-01

    For numerical simulations of biological processes, the assignment of reliable material properties is essential. Since literature data show huge variations for each parameter, this study presents a method for determining tissue properties directly from the investigated specimens by combining electronic speckle pattern interferometry (ESPI) with finite element (FE) analysis in a two-step parameter analysis procedure. ESPI displacement data from two mid-sagittal ovine fracture callus slices under 5 N compressive load were directly compared to data from FE simulations of the respective experimental setup. In the first step, a parameter sensitivity analysis quantified the influence of single tissues on the mechanical behavior of the callus specimens. In the second step, material properties (i.e. Young's moduli and Poisson's ratios) for the most dominant material of each callus specimen were determined through a parameter sampling procedure minimizing the mean local deviations between the simulated (FE) and measured (ESPI) equivalent element strains. The resulting material properties showed reasonable ranges, narrowing the variability of previously published values, especially for Young's modulus, which averaged 1881 MPa for woven bone and 16 MPa for cartilage. In conclusion, a numerical method was developed to determine material properties directly from independent fracture callus specimens based on experimentally derived local mechanical conditions.
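
    A minimal sketch of the second step, under assumptions: candidate material parameters are sampled and the pair minimizing the mean local deviation between simulated and measured strains is kept. The run_fe_model function is a hypothetical stand-in for the actual finite element solver, and the parameter grids and "measured" data are illustrative.

    import numpy as np

    def run_fe_model(E, nu):
        """Hypothetical stand-in for the FE solver: returns element strains."""
        return 1.0 / E + 0.01 * nu + np.zeros(10)

    def fit_parameters(measured, E_grid, nu_grid):
        best_params, best_dev = None, np.inf
        for E in E_grid:
            for nu in nu_grid:
                dev = np.mean(np.abs(run_fe_model(E, nu) - measured))
                if dev < best_dev:                     # keep the best-matching parameter set
                    best_params, best_dev = (E, nu), dev
        return best_params, best_dev

    measured = run_fe_model(1881.0, 0.3)               # stand-in for ESPI strain data
    print(fit_parameters(measured, np.linspace(500.0, 3000.0, 26), [0.2, 0.3, 0.4]))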

  9. Product Aspect Clustering by Incorporating Background Knowledge for Opinion Mining

    PubMed Central

    Chen, Yiheng; Zhao, Yanyan; Qin, Bing; Liu, Ting

    2016-01-01

    Product aspect recognition is a key task in fine-grained opinion mining. Current methods primarily focus on the extraction of aspects from the product reviews. However, it is also important to cluster synonymous extracted aspects into the same category. In this paper, we focus on the problem of product aspect clustering. The primary challenge is to properly cluster and generalize aspects that have similar meanings but different representations. To address this problem, we learn two types of background knowledge for each extracted aspect based on two types of effective aspect relations: relevant aspect relations and irrelevant aspect relations, which describe two different types of relationships between two aspects. Based on these two types of relationships, we can assign many relevant and irrelevant aspects into two different sets as the background knowledge to describe each product aspect. To obtain abundant background knowledge for each product aspect, we can enrich the available information with background knowledge from the Web. Then, we design a hierarchical clustering algorithm to cluster these aspects into different groups, in which aspect similarity is computed using the relevant and irrelevant aspect sets for each product aspect. Experimental results obtained in both camera and mobile phone domains demonstrate that the proposed product aspect clustering method based on two types of background knowledge performs better than the baseline approach without the use of background knowledge. Moreover, the experimental results also indicate that expanding the available background knowledge using the Web is feasible. PMID:27561001
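
    A hedged sketch of the clustering step: each aspect carries sets of relevant and irrelevant aspects as background knowledge, similarity rewards shared relevant items and penalizes conflicts, and groups are merged by average-linkage agglomerative clustering. The similarity definition, threshold, and toy data are assumptions, not the paper's exact formulation.

    def similarity(a, b, relevant, irrelevant):
        """Reward shared relevant aspects, penalize relevant/irrelevant conflicts."""
        shared = len(relevant[a] & relevant[b])
        conflict = len(relevant[a] & irrelevant[b]) + len(relevant[b] & irrelevant[a])
        total = len(relevant[a] | relevant[b]) or 1
        return (shared - conflict) / total

    def cluster_aspects(aspects, relevant, irrelevant, threshold=0.3):
        clusters = [[a] for a in aspects]
        while True:
            best, pair = threshold, None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    s = sum(similarity(a, b, relevant, irrelevant)
                            for a in clusters[i] for b in clusters[j])
                    s /= len(clusters[i]) * len(clusters[j])   # average linkage
                    if s > best:
                        best, pair = s, (i, j)
            if pair is None:
                return clusters
            i, j = pair
            clusters[i] += clusters.pop(j)

    relevant = {"screen": {"display", "resolution"}, "display": {"screen", "resolution"},
                "battery": {"charge"}}
    irrelevant = {"screen": {"charge"}, "display": {"charge"}, "battery": {"screen"}}
    print(cluster_aspects(["screen", "display", "battery"], relevant, irrelevant))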

  10. Energy-Constrained Recharge, Assimilation, and Fractional Crystallization (EC-RAχFC): A Visual Basic computer code for calculating trace element and isotope variations of open-system magmatic systems

    NASA Astrophysics Data System (ADS)

    Bohrson, Wendy A.; Spera, Frank J.

    2007-11-01

    Volcanic and plutonic rocks provide abundant evidence for complex processes that occur in magma storage and transport systems. The fingerprint of these processes, which include fractional crystallization, assimilation, and magma recharge, is captured in petrologic and geochemical characteristics of suites of cogenetic rocks. Quantitatively evaluating the relative contributions of each process requires integration of mass, species, and energy constraints, applied in a self-consistent way. The energy-constrained model Energy-Constrained Recharge, Assimilation, and Fractional Crystallization (EC-RaχFC) tracks the trace element and isotopic evolution of a magmatic system (melt + solids) undergoing simultaneous fractional crystallization, recharge, and assimilation. Mass, thermal, and compositional (trace element and isotope) output is provided for melt in the magma body, cumulates, enclaves, and anatectic (i.e., country rock) melt. Theory of the EC computational method has been presented by Spera and Bohrson (2001, 2002, 2004), and applications to natural systems have been elucidated by Bohrson and Spera (2001, 2003) and Fowler et al. (2004). The purpose of this contribution is to make the final version of the EC-RAχFC computer code available and to provide instructions for code implementation, description of input and output parameters, and estimates of typical values for some input parameters. A brief discussion highlights measures by which the user may evaluate the quality of the output and also provides some guidelines for implementing nonlinear productivity functions. The EC-RAχFC computer code is written in Visual Basic, the programming language of Excel. The code therefore launches in Excel and is compatible with both PC and MAC platforms. The code is available on the authors' Web sites (http://magma.geol.ucsb.edu/ and http://www.geology.cwu.edu/ecrafc) as well as in the auxiliary material.
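
    For orientation only, the closed-system fractional-crystallization endmember of the processes EC-RAχFC treats is Rayleigh fractionation of a trace element, C_melt = C_0 F^(D-1); the sketch below omits recharge, assimilation, and the energy constraint entirely.

    def rayleigh_fc(c0, D, melt_fractions):
        """c0: initial melt concentration; D: bulk partition coefficient;
        melt_fractions: remaining melt fraction F, from 1 toward 0."""
        return [c0 * F ** (D - 1.0) for F in melt_fractions]

    # An incompatible element (D << 1) becomes enriched as crystallization proceeds.
    print(rayleigh_fc(c0=10.0, D=0.1, melt_fractions=[1.0, 0.8, 0.6, 0.4, 0.2]))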

  11. Approaches to high aspect ratio triangulations

    NASA Technical Reports Server (NTRS)

    Posenau, M.-A.

    1993-01-01

    In aerospace computational fluid dynamics calculations, high aspect ratio, or stretched, triangulations are necessary to adequately resolve the features of a viscous flow around bodies. In this paper, we explore alternatives to the Delaunay triangulation which can be used to generate high aspect ratio triangulations of point sets. The method is based on a variation of the lifting map concept which derives Delaunay triangulations from convex hull calculations.

  12. Models and Computational Methods for Dynamic Friction Phenomena. 1. Physical Aspects of Dynamic Friction. 2. Continuum Models and Variational Principles for Dynamic Friction. 3. Finite Element Models and Numerical Analysis

    DTIC Science & Technology

    1984-10-25

    Research sponsored by the Air Force Office of Scientific Research (AFSC) under contract F49620-84-0024. Frictional forces may depend upon histories of micro-tangential displacements of particles on the contact surface. Frictional forces developed on the contact surface appear to depend on the sliding velocity of one surface relative to another.

  13. Elemental health

    SciTech Connect

    Tonneson, L.C.

    1997-01-01

    Trace elements used in nutritional supplements and vitamins are discussed in the article. Relevant studies are briefly cited regarding the health effects of selenium, chromium, germanium, silicon, zinc, magnesium, silver, manganese, ruthenium, lithium, and vanadium. The toxicity and food sources are listed for some of the elements. A brief summary is also provided of the nutritional supplements market.

  14. Progressive Damage Analysis of Laminated Composite (PDALC) (A Computational Model Implemented in the NASA COMET Finite Element Code). 2.0

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.; Lo, David C.; Allen, David H.

    1998-01-01

    A method for analysis of progressive failure in the Computational Structural Mechanics Testbed is presented in this report. The relationship employed in this analysis describes the matrix crack damage and fiber fracture via kinematics-based volume-averaged damage variables. Damage accumulation during monotonic and cyclic loads is predicted by damage evolution laws for tensile load conditions. The implementation of this damage model required the development of two testbed processors. While this report concentrates on the theory and usage of these processors, a complete listing of all testbed processors and inputs that are required for this analysis are included. Sample calculations for laminates subjected to monotonic and cyclic loads were performed to illustrate the damage accumulation, stress redistribution, and changes to the global response that occurs during the loading history. Residual strength predictions made with this information compared favorably with experimental measurements.

  15. How to determine spiral bevel gear tooth geometry for finite element analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Litvin, Faydor L.

    1991-01-01

    An analytical method was developed to determine gear tooth surface coordinates of face milled spiral bevel gears. The method combines the basic gear design parameters with the kinematical aspects for spiral bevel gear manufacturing. A computer program was developed to calculate the surface coordinates. From this data a 3-D model for finite element analysis can be determined. Development of the modeling method and an example case are presented.

  16. Psychosomatic Aspects of Cancer: An Overview.

    ERIC Educational Resources Information Center

    Murray, John B.

    1980-01-01

    It is suggested in this literature review on the psychosomatic aspects of cancer that psychoanalytic interpretations which focused on intrapsychic elements have given way to considerations of rehabilitation and assistance with the complex emotional reactions of patients and their families to terminal illness and death. (Author/DB)

  17. Cortical Neural Computation by Discrete Results Hypothesis.

    PubMed

    Castejon, Carlos; Nuñez, Angel

    2016-01-01

    One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called "Discrete Results" (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of "Discrete Results" is the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel "Discrete Results" concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast-spiking (FS

  18. Cortical Neural Computation by Discrete Results Hypothesis

    PubMed Central

    Castejon, Carlos; Nuñez, Angel

    2016-01-01

    One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” is the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast

  19. In silico selection of an aptamer to estrogen receptor alpha using computational docking employing estrogen response elements as aptamer-alike molecules

    PubMed Central

    Ahirwar, Rajesh; Nahar, Smita; Aggarwal, Shikha; Ramachandran, Srinivasan; Maiti, Souvik; Nahar, Pradip

    2016-01-01

    Aptamers, the chemical-antibody substitute to conventional antibodies, are primarily discovered through SELEX technology involving multi-round selections and enrichment. Circumventing conventional methodology, here we report an in silico selection of aptamers to estrogen receptor alpha (ERα) using RNA analogs of human estrogen response elements (EREs). The inverted repeat nature of ERE and the ability to form stable hairpins were used as criteria to obtain aptamer-alike sequences. Near-native RNA analogs of selected single stranded EREs were modelled and their likelihood to emerge as ERα aptamer was examined using AutoDock Vina, HADDOCK and PatchDock docking. These in silico predictions were validated by measuring the thermodynamic parameters of ERα -RNA interactions using isothermal titration calorimetry. Based on the in silico and in vitro results, we selected a candidate RNA (ERaptR4; 5′-GGGGUCAAGGUGACCCC-3′) having a binding constant (Ka) of 1.02 ± 0.1 × 108 M−1 as an ERα-aptamer. Target-specificity of the selected ERaptR4 aptamer was confirmed through cytochemistry and solid-phase immunoassays. Furthermore, stability analyses identified ERaptR4 resistant to serum and RNase A degradation in presence of ERα. Taken together, an efficient ERα-RNA aptamer is identified using a non-SELEX procedure of aptamer selection. The high-affinity and specificity can be utilized in detection of ERα in breast cancer and related diseases. PMID:26899418

  20. Computing and Digital Media: A Subject-Based Aspect Report by Education Scotland on Provision in Scotland's Colleges on Behalf of the Scottish Funding Council. Transforming Lives through Learning

    ERIC Educational Resources Information Center

    Education Scotland, 2014

    2014-01-01

    This report evaluates college programmes which deliver education and training in computer and digital media technology, rather than in computer usage. The report evaluates current practice and identifies important areas for further development amongst practitioners. It provides case studies of effective practice and sets out recommendations for…

  1. Finite element shell instability analysis

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Formulation procedures and the associated computer program for finite element thin shell instability analysis are discussed. Topics covered include: (1) the formulation of basic element relationships, (2) the construction of solution algorithms on both the conceptual and algorithmic levels, and (3) numerical analyses conducted to verify the accuracy and efficiency of the theory and the related programs.

  2. Elemental Education.

    ERIC Educational Resources Information Center

    Daniel, Esther Gnanamalar Sarojini; Saat, Rohaida Mohd.

    2001-01-01

    Introduces a learning module integrating three disciplines--physics, chemistry, and biology--and based on four elements: carbon, oxygen, hydrogen, and silicon. Includes atomic model and silicon-based life activities. (YDS)

  3. Superheavy Elements

    ERIC Educational Resources Information Center

    Tsang, Chin Fu

    1975-01-01

    Discusses the possibility of creating elements with an atomic number of around 114. Describes the underlying physics responsible for the limited extent of the periodic table and enumerates problems that must be overcome in creating a superheavy nucleus. (GS)

  4. A combination of experimental and finite element analyses of needle-tissue interaction to compute the stresses and deformations during injection at different angles.

    PubMed

    Halabian, Mahdi; Beigzadeh, Borhan; Karimi, Alireza; Shirazi, Hadi Asgharzadeh; Shaali, Mohammad Hasan

    2016-12-01

    One of the main clinical applications of needles is femoral vein catheterization. Annually, more than two million people in the United States undergo femoral vein catheterization. The way the needle enters the femoral vein plays a key role in post-injection pain and possible injuries, such as tissue damage and bleeding. It has been shown that there might be a correlation between the stresses and deformations that femoral injection induces in the tissue and the sense of pain and, consequently, the injuries caused by needles. In this study, the stresses and deformations induced by the needle in the femoral tissue were investigated experimentally and numerically, using the finite element method, for needle insertion at four different angles, i.e., 30°, 45°, 60°, and 90°. In addition, a set of experimental injections at different angles was carried out to compare the numerical results with the experimental ones, namely the pain score. The results revealed that as the angle of injection increases up to 60°, the strain at the needle-tissue interaction site increases accordingly, while a significant drop is observed at 90°. In contrast, the stress due to injection decreases in the region of needle-tissue interaction, reaching its lowest value at 90°. The experimental results also confirmed the numerical observations, since the lowest pain score was seen at the angle of 90°. The results suggest that the most effective angle of injection would be 90°, owing to the lower stresses and deformations compared with the other injection angles. These findings may have implications not only for understanding the stresses and deformations induced around the needle-tissue interaction during injection, but also for giving doctors guidance on the most suitable injection angle in order to reduce pain as well as post-injection injury.

  5. Computational Toxicology as Implemented by the US EPA ...

    EPA Pesticide Factsheets

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T

  6. Element 117

    ScienceCinema

    None

    2016-09-30

    An international team of scientists from Russia and the United States, including two Department of Energy national laboratories and two universities, has discovered the newest superheavy element, element 117. The team included scientists from the Joint Institute of Nuclear Research (Dubna, Russia), the Research Institute for Advanced Reactors (Dimitrovgrad), Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Vanderbilt University, and the University of Nevada, Las Vegas.

  7. Element 117

    SciTech Connect

    2010-04-08

    An international team of scientists from Russia and the United States, including two Department of Energy national laboratories and two universities, has discovered the newest superheavy element, element 117. The team included scientists from the Joint Institute of Nuclear Research (Dubna, Russia), the Research Institute for Advanced Reactors (Dimitrovgrad), Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Vanderbilt University, and the University of Nevada, Las Vegas.

  8. CATIA - A computer aided design and manufacturing tridimensional system

    NASA Astrophysics Data System (ADS)

    Bernard, F.

    A proprietary computer graphics-aided, three-dimensional interactive application (CATIA) design system is described. CATIA employs approximately 100 graphics displays, which are used by some 500 persons engaged in the definition of aircraft structures, structural strength analyses, the kinematic analysis of mobile elements, aerodynamic calculations, the choice of tooling in the machining of aircraft elements, and the programming of robotics. CATIA covers these diverse fields with a single data base. After a description of salient aspects of the system's hardware and software, graphics examples are given of the definition of curves, surfaces, complex volumes, and analytical tasks.

  9. Conversion of Osculating Orbital Elements to Mean Orbital Elements

    NASA Technical Reports Server (NTRS)

    Der, Gim J.; Danchick, Roy

    1996-01-01

    Orbit determination and ephemeris generation or prediction over relatively long elapsed times can be accomplished with mean elements. The simplest and most efficient method for orbit determination, which is also known as epoch point conversion, performs the conversion of osculating elements to mean elements by iterative procedures. Previous epoch point conversion methods are restricted to shorter elapsed times with linear convergence. The new method presented in this paper calculates an analytic initial guess of the unknown mean elements from a first order theory of secular perturbations and computes a transition matrix with accurate numerical partials. It thereby eliminates the problem of an inaccurate initial guess and an identity transition matrix employed by previous methods. With a good initial guess of the unknown mean elements and an accurate transition matrix, the conversion of osculating elements to mean elements can be accomplished over long elapsed times with quadratic convergence.
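    The abstract describes the conversion as an iteration that starts from an analytic first guess of the mean elements and refines it with a numerically computed transition matrix. The sketch below (Python) illustrates that general Newton-type scheme; the routine osculating_from_mean is a hypothetical placeholder for whichever secular/short-period theory is used, and the finite-difference transition matrix is only one possible choice, not the authors' implementation.

      import numpy as np

      def mean_from_osculating(osc, osculating_from_mean, mean0, tol=1e-10, max_iter=20):
          """Invert an osculating-from-mean mapping by Newton iteration.

          osc                  : observed osculating elements (length-6 array)
          osculating_from_mean : callable mapping mean elements -> osculating elements
                                 (hypothetical placeholder for a perturbation theory)
          mean0                : analytic first guess of the mean elements
          """
          mean = np.asarray(mean0, dtype=float)
          for _ in range(max_iter):
              resid = np.asarray(osc) - osculating_from_mean(mean)
              if np.max(np.abs(resid)) < tol:
                  break
              # Transition matrix d(osc)/d(mean) from central finite differences
              J = np.zeros((6, 6))
              for j in range(6):
                  h = 1e-6 * max(1.0, abs(mean[j]))
                  up, dn = mean.copy(), mean.copy()
                  up[j] += h
                  dn[j] -= h
                  J[:, j] = (osculating_from_mean(up) - osculating_from_mean(dn)) / (2 * h)
              mean = mean + np.linalg.solve(J, resid)
          return mean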

  10. A deflation based parallel algorithm for spectral element solution of the incompressible Navier-Stokes equations

    SciTech Connect

    Fischer, P.F.

    1996-12-31

    Efficient solution of the Navier-Stokes equations in complex domains is dependent upon the availability of fast solvers for sparse linear systems. For unsteady incompressible flows, the pressure operator is the leading contributor to stiffness, as the characteristic propagation speed is infinite. In the context of operator splitting formulations, it is the pressure solve which is the most computationally challenging, despite its elliptic origins. We seek to improve existing spectral element iterative methods for the pressure solve in order to overcome the slow convergence frequently observed in the presence of highly refined grids or high-aspect ratio elements.
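    The algorithm itself is not reproduced in the abstract; the following is a minimal sketch (Python) of a deflated conjugate-gradient iteration of the general kind such pressure solvers build on, assuming a symmetric positive-definite matrix A and a user-supplied deflation basis W, for example one constant vector per element or subdomain. It illustrates the deflation idea only, not the paper's parallel spectral element solver.

      import numpy as np

      def deflated_cg(A, b, W, tol=1e-10, max_iter=500):
          """Deflated conjugate gradients for a symmetric positive-definite A.

          W (n x k) spans the deflation space; search directions are kept
          A-orthogonal to W, which removes the slowly converging modes that
          otherwise stall CG on stiff pressure systems."""
          AW = A @ W
          E = W.T @ AW                           # small (k x k) coarse operator

          def coarse_solve(v):
              return np.linalg.solve(E, v)

          x = W @ coarse_solve(W.T @ b)          # coarse component of the solution
          r = b - A @ x
          p = r - W @ coarse_solve(AW.T @ r)     # project the first search direction
          rr = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rr / (p @ Ap)
              x = x + alpha * p
              r = r - alpha * Ap
              rr_new = r @ r
              if np.sqrt(rr_new) < tol * np.linalg.norm(b):
                  break
              p = r + (rr_new / rr) * p - W @ coarse_solve(AW.T @ r)
              rr = rr_new
          return x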

  11. Spectral element simulations of laminar and turbulent flows in complex geometries

    NASA Technical Reports Server (NTRS)

    Karniadakis, George EM

    1989-01-01

    Spectral element methods are high-order weighted residual techniques based on spectral expansions of variables and geometry for the Navier-Stokes (NS) and transport equations. Here, practical aspects of these methods and their efficient implementation are examined, and several examples of flows in truly complex geometries are presented. The spectral element discretization for NS equations is introduced, and the convergence of the method is addressed. An efficient data management scheme is discussed in the context of parallel processing computations. The method is validated by comparing the spectral element solutions with the exact eigensolutions for the Orr-Sommerfeld equations in two and three dimensions. Computer-aided flow visualizations are presented for an impulsive flow past a sharp edge wedge. Three-dimensional states of channel flow disrupted by an array of cylindrical eddy promoters are studied, and the results of a direct simulation of the turbulent flow in a plane channel are presented.

  12. Cognitive Aspects of Prejudice

    ERIC Educational Resources Information Center

    Tajfel, Henri

    1969-01-01

    This paper is a slightly revised version of a contribution to a symposium on the "Biosocial Aspects of Race," held in London, September, 1968; symposium was published in the "Journal of Biosocial Science," Supplement No. 1, July, 1969. (RJ)

  13. A numerical investigation of nonlinear aeroelastic effects on flexible high aspect ratio wings

    NASA Astrophysics Data System (ADS)

    Garcia, Joseph Avila

    2002-01-01

    A nonlinear aeroelastic analysis that couples a nonlinear structural model with an Euler/Navier-Stokes flow solver is developed for flexible high aspect ratio wings. To model the nonlinear structural characteristics of flexible high aspect ratio wings, a two-dimensional geometric nonlinear methodology, based on a 6 degree-of-freedom (DOF) beam finite element, is extended to three dimensions based on a 12 DOF beam finite element. The three-dimensional analysis is developed in order to capture the nonlinear torsion-bending coupling, which is not accounted for by the two-dimensional nonlinear methodology. Validation of the three-dimensional nonlinear structural approach against experimental data shows that the approach accurately predicts the geometric nonlinear bending and torsion due to bending for configurations of general interest. Torsion is slightly overpredicted in extreme cases and higher order modeling is then required. The three-dimensional nonlinear beam model is then coupled with an Euler/Navier-Stokes computational fluid dynamics (CFD) analysis. Solving the equations numerically for the two nonlinear systems results in an increase in computational time and cost needed to perform the aeroelastic analysis. To improve the computational efficiency of the nonlinear aeroelastic analysis, the nonlinear structural approach uses a second-order accurate predictor-corrector methodology to solve for the displacements. Static aeroelastic results are presented for an unswept and swept high aspect ratio wing in the transonic flow regime, using the developed nonlinear aeroelastic methodology. Unswept wing results show a reversal in twist due to the nonlinear torsion-bending coupling effects. Specifically, the torsional moments due to drag become large enough to cause the wing twist rotations to washin the wing tips, while the linear results show a washout twist rotation. The nonlinear twist results are attributed to the large bending displacements coupled with the large
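    The coupled solution procedure is only summarized above; as a generic illustration of a second-order predictor-corrector update of the kind mentioned for the structural displacements, the sketch below uses an explicit predictor followed by trapezoidal corrector passes. The scheme, step size, and right-hand side are illustrative assumptions, not the methodology of the thesis.

      import numpy as np

      def predictor_corrector_step(f, u, dt, n_corr=2):
          """Advance du/dt = f(u) by one step of a second-order
          predictor-corrector: explicit Euler predictor, trapezoidal corrector."""
          f_u = f(u)
          u_new = u + dt * f_u                      # predictor
          for _ in range(n_corr):                   # corrector passes
              u_new = u + 0.5 * dt * (f_u + f(u_new))
          return u_new

      # Example: a simple linear "structural" system du/dt = -K u
      K = np.diag([1.0, 4.0])
      u = np.array([1.0, 1.0])
      for _ in range(10):
          u = predictor_corrector_step(lambda v: -K @ v, u, dt=0.1)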

  14. Computer surety: computer system inspection guidance. [Contains glossary

    SciTech Connect

    Not Available

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  15. High Aspect Ratio Wrinkles

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Cheng; Crosby, Alfred

    2015-03-01

    Buckling-induced surface undulations are widely found in living creatures, for instance, gut villi and the surface of flower petal cells. These undulations provide unique functionalities with their extremely high aspect ratios. In synthetic systems, sinusoidal wrinkles induced by buckling a thin film attached to a soft substrate have been proposed for many applications. However, the impact of synthetic wrinkles has been restricted by limited aspect ratios, ranging from 0 to 0.35. Within this range, wrinkle aspect ratio is known to increase with increasing compressive strain until a critical strain is reached, at which point wrinkles transition to localizations, such as folds or period doublings. Inspired by living creatures, we propose that wrinkles can be stabilized at high aspect ratio by manipulating the strain energy in the substrate. We experimentally demonstrate this idea by forming a secondary crosslinking network in the wrinkled surface and successfully achieve aspect ratios as large as 0.8. This work not only provides insight into the mechanism of high aspect ratio structures seen in living creatures, but also demonstrates significant promise for future wrinkle-based applications.

  16. On numerically accurate finite element

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are also discussed.

  17. Instructional Aspects of Intelligent Tutoring Systems.

    ERIC Educational Resources Information Center

    Pieters, Jules M., Ed.

    This collection contains three papers addressing the instructional aspects of intelligent tutoring systems (ITS): (1) "Some Experiences with Two Intelligent Tutoring Systems for Teaching Computer Programming: Proust and the LISP-Tutor" (van den Berg, Merrienboer, and Maaswinkel); (2) "Some Issues on the Construction of Cooperative…

  18. A method for determining spiral-bevel gear tooth geometry for finite element analysis

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Litvin, Faydor L.

    1991-01-01

    An analytical method was developed to determine gear tooth surface coordinates of face-milled spiral bevel gears. The method uses the basic gear design parameters in conjunction with the kinematical aspects of spiral bevel gear manufacturing machinery. A computer program, SURFACE, was developed. The computer program calculates the surface coordinates and outputs 3-D model data that can be used for finite element analysis. Development of the modeling method and an example case are presented. This analysis method could also find application for gear inspection and near-net-shape gear forging die design.

  19. Mechanical aspects of CO₂ angiography.

    PubMed

    Corazza, Ivan; Rossi, Pier Luca; Feliciani, Giacomo; Pisani, Luca; Zannoli, Sebastiano; Zannoli, Romano

    2013-01-01

    The aim of this paper is to clarify some physical-mechanical aspects involved in the carbon dioxide angiography procedure (CO₂ angiography), with particular attention to possible damage to the vascular wall. CO₂ angiography is widely used on patients with iodine intolerance. The injection of a gaseous contrast agent, in most cases performed manually, requires a long training period. Automatic systems allow better control of the injection and the study of the mechanical behaviour of the gas. CO₂ injections have been studied using manual and automatic systems. Pressures, flows and jet shapes have been monitored using a cardiovascular mock. Photographic images of the liquid and gaseous jets have been recorded in different conditions, and the vascular pressure rises during injection have been monitored. The shape of the liquid jet during the catheter washing phase is straight in the catheter direction and there is no jet during gas injection. Gas bubbles are suddenly formed at the catheter's hole and move upwards: buoyancy is the only governing phenomenon and no bubble fragmentation is detected. The pressure rise in the vessel depends on the injection pressure and volume and in some cases of manual injection it may double the basal vascular pressure values. CO₂ angiography is a powerful and safe procedure whose diffusion will certainly increase, although some aspects related to gas injection and chamber filling are not yet well known. The use of an automatic system permits better results, a shorter training period and a lower risk of vascular wall damage.

  20. Displacement and stress analysis of laminated composite plates using an eight-node quasi-conforming solid-shell element

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shi, Guangyu; Wang, Xiaodan

    2017-01-01

    This paper presents the efficient modeling and analysis of laminated composite plates using an eight-node quasi-conforming solid-shell element, named QCSS8. The present element QCSS8 is not only locking-free but also highly computationally efficient, as it possesses an explicit element stiffness matrix. All six stress components can be evaluated directly by QCSS8 from the 3-D constitutive equations and the appropriately assumed element strain field. Several typical numerical examples of laminated plates are solved to validate QCSS8, and the resulting values are compared with analytical solutions and the numerical results of solid/solid-shell elements of commercial codes computed by the present authors using fine meshes. The numerical results show that QCSS8 can give accurate displacements and stresses of laminated composite plates even with coarse meshes. Furthermore, QCSS8 also yields accurate transverse normal strain, which is very important for the evaluation of interlaminar stresses in laminated plates. Since each lamina of laminated composite plates can be modeled naturally by one or a few layers of solid-shell elements, and a large aspect ratio of element edge to thickness is allowed in solid-shell elements, the present solid-shell element QCSS8 is extremely appropriate for the modeling of laminated composite plates.
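    As a plain-numerics illustration of evaluating all six stress components directly from the 3-D constitutive equations, the sketch below assembles an orthotropic lamina stiffness matrix by inverting the compliance matrix and applies it to a strain vector. The material constants and strains are illustrative assumptions; this is not the QCSS8 assumed strain field or stiffness matrix.

      import numpy as np

      def orthotropic_stiffness(E1, E2, E3, nu12, nu13, nu23, G12, G13, G23):
          """3-D stiffness matrix C (6x6, Voigt order 11, 22, 33, 23, 13, 12)
          of an orthotropic lamina, obtained by inverting the compliance matrix."""
          S = np.zeros((6, 6))
          S[0, 0], S[1, 1], S[2, 2] = 1 / E1, 1 / E2, 1 / E3
          S[0, 1] = S[1, 0] = -nu12 / E1
          S[0, 2] = S[2, 0] = -nu13 / E1
          S[1, 2] = S[2, 1] = -nu23 / E2
          S[3, 3], S[4, 4], S[5, 5] = 1 / G23, 1 / G13, 1 / G12
          return np.linalg.inv(S)

      # Illustrative carbon/epoxy-like constants (Pa) and an arbitrary strain state
      C = orthotropic_stiffness(E1=140e9, E2=10e9, E3=10e9,
                                nu12=0.3, nu13=0.3, nu23=0.4,
                                G12=5e9, G13=5e9, G23=3.5e9)
      strain = np.array([1e-3, -2e-4, 5e-5, 0.0, 0.0, 3e-4])   # Voigt strains
      stress = C @ strain                                       # all six stresses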

  1. Computer-aided design and computer science technology

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Voigt, S. J.

    1976-01-01

    A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.

  2. Mercury, elemental

    Integrated Risk Information System (IRIS)

    Mercury, elemental; CASRN 7439-97-6. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinoge

  3. Element Research.

    ERIC Educational Resources Information Center

    Herald, Christine

    2001-01-01

    Describes a research assignment for 8th grade students on the elements of the periodic table. Students use web-based resources and a chemistry handbook to gather information, construct concept maps, and present the findings to the full class using the mode of their choice: a humorous story, a slideshow or gameboard, a brochure, a song, or skit.…

  4. Requirements Engineering and Aspects

    NASA Astrophysics Data System (ADS)

    Yu, Yijun; Niu, Nan; González-Baixauli, Bruno; Mylopoulos, John; Easterbrook, Steve; Do Prado Leite, Julio Cesar Sampaio

    A fundamental problem with requirements engineering (RE) is to validate that a design does satisfy stakeholder requirements. Some requirements can be fulfilled locally by designed modules, whereas others must be accommodated globally by multiple modules together. These global requirements often crosscut other local requirements and as such lead to scattered concerns. We explore the possibility of borrowing concepts from aspect-oriented programming (AOP) to tackle these problems in early requirements. In order to validate the design against such early aspects, we propose a framework to trace them into coding and testing aspects. We demonstrate the approach using an open-source e-commerce platform. In the conclusion of this work, we reflect on the lessons learnt from the case study on how to fit RE and AOP research together.

  5. Mathematical and Computational Aspects of Multiscale Materials Modeling, Mathematics-Numerical analysis, Section II.A.a.3.4, Conference and symposia organization II.A.2.a

    DTIC Science & Technology

    2015-02-04

    The subjects covered in the past included, but were not limited to, geomaterials, energetic materials, and plastic deformation in metals and alloys. Topics addressed include understanding plastic deformation, mathematical models, computational techniques currently used, multiscale modeling and multiscale experiments, scale coupling, and the role of instabilities. The goals are to outline the outstanding challenges in both plasticity and multiscale modeling in general and to promote interactions.

  6. Organisational aspects of care.

    PubMed

    Bloomfield, Jacqueline; Pegram, Anne

    2015-03-04

    Organisational aspects of care, the second essential skills cluster, identifies the need for registered nurses to systematically assess, plan and provide holistic patient care in accordance with individual needs. Safeguarding, supporting and protecting adults and children in vulnerable situations; leading, co-ordinating and managing care; functioning as an effective and confident member of the multidisciplinary team; and managing risk while maintaining a safe environment for patients and colleagues, are vital aspects of this cluster. This article discusses the roles and responsibilities of the newly registered graduate nurse. Throughout their education, nursing students work towards attaining this knowledge and these skills in preparation for their future roles as nurses.

  7. Computational mechanics

    SciTech Connect

    Raboin, P J

    1998-01-01

    The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable in driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

  8. New alternating direction procedures in finite element analysis based upon EBE approximate factorizations. [element-by-element]

    NASA Technical Reports Server (NTRS)

    Hughes, T. J. R.; Winget, J.; Levit, I.; Tezduyar, T. E.

    1983-01-01

    Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in computational mechanics. A variety of techniques are compared on problems of structural mechanics, heat conduction and fluid mechanics. The results obtained suggest considerable potential for the methods described.
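    The core idea can be sketched compactly: the global implicit operator is approximated by an ordered product of small element-level factors, each of which is cheap to invert. The code below (Python) is a generic illustration of such an element-by-element sweep; the factor (I + A_e) and the sequential ordering are illustrative choices, not the specific approximate factorizations proposed in the paper.

      import numpy as np

      def ebe_apply_inverse(element_mats, element_dofs, r):
          """Approximate (I + sum_e A_e)^(-1) r by sweeping over elements and
          applying each local factor (I + A_e)^(-1) to the entries it touches.

          element_mats : list of small dense element matrices A_e
          element_dofs : list of integer index arrays (global dofs of each element)
          r            : global residual / right-hand-side vector
          """
          z = r.copy()
          for Ae, dofs in zip(element_mats, element_dofs):
              local = np.eye(len(dofs)) + Ae           # local factor (I + A_e)
              z[dofs] = np.linalg.solve(local, z[dofs])
          return z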

  9. Control aspects of the brushless doubly-fed machine

    NASA Astrophysics Data System (ADS)

    Lauw, H. K.; Krishnan, S.

    1990-09-01

    This report covers the investigations into the control aspects of a variable-speed generation (VSG) system using a brushless doubly-fed generator excited by a series-resonant converter. The brushless doubly-fed machine comprises two sets of stator 3-phase systems which are designed with common windings. The rotor is a cage rotor resembling the low-cost and robust squirrel cage of a conventional induction machine. The system was actually designed and set up in the Energy Laboratory of the Department of Electrical and Computer Engineering at Oregon State University. The series-resonant converter designed to achieve effective control for variable-speed generation with the brushless doubly-fed generator was adequate in terms of required time response and regulation as well as in providing for adequate power quality. The three elements of the VSG controller, i.e., the voltage or reactive power controller, the efficiency maximizer and the stabilizer, could be designed using conventional microprocessor elements with a processing time well within the time period required for sampling the variables involved in executing the control tasks. The report treats in detail the stability problem encountered in running the machine in certain speed regions, even if requirements for steady-state stability are satisfied. In this unstable region, shutdown of the VSG system is necessary unless proper stabilization controls are provided. The associated measures to be taken are presented.

  10. Charles Darwin and Evolution: Illustrating Human Aspects of Science

    ERIC Educational Resources Information Center

    Kampourakis, Kostas; McComas, William F.

    2010-01-01

    Recently, the nature of science (NOS) has become recognized as an important element within the K-12 science curriculum. Despite differences in the ultimate lists of recommended aspects, a consensus is emerging on what specific NOS elements should be the focus of science instruction and inform textbook writers and curriculum developers. In this…

  11. Sociological Aspects of Deafness.

    ERIC Educational Resources Information Center

    World Federation of the Deaf, Rome (Italy).

    Nine conference papers treat the sociological aspects of deafness. Included are "Individuals Being Deaf and Blind and Living with a Well Hearing Society" by A. Marx (German Federal Republic), "A Deaf Man's Experiences in a Hearing World" by A. B. Simon(U.S.A.), "Problem of Text Books and School Appliances for Vocational…

  12. Aspects of Marine Ecology.

    ERIC Educational Resources Information Center

    Awkerman, Gary L.

    This publication is designed for use in standard science curricula to develop oceanologic manifestations of certain science topics. Included are teacher guides, student activities, and demonstrations to impart ocean science understanding, specifically, aspects of marine ecology, to high school students. The course objectives include the ability of…

  13. Global aspects of monsoons

    NASA Technical Reports Server (NTRS)

    Murakami, T.

    1985-01-01

    Recent developments are studied in three areas of monsoon research: (1) global aspects of the monsoon onset, (2) the orographic influence of the Tibetan Plateau on the summer monsoon circulations, and (3) tropical 40 to 50 day oscillations. Reference was made only to those studies that are primarily based on FGGE Level IIIb data. A brief summary is given.

  14. Medical Aspects of Surfing.

    ERIC Educational Resources Information Center

    Renneker, Mark

    1987-01-01

    The medical aspects of surfing include ear and eye injuries and sprains and strains of the lower back and neck, as well as skin cancer from exposure to the sun. Treatment, rehabilitation, and prevention of these problems are discussed. Surfing is recommended as part of an exercise program for reasonably healthy people. (Author/MT)

  15. FUEL ELEMENT

    DOEpatents

    Fortescue, P.; Zumwalt, L.R.

    1961-11-28

    A fuel element was developed for a gas-cooled nuclear reactor. The element is constructed in the form of a compacted fuel slug including carbides of fissionable material, in some cases with a breeder-material carbide and a moderator; this slug is disposed in a canning jacket of relatively impermeable moderator material. Such canned fuel slugs are disposed in an elongated shell of moderator having greater gas permeability than the canning material, so that application of reduced pressure to the space between them causes gas diffusing through the exterior shell to sweep fission products from the system. Integral fission product traps and/or exterior traps, as well as a fission product monitoring system, may be employed therewith. (AEC)

  16. Chapter on Distributed Computing

    DTIC Science & Technology

    1989-02-01

    MIT/LCS/TM-384, "Chapter on Distributed Computing," Leslie Lamport and Nancy…, MIT Laboratory for Computer Science. Keywords: distributed computing, distributed systems models, distributed algorithms, message-passing, shared variables.

  17. Implementations of adaptive associative optical computing elements

    NASA Astrophysics Data System (ADS)

    Fisher, Arthur D.; Lee, John N.; Fukuda, Robert C.

    1986-01-01

    The present optical implementations for heteroassociative memory modules, which are capable of real time adaptive learning, are pertinent to the eventual construction of large, multimodule associative/neural network architectures that can consider problems in the acquisition, transformation, matching/recognition, and manipulation of large amounts of data in parallel. These modules offer such performance features as convergence to the least-squares-optimum pseudoinverse association, accumulative and gated learning, forgetfulness of unused associations, resistance to dynamic-range saturation, and compensation of optical system aberrations. Optics uniquely furnish the massive parallel interconnection paths required to cascade and interconnect a number of modules to form the more sophisticated multiple module architectures.
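    The "least-squares-optimum pseudoinverse association" referred to above has a simple numerical analogue: a heteroassociative weight matrix W = Y X⁺, which incremental (Widrow-Hoff) updates approach over repeated presentations. The sketch below shows both forms in plain NumPy as a stand-in for the optical implementation; the learning rate and epoch count are illustrative assumptions.

      import numpy as np

      def pseudoinverse_memory(X, Y):
          """Batch heteroassociative memory: W = Y X^+, the least-squares-optimal
          mapping from key patterns (columns of X) to outputs (columns of Y).
          Recall is simply y = W @ x."""
          return Y @ np.linalg.pinv(X)

      def lms_memory(X, Y, lr=0.05, epochs=200):
          """Adaptive (Widrow-Hoff) variant: incremental updates that, for a
          sufficiently small learning rate, converge toward the same
          least-squares-optimal W, mimicking accumulative learning."""
          W = np.zeros((Y.shape[0], X.shape[0]))
          for _ in range(epochs):
              for x, y in zip(X.T, Y.T):
                  W += lr * np.outer(y - W @ x, x)
          return W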

  18. Subduction modelling with ASPECT

    NASA Astrophysics Data System (ADS)

    Glerum, Anne; Thieulot, Cédric; Spakman, Wim; Quinquis, Matthieu; Buiter, Susanne

    2013-04-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a promising new code designed for modelling thermal convection in the mantle (Kronbichler et al. 2012). The code uses state-of-the-art numerical methods, such as high performance solvers and adaptive mesh refinement. It builds on tried-and-well-tested libraries and works with plug-ins allowing easy extension to fine-tune it to the user's specific needs. We make use of the promising features of ASPECT, especially Adaptive Mesh Refinement (AMR), for modelling lithosphere subduction in 2D and 3D geometries. The AMR allows for mesh refinement where needed and mesh coarsening in regions less important to the parameters under investigation. In the context of subduction, this amounts to having very small grid cells at material interfaces and larger cells in more uniform mantle regions. As lithosphere subduction modelling is not standard to ASPECT, we explore the necessary adaptive grid refinement and test ASPECT with widely accepted benchmarks. We showcase examples of mechanical and thermo-mechanical oceanic subduction in which we vary the number of materials making up the overriding and subducting plates as well as the rheology (from linear viscous to more complicated rheologies). Both 2D and 3D geometries are used, as ASPECT easily extends to three dimensions (Kronbichler et al. 2012). Based on these models, we discuss the advection of compositional fields coupled to material properties and the ability of AMR to trace the slab's path through the mantle. Kronbichler, M., T. Heister and W. Bangerth (2012), High Accuracy Mantle Convection Simulation through Modern Numerical Methods, Geophysical Journal International, 191, 12-29.

  19. Robust multigrid for high-order discontinuous Galerkin methods: A fast Poisson solver suitable for high-aspect ratio Cartesian grids

    NASA Astrophysics Data System (ADS)

    Stiller, Jörg

    2016-12-01

    We present a polynomial multigrid method for nodal interior penalty and local discontinuous Galerkin formulations of the Poisson equation on Cartesian grids. For smoothing we propose two classes of overlapping Schwarz methods. The first class comprises element-centered and the second face-centered methods. Within both classes we identify methods that achieve superior convergence rates and prove robust with respect to the mesh spacing and the polynomial order, at least up to P = 32. Consequent structure exploitation yields a computational complexity of O(PN), where N is the number of unknowns. Further, we demonstrate the suitability of the face-centered method for element aspect ratios up to 32.
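    The smoothers themselves are not reproduced in the abstract; the sketch below (Python) shows a generic damped additive overlapping Schwarz sweep on a small 1-D Poisson matrix. The overlapping blocks and the damping factor are illustrative assumptions standing in for the element-centered and face-centered subdomains of the paper.

      import numpy as np

      def additive_schwarz_sweep(A, b, x, blocks, omega=0.6):
          """One damped additive overlapping Schwarz smoothing sweep for A x = b.

          blocks : list of overlapping index arrays (local subdomains); each block
                   solves its restriction of the residual equation and the local
                   corrections are summed with damping factor omega."""
          r = b - A @ x
          dx = np.zeros_like(x)
          for idx in blocks:
              A_loc = A[np.ix_(idx, idx)]
              dx[idx] += np.linalg.solve(A_loc, r[idx])
          return x + omega * dx

      # Example: 1-D Poisson matrix, overlapping blocks of width 4 with overlap 1
      n = 16
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      blocks = [np.arange(max(0, i - 1), min(n, i + 4)) for i in range(0, n, 3)]
      x = additive_schwarz_sweep(A, np.ones(n), np.zeros(n), blocks)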

  20. Uncertainty in Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.; Hemsch, M. J.; Morrison, J. H.

    2003-01-01

    An approach is presented to treat computational aerodynamics as a process, subject to the fundamental quality assurance principles of process control and process improvement. We consider several aspects affecting uncertainty for the computational aerodynamic process and present a set of stages to determine the level of management required to meet risk assumptions desired by the customer of the predictions.